10th Annual Systems Engineering Conference: Volume 2 Wednesday
2007-10-25
Self-Healing: detect hardware/software failures and reconfigure to permit continued operations. Self-Protecting: detect internal/external attacks and protect its resources from exploitation. Self-Optimizing: detect sub-optimal behaviors and intelligently optimize resource performance. Weapon/platform acoustics topics: self-noise, radiated noise, beam forming, pulse types; submarines, surface ships, and platform sensors.
Nelson, Kurtis; Steinwand, Daniel R.
2015-01-01
Annual disturbance maps are produced by the LANDFIRE program across the conterminous United States (CONUS). Existing LANDFIRE disturbance data from 1999 to 2010 are available and current efforts will produce disturbance data through 2012. A tiling and compositing approach was developed to produce bi-annual images optimized for change detection. A tiled grid of 10,000 × 10,000 30 m pixels was defined for CONUS and adjusted to consolidate smaller tiles along national borders, resulting in 98 non-overlapping tiles. Data from Landsat-5, -7, and -8 were re-projected to the tile extents, masked to remove clouds, shadows, water, and snow/ice, then composited using a cosine similarity approach. The resultant images were used in a change detection algorithm to determine areas of vegetation change. This approach enabled more efficient processing compared to using single Landsat scenes, by taking advantage of overlap between adjacent paths, and allowed an automated system to be developed for the entire process.
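The compositing step lends itself to a short sketch. Below, a per-pixel composite picks the observation whose spectrum is most similar (by cosine similarity) to the per-pixel median spectrum across dates; this particular reading of "cosine similarity compositing", and the array layout, are illustrative assumptions, not LANDFIRE's actual implementation.

```python
import numpy as np

def cosine_composite(stack):
    """Pick, per pixel, the observation most similar to the per-pixel
    median spectrum (cosine similarity). `stack` has shape
    (n_dates, n_bands, n_pixels); NaNs mark masked observations
    (clouds, shadows, water, snow/ice).

    Illustrative reading of 'cosine similarity' compositing, not the
    LANDFIRE program's actual code.
    """
    median = np.nanmedian(stack, axis=0)              # (n_bands, n_pixels)
    # Cosine similarity of each date's spectrum to the median spectrum.
    dots = np.nansum(stack * median[None], axis=1)    # (n_dates, n_pixels)
    norms = (np.sqrt(np.nansum(stack ** 2, axis=1)) *
             np.sqrt(np.nansum(median ** 2, axis=0))[None])
    sim = np.where(norms > 0, dots / np.where(norms == 0, 1, norms), -np.inf)
    sim = np.where(np.isnan(stack).any(axis=1), -np.inf, sim)  # skip masked
    best = np.argmax(sim, axis=0)                     # (n_pixels,)
    return np.take_along_axis(stack, best[None, None, :], axis=0)[0]
```

For a cloud-masked date, every band is NaN, so that date is excluded and the composite falls back to the remaining clear observations.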
Optimizing Precipitation Thresholds for Best Correlation Between Dry Lightning and Wildfires
NASA Astrophysics Data System (ADS)
Vant-Hull, Brian; Thompson, Tollisha; Koshak, William
2018-03-01
This work examines how to adjust the definition of "dry lightning" in order to optimize the correlation between dry lightning flash count and the climatology of large (>400 km2) lightning-ignited wildfires over the contiguous United States (CONUS). The National Lightning Detection Network™ and National Centers for Environmental Prediction Stage IV radar-based, gauge-adjusted precipitation data are used to form climatic data sets. For a 13 year analysis period over CONUS, a correlation of 0.88 is found between annual totals of wildfires and dry lightning. This optimal correlation is found by defining dry lightning as follows: on a 0.1° hourly grid, a precipitation threshold of no more than 0.3 mm may accumulate during any hour over a period of 3-4 days preceding the flash. Regional optimized definitions vary. When annual totals are analyzed as done here, no clear advantage is found by weighting positive polarity cloud-to-ground (+CG) lightning differently than -CG lightning. The high variability of dry lightning relative to the precipitation and lightning from which it is derived suggests it would be an independent and useful climate indicator.
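The optimized definition translates directly into a per-flash test. The sketch below assumes a 1-D hourly precipitation series at the flash's 0.1° grid cell and an 84 h lookback (the midpoint of the 3-4 day window); the function name and the exact window length are illustrative choices, not part of the study.

```python
import numpy as np

DRY_THRESHOLD_MM = 0.3   # per-hour accumulation threshold (from the abstract)
LOOKBACK_HOURS = 84      # midpoint of the 3-4 day window; an assumption here

def is_dry_flash(precip_hourly, t_hour):
    """precip_hourly: 1-D hourly precipitation (mm) at the flash's grid
    cell; t_hour: index of the flash hour. A flash is 'dry' if no single
    hour in the lookback window accumulated more than the threshold."""
    start = max(0, t_hour - LOOKBACK_HOURS)
    window = precip_hourly[start:t_hour]
    return bool(np.all(window <= DRY_THRESHOLD_MM))
```

Annual dry-lightning totals would then be sums of this flag over all detected flashes, which is the quantity correlated with wildfire counts.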
Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.
1985-01-01
Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.
Effect of inter- and intra-annual thermohaline variability on acoustic propagation
NASA Astrophysics Data System (ADS)
Chu, Peter C.; McDonald, Colleen M.; Kucukosmanoglu, Murat; Judono, Albert; Margolina, Tetyana; Fan, Chenwu
2017-05-01
This paper answers the question "How can inter- and intra-annual variability in the ocean be leveraged by the submarine force?" by quantifying inter- and intra-annual variability in (T, S) fields and, in turn, in underwater acoustic characteristics such as transmission loss, signal excess, and range of detection. The Navy's Generalized Digital Environmental Model (GDEM) comprises climatological monthly mean data and represents the mean annual variability. An optimal spectral decomposition method is used to produce a synoptic monthly gridded (SMG) (T, S) dataset for the world oceans with 1° × 1° horizontal resolution, 28 vertical levels (surface to 3,000 m depth), and a monthly time increment from January 1945 to December 2014, now available at the NOAA/NCEI website: http://data.nodc.noaa.gov/cgibin/iso?id=gov.noaa.nodc:0140938. The sound velocity decreases from 1945 to 1975 and increases afterwards due to global climate change. The effect of inter- and intra-annual (T, S) variability on acoustic propagation in the Yellow Sea is investigated using a well-developed acoustic model (Bellhop) at frequencies from 3.5 kHz to 5 kHz, with sound velocity profiles (SVPs) calculated from the GDEM and SMG datasets, various bottom types (silty clay, fine sand, gravelly mud, sandy mud, and cobble or gravel) from NAVOCEANO's High Frequency Environmental Algorithms (HFEVA), and various source and receiver depths. Acoustic propagation ranges are extended drastically by the inter-annual variability in comparison with the climatological SVP (from GDEM). A submarine's vulnerability to detection as its depth varies, and its avoidance of short acoustic ranges due to inter-annual variability, are also discussed.
Kang, Seongmin; Cha, Jae Hyung; Hong, Yoon-Jung; Lee, Daekyeom; Kim, Ki-Hyun; Jeon, Eui-Chan
2018-01-01
This study estimates the optimum sampling cycle for biomass fraction using a statistical method. More than ten samples were collected from each of three municipal solid waste (MSW) incineration facilities between June 2013 and March 2015, and the biomass fraction was analyzed. The analysis data were grouped into monthly, quarterly, semi-annual, and annual intervals, and the optimum sampling cycle for the detection of the biomass fraction was estimated. The biomass fraction data did not show a normal distribution; therefore, the non-parametric Kruskal-Wallis test was applied to compare the average values for each sample group. The Kruskal-Wallis test results showed that the average monthly, quarterly, semi-annual, and annual values for all three MSW incineration facilities were equal. Therefore, the biomass fraction at the MSW incineration facilities should be calculated on a yearly cycle, the longest of the temporal cycles tested.
NASA Astrophysics Data System (ADS)
Schneider, M.; Hase, F.; Blumenstock, T.
2006-10-01
We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise to signal ratio of 15% and middle/upper tropospheric ratios with a noise to signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and to introduce an inter-species constraint. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profile, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.
2015-09-30
DISTRIBUTION STATEMENT A: Distribution approved for public release; distribution is unlimited. NPS-NRL-Rice-UIUC Collaboration on Navy Atmosphere... portability. There is still a gap in the OCCA support for Fortran programmers who do not have accelerator experience. Activities at Rice/Virginia Tech are... for automated data movement and for kernel optimization using source code analysis and run-time detective work. In this quarter the Rice/Virginia
Climate change and the detection of trends in annual runoff
McCabe, G.J.; Wolock, D.M.
1997-01-01
This study examines the statistical likelihood of detecting a trend in annual runoff given an assumed change in mean annual runoff, the underlying year-to-year variability in runoff, and serial correlation of annual runoff. Means, standard deviations, and lag-1 serial correlations of annual runoff were computed for 585 stream gages in the conterminous United States, and these statistics were used to compute the probability of detecting a prescribed trend in annual runoff. Assuming a linear 20% change in mean annual runoff over a 100 yr period and a significance level of 95%, the average probability of detecting a significant trend was 28% among the 585 stream gages. The largest probability of detecting a trend was in the northwestern U.S., the Great Lakes region, the northeastern U.S., the Appalachian Mountains, and parts of the northern Rocky Mountains. The smallest probability of trend detection was in the central and southwestern U.S., and in Florida. Low probabilities of trend detection were associated with low ratios of mean annual runoff to the standard deviation of annual runoff and with high lag-1 serial correlation in the data.
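The detection probabilities described above can be approximated by Monte Carlo. The sketch below imposes the study's scenario, a linear 20% change in mean annual runoff over 100 years, on AR(1) noise and counts how often an OLS slope test rejects at the 5% level (using the normal critical value 1.96 in place of the t quantile, fine at n = 100). The study's actual computation was not necessarily done this way, so treat this as a stand-in illustrating why high variability and high lag-1 correlation depress detection probability.

```python
import numpy as np

def detection_probability(mean, sd, rho, n_years=100, change=0.20,
                          n_sims=1000, seed=0):
    """Monte Carlo estimate of the probability that an OLS slope test
    detects a linear `change`*mean trend over `n_years` in AR(1) noise
    with marginal std `sd` and lag-1 correlation `rho` (two-sided test
    at alpha = 0.05)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_years)
    trend = mean + (change * mean) * t / (n_years - 1)
    innov_sd = sd * np.sqrt(1 - rho ** 2)
    detected = 0
    for _ in range(n_sims):
        noise = np.empty(n_years)
        noise[0] = rng.normal(0, sd)
        for k in range(1, n_years):
            noise[k] = rho * noise[k - 1] + rng.normal(0, innov_sd)
        y = trend + noise
        # OLS slope and its standard error
        tc = t - t.mean()
        slope = (tc @ (y - y.mean())) / (tc @ tc)
        resid = y - y.mean() - slope * tc
        se = np.sqrt((resid @ resid) / (n_years - 2) / (tc @ tc))
        detected += abs(slope / se) > 1.96
    return detected / n_sims
```

Raising the noise-to-signal ratio (sd relative to mean) or the lag-1 correlation lowers the returned probability, matching the paper's geographic pattern.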
Wang, Hongguang
2018-01-01
Annual power load forecasting is not only the premise of formulating reasonable macro power planning, but also an important guarantee for the safe and economic operation of a power system. Given the characteristics of annual power load forecasting, the grey model GM(1,1) is widely applied. Introducing a buffer operator into GM(1,1) to pre-process the historical annual power load data is one approach to improving forecasting accuracy. To solve the problem of the non-adjustable action intensity of the traditional weakening buffer operator, a variable-weight weakening buffer operator (VWWBO) and background value optimization (BVO) are used to dynamically pre-process the historical annual power load data, and a VWWBO-BVO-based GM(1,1) is proposed. To find the optimal values of the variable-weight buffer coefficient and the background value weight generating coefficient of the proposed model, grey relational analysis (GRA) and an improved gravitational search algorithm (IGSA) are integrated into a GRA-IGSA algorithm that maximizes the grey relational degree between the simulated and actual value sequences. Through the adjustable action intensity of the buffer operator, the proposed model optimized by the GRA-IGSA integration algorithm obtains better forecasting accuracy, as demonstrated by the case studies, and can provide an optimized solution for annual power load forecasting. PMID:29768450
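The GM(1,1) core that the paper builds on can be sketched compactly. The weight `w` below plays the role of the background-value coefficient that the paper optimizes (w = 0.5 recovers the textbook model); the VWWBO pre-processing and the GRA-IGSA search are omitted, so this is only the baseline forecaster.

```python
import numpy as np

def gm11_forecast(x0, horizon=3, w=0.5):
    """Classic GM(1,1) grey forecast. `x0` is the raw annual series;
    `w` is the background-value weight (0.5 in the textbook model; the
    paper tunes it, here it is just a free parameter). Returns fitted
    values plus `horizon` forecasts on the original scale."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                               # accumulated series
    z1 = w * x1[1:] + (1 - w) * x1[:-1]              # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0] # grey parameters
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(np.concatenate([[0.0], x1_hat]))  # back to x0 scale
```

On a near-exponential load series the model fits almost exactly, which is why buffer operators are needed when real data deviate from smooth growth.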
The Optimal Forest Rotation: A Discussion and Annotated Bibliography
David H. Newman
1988-01-01
The literature contains six different criteria of the optimal forest rotation: (1) maximum single-rotation physical yield, (2) maximum single-rotation annual yield, (3) maximum single-rotation discounted net revenues, (4) maximum discounted net revenues from an infinite series of rotations, (5) maximum annual net revenues, and (6) maximum internal rate of return. First...
Optimizing model: insemination, replacement, seasonal production, and cash flow.
DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A
1992-03-01
Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of 151,200 cow states were used to determine the optimal policy. States were specified by class of parity (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). The methodology optimized decisions based on the net present value of an individual cow and all replacements over a 20-yr decision horizon. The length of the decision horizon was chosen to ensure that optimal policies were determined for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine post-optimization steady-state herd structure, milk production, replacement, feed inputs and costs, and the resulting cash flow on a calendar-month and annual basis if optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, values of dietary TDN and CP per kilogram, and discount rate. Stochastic elements included conception (and, thus, subsequent freshening), cow milk production level within herd, and survival. Validation of the optimized solutions was by a separate simulation model, which implemented the policies on a simulated herd and also described herd dynamics during the transition to the optimized structure.
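The keep-vs-replace structure of such a dynamic program can be shown in miniature. The sketch below runs value iteration on a toy chain of production states with invented numbers; it is a stand-in for the 151,200-state model, not its actual data or state definition.

```python
import numpy as np

def optimal_replacement(revenue, p_stay, replace_cost, beta=0.92, iters=500):
    """Tiny value-iteration sketch of the keep-vs-replace decision: each
    state is a production level, revenue[s] its annual net revenue, and
    p_stay the state-transition matrix if the cow is kept. Replacing
    pays replace_cost for a fresh animal starting in state 0.
    All numbers here are illustrative, not from the paper."""
    n = len(revenue)
    V = np.zeros(n)
    for _ in range(iters):
        keep = revenue + beta * (p_stay @ V)   # keep the current cow
        repl = -replace_cost + V[0]            # replace: restart in state 0
        V = np.maximum(keep, repl)
    return V, keep < repl                      # True -> replace is optimal
```

In the toy chain below, productivity drifts downward if the cow is kept, so the optimal policy keeps high producers and replaces the lowest state, the same qualitative shape the full model produces.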
Addressing forecast uncertainty impact on CSP annual performance
NASA Astrophysics Data System (ADS)
Ferretti, Fabio; Hogendijk, Christopher; Aga, Vipluv; Ehrsam, Andreas
2017-06-01
This work analyzes the impact of weather forecast uncertainty on the annual performance of a Concentrated Solar Power (CSP) plant. Forecast time series has been produced by a commercial forecast provider using the technique of hindcasting for the full year 2011 in hourly resolution for Ouarzazate, Morocco. Impact of forecast uncertainty has been measured on three case studies, representing typical tariff schemes observed in recent CSP projects plus a spot market price scenario. The analysis has been carried out using an annual performance model and a standard dispatch optimization algorithm based on dynamic programming. The dispatch optimizer has been demonstrated to be a key requisite to maximize the annual revenues depending on the price scenario, harvesting the maximum potential out of the CSP plant. Forecasting uncertainty affects the revenue enhancement outcome of a dispatch optimizer depending on the error level and the price function. Results show that forecasting accuracy of direct solar irradiance (DNI) is important to make best use of an optimized dispatch but also that a higher number of calculation updates can partially compensate this uncertainty. Improvement in revenues can be significant depending on the price profile and the optimal operation strategy. Pathways to achieve better performance are presented by having more updates both by repeatedly generating new optimized trajectories but also more often updating weather forecasts. This study shows the importance of working on DNI weather forecasting for revenue enhancement as well as selecting weather services that can provide multiple updates a day and probabilistic forecast information.
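The dispatch optimizer's role can be illustrated with a deliberately simplified rule: given a day-ahead price profile and a fixed stock of stored thermal energy, sell at the highest-price hours. This greedy rule is optimal only when storage can hold the whole day's collection; the study uses dynamic programming precisely because real storage, forecast, and tariff constraints break that assumption, so the code below is a toy, not the paper's algorithm.

```python
import numpy as np

def dispatch(prices, energy_hours, max_power=1.0):
    """Toy CSP dispatch: with `energy_hours` hours' worth of stored
    thermal energy, run the turbine at `max_power` during the
    highest-price hours of the (forecast) price profile."""
    order = np.argsort(prices)[::-1]          # hours, best price first
    plan = np.zeros(len(prices))
    plan[order[:energy_hours]] = max_power
    return plan
```

Forecast uncertainty enters because `prices` (or, for a DNI-driven plant, the collectible energy) is only a forecast; re-running the optimizer as updated forecasts arrive is what the study shows can partially compensate for that uncertainty.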
NASA Astrophysics Data System (ADS)
Shaltout, Abdallah A.; Hassan, Salwa K.; Karydas, Andreas G.; Zaki, Z. I.; Mostafa, Nasser Y.; Kregsamer, Peter; Wobrauschek, Peter; Streli, Christina
2018-07-01
Fine aerosol particles with aerodynamic diameter equal or <2.5 μm (PM2.5) have been collected from industrial and residential areas of Greater Cairo, Egypt during two different seasons namely; autumn 2014 and winter 2014/2015. Energy dispersive X-ray fluorescence (EDXRF) analysis utilizing polarization geometry and three different secondary targets (CaF2, Ge, and Mo) was employed for the quantitative analysis of eighteen (18) elements in PM2.5 samples. Light elements like Na and Mg was possible to be quantified, whereas detection limits in the range of few ng m-3 were attained for the most of the detected elements. Although, the average mass concentrations of the PM2.5 collected from the residential area (27 ± 7 μg m-3) is close to the annual mean limit value, a significant number of the collected samples (33%) presented higher average mass concentrations. For the industrial location, the average mass concentration is equal to 55 ± 19 μg m-3, exceeded twofold the annual mean limit value of the European Commission. Remarkably high elemental concentrations were determined for the most of the detected elements from the industrial area samples, clearly indicating the significant influence of anthropogenic activities. The present optimized EDXRF analysis offered significantly improved analytical range and limits of detection with respect to previous similar studies, thus enhancing our knowledge and understanding on the contribution of different pollution sources.
Game Theory and Risk-Based Levee System Design
NASA Astrophysics Data System (ADS)
Hui, R.; Lund, J. R.; Madani, K.
2014-12-01
Risk-based analysis has been developed for economically efficient levee design. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, landowners on each riverbank tend to optimize their levees independently with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the landowners on each riverbank develop their design strategies using risk-based economic optimization. For each landowner, the annual expected total cost includes the expected annual damage cost and the annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system, which yields the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in reversible and irreversible decision-making modes, the cost of decision-making myopia is calculated to underline the significance of considering the externalities and evolution paths of dynamic water resource problems for optimal decision making.
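The gap between the Nash equilibrium and the social planner's solution can be reproduced in a stylized model. Below, each owner's annual expected cost is an annualized construction cost plus an expected damage term that falls with their own levee height but rises with the neighbor's (the risk-transfer externality). The cost function and all numbers are invented for illustration; only the game structure follows the paper.

```python
import numpy as np

# Stylized two-landowner levee game (illustrative numbers, not the paper's).
D, c, k = 100.0, 1.0, 0.3   # damage scale, annual cost per unit height,
                            # fraction of risk transferred by the neighbor

def cost(h_own, h_other):
    # annualized construction cost + expected annual damage; raising the
    # neighbor's levee pushes flood risk back onto this bank
    return c * h_own + D * np.exp(-(h_own - k * h_other))

heights = np.linspace(0.0, 15.0, 301)

def nash_equilibrium():
    h1 = h2 = 0.0
    for _ in range(100):                       # best-response iteration
        h1 = heights[np.argmin(cost(heights, h2))]
        h2 = heights[np.argmin(cost(heights, h1))]
    return h1, h2

def social_optimum():
    H1, H2 = np.meshgrid(heights, heights, indexing="ij")
    total = cost(H1, H2) + cost(H2, H1)        # planner minimizes system cost
    i, j = np.unravel_index(np.argmin(total), total.shape)
    return heights[i], heights[j]
```

In this toy, each owner over-builds in response to the other at equilibrium, and the system-wide cost exceeds the planner's minimum, which is exactly the inefficiency the cooperative analysis is meant to repair.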
A hybrid multi-objective evolutionary algorithm for wind-turbine blade optimization
NASA Astrophysics Data System (ADS)
Sessarego, M.; Dixon, K. R.; Rival, D. E.; Wood, D. H.
2015-08-01
A concurrent-hybrid non-dominated sorting genetic algorithm (hybrid NSGA-II) has been developed and applied to the simultaneous optimization of the annual energy production, flapwise root-bending moment and mass of the NREL 5 MW wind-turbine blade. By hybridizing a multi-objective evolutionary algorithm (MOEA) with gradient-based local search, it is believed that the optimal set of blade designs could be achieved in lower computational cost than for a conventional MOEA. To measure the convergence between the hybrid and non-hybrid NSGA-II on a wind-turbine blade optimization problem, a computationally intensive case was performed using the non-hybrid NSGA-II. From this particular case, a three-dimensional surface representing the optimal trade-off between the annual energy production, flapwise root-bending moment and blade mass was achieved. The inclusion of local gradients in the blade optimization, however, shows no improvement in the convergence for this three-objective problem.
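The ranking step that gives NSGA-II its name, sorting candidate designs into successive Pareto fronts, is small enough to sketch in full. The version below assumes minimization of all objectives (e.g. negative annual energy production, root-bending moment, mass); the hybridization with gradient-based local search is not shown.

```python
def non_dominated_sort(points):
    """Sort objective vectors (minimization) into Pareto fronts, the core
    ranking step of NSGA-II. Returns a list of fronts, each a list of
    indices into `points`; front 0 is the non-dominated set."""
    n = len(points)
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    dominated_by = [0] * n                 # how many points dominate i
    dominating = [[] for _ in range(n)]    # points that i dominates
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominating[i].append(j)
            elif dominates(points[j], points[i]):
                dominated_by[i] += 1
    fronts = [[i for i in range(n) if dominated_by[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominating[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]
```

For the three-objective blade problem, front 0 of the final population approximates the three-dimensional trade-off surface the abstract describes.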
Shieh, Yiwey; Eklund, Martin; Madlensky, Lisa; Sawyer, Sarah D; Thompson, Carlie K; Stover Fiscalini, Allison; Ziv, Elad; Van't Veer, Laura J; Esserman, Laura J; Tice, Jeffrey A
2017-01-01
Ongoing controversy over the optimal approach to breast cancer screening has led to discordant professional society recommendations, particularly in women age 40 to 49 years. One potential solution is risk-based screening, where decisions around the starting age, stopping age, frequency, and modality of screening are based on individual risk to maximize the early detection of aggressive cancers and minimize the harms of screening through optimal resource utilization. We present a novel approach to risk-based screening that integrates clinical risk factors, breast density, a polygenic risk score representing the cumulative effects of genetic variants, and sequencing for moderate- and high-penetrance germline mutations. We demonstrate how thresholds of absolute risk estimates generated by our prediction tools can be used to stratify women into different screening strategies (biennial mammography, annual mammography, annual mammography with adjunctive magnetic resonance imaging, defer screening at this time) while informing the starting age of screening for women age 40 to 49 years. Our risk thresholds and corresponding screening strategies are based on current evidence but need to be tested in clinical trials. The Women Informed to Screen Depending On Measures of risk (WISDOM) Study, a pragmatic, preference-tolerant randomized controlled trial of annual vs personalized screening, will study our proposed approach. WISDOM will evaluate the efficacy, safety, and acceptability of risk-based screening beginning in the fall of 2016. The adaptive design of this trial allows continued refinement of our risk thresholds as the trial progresses, and we discuss areas where we anticipate emerging evidence will impact our approach.
Detection theory for accurate and non-invasive skin cancer diagnosis using dynamic thermal imaging
Godoy, Sebastián E.; Hayat, Majeed M.; Ramirez, David A.; Myers, Stephen A.; Padilla, R. Steven; Krishna, Sanjay
2017-01-01
Skin cancer is the most common cancer in the United States with over 3.5M annual cases. Presently, visual inspection by a dermatologist has good sensitivity (> 90%) but poor specificity (< 10%), especially for melanoma, which leads to a high number of unnecessary biopsies. Here we use dynamic thermal imaging (DTI) to demonstrate a rapid, accurate and non-invasive imaging system for detection of skin cancer. In DTI, the lesion is cooled down and the thermal recovery is recorded using infrared imaging. The thermal recovery curves of the suspected lesions are then utilized in the context of continuous-time detection theory in order to define an optimal statistical decision rule such that the sensitivity of the algorithm is guaranteed to be at a maximum for every prescribed false-alarm probability. The proposed methodology was tested in a pilot study including 140 human subjects demonstrating a sensitivity in excess of 99% for a prescribed specificity in excess of 99% for detection of skin cancer. To the best of our knowledge, this is the highest reported accuracy for any non-invasive skin cancer diagnosis method. PMID:28736673
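The decision rule described, maximum sensitivity at every prescribed false-alarm probability, is the Neyman-Pearson criterion, and its empirical flavor can be sketched simply: for a monotone malignancy score derived from the recovery curves, set the threshold at the appropriate quantile of the benign-lesion scores. This is only the idea in miniature; the paper's detector operates on continuous-time recovery curves, and the Gaussian scores below are invented.

```python
import numpy as np

def np_threshold(benign_scores, prescribed_fa):
    """Empirical Neyman-Pearson-style rule: choose the decision
    threshold as the (1 - prescribed_fa) quantile of scores from benign
    lesions, so the false-alarm rate on that set does not exceed
    `prescribed_fa`; for a monotone score this maximizes sensitivity
    among rules meeting that constraint."""
    return float(np.quantile(benign_scores, 1.0 - prescribed_fa))

def classify(score, threshold):
    return score > threshold   # True -> call the lesion malignant
```

The better the score separates the two populations, the higher the sensitivity achievable at a fixed false-alarm budget, which is how the reported >99%/>99% operating point should be read.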
40 CFR 91.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... periodic optimization of detector response. Prior to introduction into service and at least annually... nitrogen. (2) One of the following procedures is required for FID or HFID optimization: (i) The procedure outlined in Society of Automotive Engineers (SAE) paper No. 770141, “Optimization of Flame Ionization...
Test and treat DC: forecasting the impact of a comprehensive HIV strategy in Washington DC.
Walensky, Rochelle P; Paltiel, A David; Losina, Elena; Morris, Bethany L; Scott, Callie A; Rhode, Erin R; Seage, George R; Freedberg, Kenneth A
2010-08-15
The United States and international agencies have signaled their commitment to containing the human immunodeficiency virus (HIV) epidemic via early case identification and linkage to antiretroviral therapy (ART) immediately at diagnosis. We forecast outcomes of this approach if implemented in Washington DC. Using a mathematical model of HIV case detection and treatment, we evaluated combinations of HIV screening and ART initiation strategies. We define current practice as no regular screening program and ART at CD4 counts < or = 350 cells/microL, and we define test and treat as annual screening and administration of ART at diagnosis. Outcomes include life expectancy of HIV-infected persons and changes in the population time with transmissible HIV RNA levels. Data, largely from Washington DC, include undiagnosed HIV prevalence of 0.6%, annual incidence of 0.13%, 31% rate of test offer, 60% rate of acceptance, and 50% linkage to care. Input parameters, including optimized ART efficacy, are varied in sensitivity analyses. Projected life expectancies, from an initial mean age of 41 years, are 23.9, 25.0, and 25.6 years for current practice, test and treat, and test and treat with optimized ART, respectively. Compared with current practice, test and treat leads to a 14.7% reduction in time spent with transmissible HIV RNA level in the next 5 years; test and treat with optimized ART results in a 27.3% reduction. An expanded HIV test and treat program in Washington DC will increase life expectancy of HIV-infected patients but will have a modest impact on HIV transmission over the next 5 years and is unlikely to halt the HIV epidemic.
2016 Consequence Management Advisory Division's (CMAD) Annual Report
CMAD annual report for 2016 which covers activities such as radiation task force leaders annual training, national criminal enforcement response team annual training, field technology demonstrations, and a new method to detect perfluorinated compounds.
Optimal allocation of HIV prevention funds for state health departments.
Yaylali, Emine; Farnham, Paul G; Cohen, Stacy; Purcell, David W; Hauck, Heather; Sansom, Stephanie L
2018-01-01
To estimate the optimal allocation of Centers for Disease Control and Prevention (CDC) HIV prevention funds for health departments in 52 jurisdictions, incorporating Health Resources and Services Administration (HRSA) Ryan White HIV/AIDS Program funds, to improve outcomes along the HIV care continuum and prevent infections. Using surveillance data from 2010 to 2012 and budgetary data from 2012, we divided the 52 health departments into 5 groups varying by number of persons living with diagnosed HIV (PLWDH), median annual CDC HIV prevention budget, and median annual HRSA expenditures supporting linkage to care, retention in care, and adherence to antiretroviral therapy. Using an optimization and a Bernoulli process model, we solved for the optimal CDC prevention budget allocation for each health department group. The optimal allocation distributed the funds across prevention interventions and populations at risk for HIV to prevent the greatest number of new HIV cases annually. Both the HIV prevention interventions funded by the optimal allocation of CDC HIV prevention funds and the proportions of the budget allocated were similar across health department groups, particularly those representing the large majority of PLWDH. Consistently funded interventions included testing, partner services and linkage to care and interventions for men who have sex with men (MSM). Sensitivity analyses showed that the optimal allocation shifted when there were differences in transmission category proportions and progress along the HIV care continuum. The robustness of the results suggests that most health departments can use these analyses to guide the investment of CDC HIV prevention funds into strategies to prevent the most new cases of HIV.
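The shape of such a budget optimization can be illustrated with a greedy sketch: allocate the budget in small increments, always to the intervention with the highest marginal infections prevented per dollar. The saturating prevention curve below is a stand-in for the paper's Bernoulli transmission model, and all numbers are hypothetical.

```python
import numpy as np

def allocate_budget(budget, max_prevented, half_sat, step=1.0):
    """Greedy allocation across interventions with diminishing returns.
    Infections prevented by spending s on intervention i is modeled as
    max_prevented[i] * s / (s + half_sat[i]) -- an illustrative stand-in
    for the paper's optimization and Bernoulli process model."""
    k = len(max_prevented)
    spend = np.zeros(k)
    prevented = lambda s, i: max_prevented[i] * s / (s + half_sat[i])
    for _ in range(int(budget / step)):
        # marginal benefit of the next `step` dollars for each intervention
        gains = [prevented(spend[i] + step, i) - prevented(spend[i], i)
                 for i in range(k)]
        spend[int(np.argmax(gains))] += step
    return spend
```

With diminishing returns, the optimum funds several interventions rather than only the single most effective one, consistent with the paper's finding that a similar mix (testing, partner services, linkage to care, MSM interventions) is funded across health department groups.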
40 CFR 90.316 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Initial and periodic optimization of detector response. Prior to initial use and at least annually... nitrogen. (2) Use of one of the following procedures is required for FID or HFID optimization: (i) The procedure outlined in Society of Automotive Engineers (SAE) paper No. 770141, “Optimization of a Flame...
Cost-effectiveness of early detection of breast cancer in Catalonia (Spain)
2011-01-01
Background: Breast cancer (BC) causes more deaths than any other cancer among women in Catalonia. Early detection has contributed to the observed decline in BC mortality. However, there is debate on the optimal screening strategy. We performed an economic evaluation of 20 screening strategies taking into account the cost over time of screening and subsequent medical costs, including diagnostic confirmation, initial treatment, follow-up and advanced care. Methods: We used a probabilistic model to estimate the effect and costs over time of each scenario. The effect was measured as years of life (YL), quality-adjusted life years (QALY), and lives extended (LE). Costs of screening and treatment were obtained from the Early Detection Program and hospital databases of the IMAS-Hospital del Mar in Barcelona. The incremental cost-effectiveness ratio (ICER) was used to compare the relative costs and outcomes of different scenarios. Results: Strategies that start at ages 40 or 45 and end at 69 predominate when the effect is measured as YL or QALYs. Biennial strategies 50-69, 45-69 or annual 45-69, 40-69 and 40-74 were selected as cost-effective for both effect measures (YL or QALYs). The ICER increases considerably when moving from biennial to annual scenarios. Moving from no screening to biennial 50-69 years represented an ICER of €4,469 per QALY. Conclusions: A reduced number of screening strategies have been selected for consideration by researchers, decision makers and policy planners. Mathematical models are useful to assess the impact and costs of BC screening in a specific geographical area. PMID:21605383
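The ICER used to rank these scenarios is a one-line computation: the extra cost of a strategy over a comparator divided by the extra health effect. The cost and QALY figures in the example below are hypothetical; only the resulting €4,469 per QALY figure echoes the abstract.

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (e.g. euros per QALY) of one strategy over another."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)
```

For instance, a strategy costing €4,469 more per person while adding one QALY over no screening has an ICER of €4,469 per QALY, the abstract's figure for moving to biennial 50-69 screening.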
Screening for prostate cancer: estimating the magnitude of overdetection
McGregor, M; Hanley, J A; Boivin, J F; McLean, R G
1998-01-01
BACKGROUND: No randomized controlled trial of prostate cancer screening has been reported and none is likely to be completed in the near future. In the absence of direct evidence, the decision to screen must therefore be based on estimates of benefits and risks. The main risk of screening is overdetection--the detection of cancer that, if left untreated, would not cause death. In this study the authors estimate the level of overdetection that might result from annual screening of men aged 50-70. METHODS: The annual rate of lethal screen-detectable cancer (detectable cancer that would prove fatal before age 85 if left untreated) was calculated from the observed prostate cancer mortality rate in Quebec; the annual rate of all cases of screen-detectable prostate cancer was calculated from 2 recent screening studies. RESULTS: The annual rate of lethal screen-detectable prostate cancer was estimated to be 1.3 per 1000 men. The annual rate of all cases of screen-detectable prostate cancer was estimated to be 8.0 per 1000 men. The estimated case-fatality rate among men up to 85 years of age was 16% (1.3/8.0) (sensitivity analysis 13% to 22%). INTERPRETATION: Of every 100 men with screen-detected prostate cancer, only 16 on average (13 to 22) could have their lives extended by surgery, since the prostate cancer would not cause death before age 85 in the remaining 84 (78 to 87). PMID:9861205
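The overdetection estimate above reduces to a ratio of the two reported annual rates; a small sketch reproducing that arithmetic:

```python
# Overdetection share as estimated in the abstract: the annual rate of
# lethal screen-detectable cancer divided by the annual rate of all
# screen-detectable cancer. Rates are per 1000 men, as reported.

lethal_rate = 1.3   # lethal screen-detectable cancers per 1000 men per year
all_rate = 8.0      # all screen-detectable cancers per 1000 men per year

case_fatality = lethal_rate / all_rate   # fraction who could benefit
overdetection = 1.0 - case_fatality      # fraction overdetected

print(f"could have lives extended: {case_fatality:.0%}")  # 16%
print(f"overdetected:              {overdetection:.0%}")  # 84%
```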
You, Jai Kyung; Song, Mi Kyung; Kim, Min Jung; Kim, Eun-Kyung; Moon, Hee Jung; Youk, Ji Hyun; Yoon, Jung Hyun; Park, Vivian Youngjean; Park, Seho; Kim, Seung Il; Park, Byeong-Woo
2018-07-01
The aim of the work described here was to evaluate whether surveillance with biannual ultrasound (US) plus annual mammography (biannual group) for women with a history of breast cancer surgery results in earlier detection or in the detection of smaller second cancers than annual US plus mammography (annual group). Additionally, we compared the prevalence of distant metastases or palpable second cancers between the biannual and annual groups. The institutional review board of our institution approved this retrospective study, and patient consent was waived. Between January 2011 and December 2012, we retrospectively reviewed the clinical and imaging follow-up of 3023 patients with mammographic and US surveillance after breast cancer surgery to assess second cancers detected by local surveillance (locoregional recurrence, contralateral breast cancer or distant metastasis). The biannual and annual groups were divided with respect to the mean surveillance interval and compared with respect to clinicopathologic findings. Multivariable logistic regression with propensity score methods was used to examine the effect of the type of surveillance on outcomes. As for the size of the second cancer, no difference was seen between the biannual and annual groups (12.8 ± 6.6 mm vs. 14.1 ± 7.1 mm, p = 0.461); neither was there a significant difference between the groups in the presence of symptoms at the time of diagnosis of the second cancer (17.0% [8/47] vs. 10% [2/20], p = 0.711). Regardless of detection by local surveillance, the prevalence of distant metastases did not differ between the two groups (1.1% [27/2370] vs. 1.0% [7/653], p = 0.88) on univariate or multivariate analysis. The results of our retrospective study indicate that second cancers detected by biannual US surveillance in patients with a history of breast cancer surgery are not smaller and do not occur earlier than those detected by annual US surveillance. 
However, a randomized controlled study is required to verify these results before they can be generalized to clinical practice. Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
Risk-based planning analysis for a single levee
NASA Astrophysics Data System (ADS)
Hui, Rui; Jachens, Elizabeth; Lund, Jay
2016-04-01
Traditional risk-based analysis for levee planning focuses primarily on overtopping failure. Although many levees fail before overtopping, few planning studies explicitly include intermediate geotechnical failures in flood risk analysis. This study develops a risk-based model for two simplified levee failure modes: overtopping failure and overall intermediate geotechnical failure from through-seepage, determined by the levee cross section represented by levee height and crown width. Overtopping failure is based only on water level and levee height, while through-seepage failure depends on many geotechnical factors as well, mathematically represented here as a function of levee crown width using levee fragility curves developed from professional judgment or analysis. These levee planning decisions are optimized to minimize the annual expected total cost, which sums expected (residual) annual flood damage and annualized construction costs. Applicability of this optimization approach to planning new levees or upgrading existing levees is demonstrated preliminarily for a levee on a small river protecting agricultural land, and a major levee on a large river protecting a more valuable urban area. Optimized results show higher likelihood of intermediate geotechnical failure than overtopping failure. The effects of uncertainty in levee fragility curves, economic damage potential, construction costs, and hydrology (changing climate) are explored. Optimal levee crown width is more sensitive to these uncertainties than height, while the derived general principles and guidelines for risk-based optimal levee planning remain the same.
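The trade-off described above, annualized construction cost versus expected annual flood damage from overtopping and through-seepage, can be sketched as a toy optimization over levee height and crown width. Every function and constant here is an illustrative assumption, not the authors' calibrated fragility or cost model:

```python
# Toy risk-based levee design: minimize annual expected total cost =
# annualized construction cost + failure probability * damage.
# All curves and numbers are invented for illustration.

import math

DAMAGE = 5_000_000.0   # assumed damage if the levee fails (USD)

def p_overtop(height_m):
    # Assumed annual overtopping probability, decreasing with height.
    return math.exp(-0.8 * height_m)

def p_seepage(width_m):
    # Assumed through-seepage fragility, decreasing with crown width.
    return 0.10 * math.exp(-0.3 * width_m)

def annualized_cost(height_m, width_m):
    # Assumed linearized, annualized construction cost.
    return 20_000.0 * height_m + 8_000.0 * width_m

def expected_total_cost(h, w):
    p_fail = p_overtop(h) + (1 - p_overtop(h)) * p_seepage(w)
    return annualized_cost(h, w) + p_fail * DAMAGE

# Coarse grid search over candidate designs (height 1-10 m, width 1-20 m).
best = min(
    ((h / 2, w / 2) for h in range(2, 21) for w in range(2, 41)),
    key=lambda hw: expected_total_cost(*hw),
)
print("least-cost (height, crown width):", best)
```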
Todd Trench, Elaine C.
2004-01-01
A time-series analysis approach developed by the U.S. Geological Survey was used to analyze trends in total phosphorus and evaluate optimal sampling designs for future trend detection, using long-term data for two water-quality monitoring stations on the Quinebaug River in eastern Connecticut. Trend-analysis results for selected periods of record during 1971–2001 indicate that concentrations of total phosphorus in the Quinebaug River have varied over time, but have decreased significantly since the 1970s and 1980s. Total phosphorus concentrations at both stations increased in the late 1990s and early 2000s, but were still substantially lower than historical levels. Drainage areas for both stations are primarily forested, but water quality at both stations is affected by point discharges from municipal wastewater-treatment facilities. Various designs with sampling frequencies ranging from 4 to 11 samples per year were compared to the trend-detection power of the monthly (12-sample) design to determine the most efficient configuration of months to sample for a given annual sampling frequency. Results from this evaluation indicate that the current (2004) 8-sample schedule for the two Quinebaug stations, with monthly sampling from May to September and bimonthly sampling for the remainder of the year, is not the most efficient 8-sample design for future detection of trends in total phosphorus. Optimal sampling schedules for the two stations differ, but in both cases, trend-detection power generally is greater among 8-sample designs that include monthly sampling in fall and winter. Sampling designs with fewer than 8 samples per year generally provide a low level of probability for detection of trends in total phosphorus. Managers may determine an acceptable level of probability for trend detection within the context of the multiple objectives of the state's water-quality management program and the scientific understanding of the watersheds in question. 
Managers may identify a threshold of probability for trend detection that is high enough to justify the agency's investment in the water-quality sampling program. Results from an analysis of optimal sampling designs can provide an important component of information for the decision-making process in which sampling schedules are periodically reviewed and revised. Results from the study described in this report and previous studies indicate that optimal sampling schedules for trend detection may differ substantially for different stations and constituents. A more comprehensive statewide evaluation of sampling schedules for key stations and constituents could provide useful information for any redesign of the schedule for water-quality monitoring in the Quinebaug River Basin and elsewhere in the state.
Frame, P S; Sawai, R; Bowen, W H; Meyerowitz, C
2000-02-01
The purpose of this article is to compare published evidence supporting procedures to prevent dental caries and periodontal disease, in low-risk patients, with the actual preventive recommendations of practicing dentists. Methods included (1) a survey questionnaire of general dentists practicing in western New York State concerning the preventive procedures they would recommend and at what intervals for low-risk children, young adults, and older adults; and (2) review of the published, English-language literature for evidence supporting preventive dental interventions. The majority of dentists surveyed recommended semiannual visits for visual examination and probing to detect caries (73% to 79%), and scaling and polishing to prevent periodontal disease (83% to 86%) for low-risk patients of all ages. Bite-wing radiographs were recommended for all age groups at annual or semiannual intervals. In-office fluoride applications were recommended for low-risk children at intervals of 6 to 12 months by 73% of dentists but were recommended for low-risk older persons by only 22% of dentists. Application of sealants to prevent pit and fissure caries was recommended for low-risk children by 22% of dentists. Literature review found no studies comparing different frequencies of dental examinations and bite-wing radiographs to determine the optimal screening interval in low-risk patients. Two studies of the effect of scaling and polishing on the prevention of periodontal disease found no benefit from more frequent than annual treatments. Although fluoride is clearly a major reason for the decline in the prevalence of dental caries, there are no studies of the incremental benefit of in-office fluoride treatments for low-risk patients exposed to fluoridated water and using fluoridated toothpaste. 
Comparative studies using outcome end points are needed to determine the optimal frequency of dental examinations and bite-wing radiographs for the early detection of caries, and of scaling and polishing to prevent periodontal disease in low-risk persons. There is no scientific evidence that dental examinations, including scaling and polishing, at 6-month intervals, as recommended by the dentists surveyed in this study, are superior to annual or less frequent examinations for low-risk populations. There is also no evidence that in-office fluoride applications offer incremental benefit over less costly methods of delivering fluoride for low-risk populations.
Young, Patrick. E.; Womeldorph, Craig M.; Johnson, Eric K.; Maykel, Justin A.; Brucher, Bjorn; Stojadinovic, Alex; Avital, Itzhak; Nissan, Aviram; Steele, Scott R.
2014-01-01
Despite advances in neoadjuvant and adjuvant therapy, attention to proper surgical technique, and improved pathological staging for both the primary and metastatic lesions, almost half of all colorectal cancer patients will develop recurrent disease. More concerning, this includes ~25% of patients with theoretically curable node-negative, non-metastatic Stage I and II disease. Given the annual incidence of colorectal cancer, approximately 150,000 new patients are candidates each year for follow-up surveillance. When combined with the greater population already enrolled in a surveillance protocol, this translates to a tremendous number of patients at risk for recurrence. It is therefore imperative that strategies aim for detection of recurrence as early as possible to allow initiation of treatment that may still result in cure. Yet, controversy exists regarding the optimal surveillance strategy (high-intensity vs. traditional), ideal testing regimen, and overall effectiveness. While benefits may involve earlier detection of recurrence, psychological welfare improvement, and greater overall survival, this must be weighed against the potential disadvantages including more invasive tests, higher rates of reoperation, and increased costs. In this review, we will examine the current options available and challenges surrounding colorectal cancer surveillance and early detection of recurrence. PMID:24790654
Optimization of A(2)O BNR processes using ASM and EAWAG Bio-P models: model performance.
El Shorbagy, Walid E; Radif, Nawras N; Droste, Ronald L
2013-12-01
This paper presents the performance of an optimization model for a biological nutrient removal (BNR) system using the anaerobic-anoxic-oxic (A(2)O) process. The formulated model simulates removal of organics, nitrogen, and phosphorus using a reduced International Water Association (IWA) Activated Sludge Model #3 (ASM3) model and a Swiss Federal Institute for Environmental Science and Technology (EAWAG) Bio-P module. Optimal sizing is attained considering capital and operational costs. Process performance is evaluated against the effect of influent conditions, effluent limits, and selected parameters of various optimal solutions, with the following results: an increase of influent temperature from 10 degrees C to 25 degrees C decreases the annual cost by about 8.5%; an increase of influent flow from 500 to 2500 m(3)/h triples the annual cost; the A(2)O BNR system is more sensitive to variations in influent ammonia than phosphorus concentration; and the maximum growth rate of autotrophic biomass was the most sensitive kinetic parameter in the optimization model.
Eicher, Véronique; Staerklé, Christian; Clémence, Alain
2014-10-01
Prior research on school dropout has often focused on stable person- and institution-level variables. In this research, we investigate longitudinally perceived stress and optimism as predictors of dropout intentions over a period of four years, and distinguish between stable and temporary predictors of dropout intentions. Findings based on a nationally representative sample of 16-20 year-olds in Switzerland (N = 4312) show that both average levels of stress and optimism as well as annually varying levels of stress and optimism affect dropout intentions. Additionally, results show that optimism buffers the negative impact of annually varying stress (i.e., years with more stress than usual), but not of stable levels of stress (i.e., stress over four years). The implications of the results are discussed according to a dynamic and preventive approach of school dropout. Copyright © 2014 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Maye, Jessica, Ed.; Miyashita, Mizuki, Ed.
This document contains the full texts of six papers that were presented at the Southwest workshop on optimality theory. Papers include the following: "Shuswap Diminutive Reduplication" (Sean Hendricks); "On Multiple Sympathy Candidates in Optimality Theory" (Hidehito Hoshi); "A Perceptually Grounded OT Analysis of…
Monitoring sediment transfer processes on the desert margin
NASA Technical Reports Server (NTRS)
Millington, Andrew C.; Arwyn, R. Jones; Quarmby, Neil; Townshend, John R. G.
1987-01-01
LANDSAT Thematic Mapper and Multispectral Scanner data have been used to construct change detection images for three playas in south-central Tunisia. Change detection images have been used to analyze changes in surface reflectance and absorption between wet and dry seasons (intra-annual change) and between different years (inter-annual change). Change detection imagery has been used to examine geomorphological changes on the playas. Changes in geomorphological phenomena are interpreted from changes in soil and foliar moisture levels, differences in reflectances between different salts and sediments, and the spatial expression of geomorphological features. Intra-annual change phenomena that can be detected from multidate imagery are changes in surface moisture, texture and chemical composition, vegetation cover, and the extent of aeolian activity. Inter-annual change phenomena are divisible into those restricted to marginal playa facies (sedimentation from sheetwash and alluvial fans, erosion from surface runoff and cliff retreat) and those found in central playa facies, which are related to the internal redistribution of water, salt, and sediment.
Optimal synthesis and design of the number of cycles in the leaching process for surimi production.
Reinheimer, M Agustina; Scenna, Nicolás J; Mussati, Sergio F
2016-12-01
Water consumption required during the leaching stage of the surimi manufacturing process strongly depends on the design and the number and size of stages connected in series for the soluble protein extraction target, and it is considered the main contributor to the operating costs. Therefore, the optimal synthesis and design of the leaching stage is essential to minimize the total annual cost. In this study, a mathematical optimization model for the optimal design of the leaching operation is presented. Specifically, a detailed Mixed Integer Nonlinear Programming (MINLP) model including operating and geometric constraints was developed based on our previous optimization model (an NLP model). Aspects of quality, water consumption, and the main operating parameters were considered. The minimization of total annual cost, which trades off investment against operating costs, led to an optimal solution with fewer stages (2 instead of 3) and larger leaching tank volumes compared with previous results. An analysis was performed to investigate how the optimal solution was influenced by variations in the unit cost of fresh water, waste treatment, and capital investment.
Chen, Jianjun; Frey, H Christopher
2004-12-15
Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
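The "value of collecting additional information" compared in this abstract is the gap between a here-and-now design (one design fixed before uncertainty resolves) and a wait-and-see design (chosen per scenario after uncertainty resolves). A toy two-scenario, two-design sketch with invented costs:

```python
# Expected value of perfect information (EVPI) in a toy design problem.
# Scenarios, probabilities, and annual costs are invented for illustration.

scenarios = {            # scenario -> probability
    "low_load": 0.5,
    "high_load": 0.5,
}
annual_cost = {          # (design, scenario) -> annual cost in USD
    ("design_A", "low_load"): 1.0e6,
    ("design_A", "high_load"): 1.6e6,
    ("design_B", "low_load"): 1.3e6,
    ("design_B", "high_load"): 1.4e6,
}
designs = ["design_A", "design_B"]

# Here-and-now: pick one design minimizing expected cost across scenarios.
def expected_cost(design):
    return sum(p * annual_cost[(design, s)] for s, p in scenarios.items())

here_and_now = min(expected_cost(d) for d in designs)

# Wait-and-see: best design chosen per scenario (perfect information).
wait_and_see = sum(
    p * min(annual_cost[(d, s)] for d in designs) for s, p in scenarios.items()
)

evpi = here_and_now - wait_and_see   # value of resolving uncertainty first
print(f"EVPI: {evpi:,.0f} USD/yr")  # 100,000 USD/yr for these inputs
```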
Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method
NASA Astrophysics Data System (ADS)
Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin
2017-12-01
Edge detection serves to identify the boundaries of an object against an overlapping background. One classic edge detection method is the Robinson operator, which produces thin, faint, grey edge lines. To overcome these deficiencies, an improved edge detection method is proposed that takes a graph-based approach using the Ant Colony Optimization algorithm. The repairs that can be performed are thickening the edges and reconnecting broken edges. This edge detection research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and determine the extent to which Ant Colony Optimization can improve unoptimized edge detection results and the accuracy of Robinson edge detection. The parameters used to measure edge detection performance are the morphology of the resulting edge lines, MSE, and PSNR. The results showed that the combined Robinson and Ant Colony Optimization method produces images with sharper and thicker edges, and that Ant Colony Optimization can serve as a method for optimizing the Robinson operator, improving the Robinson detection result by 16.77% on average over the classic Robinson result.
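A minimal NumPy sketch of the classic Robinson compass operator discussed above; the ACO refinement step itself is not reproduced here:

```python
# Robinson compass edge detection: correlate the image with directional
# 3x3 masks and keep the maximum absolute response at each interior pixel.

import numpy as np

# Four of the eight Robinson compass masks; the other four are their
# negations, which give identical magnitudes once absolute values are taken.
MASKS = [
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),   # vertical edge
    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),   # diagonal edge
    np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]),   # horizontal edge
    np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]]),   # other diagonal
]

def robinson_edges(img):
    """Maximum absolute compass-mask response at each interior pixel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for mask in MASKS:
        resp = abs(sum(
            mask[i, j] * img[i:i + h - 2, j:j + w - 2]
            for i in range(3) for j in range(3)
        ))
        out = np.maximum(out, resp)
    return out

# Tiny example: a vertical step edge produces a strong response along
# the transition columns and none in the flat region.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
print(robinson_edges(img))
```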
Worum, F.P.; Carricart-Ganivet, J. P.; Benson, L.; Golicher, D.
2007-01-01
We present a model of annual density banding in skeletons of Montastraea coral species growing under thermal stress associated with an ocean-warming scenario. The model predicts that at sea-surface temperatures (SSTs) <29 °C, high-density bands (HDBs) are formed during the warmest months of the year. As temperature rises and oscillates around the optimal calcification temperature, an annual doublet in the HDB (dHDB) occurs that consists of two narrow HDBs. The presence of such dHDBs in skeletons of Montastraea species is a clear indication of thermal stress. When all monthly SSTs exceed the optimal calcification temperature, HDBs form during the coldest, not the warmest, months of the year. In addition, a decline in mean-annual calcification rate also occurs during this period of elevated SST. A comparison of our model results with annual density patterns observed in skeletons of M. faveolata and M. franksi, collected from several localities in the Mexican Caribbean, indicates that elevated SSTs are already resulting in the presence of dHDBs as a first sign of thermal stress, which occurs even without coral bleaching. © 2007, by the American Society of Limnology and Oceanography, Inc.
Girotra, Shantanu; Yeghiazaryan, Kristina; Golubnitschaja, Olga
2016-09-01
Breast cancer (BC) prevalence has reached an epidemic scale with half a million deaths annually. Current deficits in BC management include predictive and preventive approaches, optimized screening programs, individualized patient profiling, highly sensitive detection technologies for more precise diagnostics and therapy monitoring, individualized prediction and effective treatment of BC metastatic disease. To advance BC management, paradigm shift from delayed to predictive, preventive and personalized medical services is essential. Corresponding step forwards requires innovative multilevel diagnostics procuring specific panels of validated biomarkers. Here, we discuss current instrumental advancements including genomics, proteomics, epigenetics, miRNA, metabolomics, circulating tumor cells and cancer stem cells with a focus on biomarker discovery and multilevel diagnostic panels. A list of the recommended biomarker candidates is provided.
Cost-effectiveness of canine vaccination to prevent human rabies in rural Tanzania.
Fitzpatrick, Meagan C; Hampson, Katie; Cleaveland, Sarah; Mzimbiri, Imam; Lankester, Felix; Lembo, Tiziana; Meyers, Lauren A; Paltiel, A David; Galvani, Alison P
2014-01-21
The annual mortality rate of human rabies in rural Africa is 3.6 deaths per 100 000 persons. Rabies can be prevented with prompt postexposure prophylaxis, but this is costly and often inaccessible in rural Africa. Because 99% of human exposures occur through rabid dogs, canine vaccination also prevents transmission of rabies to humans. Objective: to evaluate the cost-effectiveness of rabies control through annual canine vaccination campaigns in rural sub-Saharan Africa. Design: we model transmission dynamics in dogs and wildlife and assess empirical uncertainty in the biological variables to make probability-based evaluations of cost-effectiveness. Data sources: epidemiologic variables from a contact-tracing study and the literature, and cost data from ongoing vaccination campaigns. Setting: two districts of rural Tanzania, Ngorongoro and Serengeti. Time horizon: 10 years. Perspective: health policymaker. Intervention: vaccination coverage ranging from 0% to 95% in increments of 5%. Outcome measures: life-years for health outcomes and 2010 U.S. dollars for economic outcomes. Results: annual canine vaccination campaigns were very cost-effective in both districts compared with no canine vaccination. In Serengeti, annual campaigns with as much as 70% coverage were cost-saving. Across a wide range of variable assumptions and levels of societal willingness to pay for life-years, the optimal vaccination coverage for Serengeti was 70%. In Ngorongoro, although optimal coverage depended on willingness to pay, vaccination campaigns were always cost-effective and lifesaving and therefore preferred. Conclusion: canine vaccination was very cost-effective in both districts, but there was greater uncertainty about the optimal coverage in Ngorongoro. Annual canine rabies vaccination campaigns conferred extraordinary value and dramatically reduced the health burden of rabies. Primary funding source: National Institutes of Health.
Scavengers on the move: behavioural changes in foraging search patterns during the annual cycle.
López-López, Pascual; Benavent-Corai, José; García-Ripollés, Clara; Urios, Vicente
2013-01-01
Optimal foraging theory predicts that animals will tend to maximize foraging success by optimizing search strategies. However, how organisms detect sparsely distributed food resources remains an open question. When targets are sparse and unpredictably distributed, a Lévy strategy should maximize foraging success. By contrast, when resources are abundant and regularly distributed, simple Brownian random movement should be sufficient. Although very different groups of organisms exhibit Lévy motion, the shift from a Lévy to a Brownian search strategy has been suggested to depend on internal and external factors such as sex, prey density, or environmental context. However, animal response at the individual level has received little attention. We used GPS satellite-telemetry data of Egyptian vultures Neophron percnopterus to examine movement patterns at the individual level during consecutive years, with particular interest in the variations in foraging search patterns during the different periods of the annual cycle (i.e. breeding vs. non-breeding). Our results show that vultures followed a Brownian search strategy in their wintering sojourn in Africa, whereas they exhibited a more complex foraging search pattern at breeding grounds in Europe, including Lévy motion. Interestingly, our results showed that individuals shifted between search strategies within the same period of the annual cycle in successive years. These results could be primarily explained by the different environmental conditions in which foraging activities occur. However, the high degree of behavioural flexibility exhibited during the breeding period, in contrast to the non-breeding period, is challenging to explain, suggesting that environmental conditions alone do not account for individuals' behaviour and that individuals' cognitive abilities (e.g., memory effects) could also play an important role. 
Our results support the growing awareness about the role of behavioural flexibility at the individual level, adding new empirical evidence about how animals in general, and particularly scavengers, solve the problem of efficiently finding food resources.
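The two search strategies contrasted in the abstract above differ mainly in the tail of their step-length distributions. A toy sketch sampling both, with illustrative parameters only:

```python
# Brownian vs. Levy step lengths: light-tailed folded-Gaussian draws
# against heavy-tailed Pareto draws. Parameters are illustrative only.

import random

random.seed(42)

def brownian_steps(n, scale=1.0):
    # Light-tailed step lengths (folded Gaussian), as in Brownian motion.
    return [abs(random.gauss(0.0, scale)) for _ in range(n)]

def levy_steps(n, mu=2.0, l_min=1.0):
    # Heavy-tailed Pareto step lengths p(l) ~ l**(-mu), sampled by
    # inverse transform; 1 - random() keeps the draw strictly positive.
    return [l_min * (1.0 - random.random()) ** (-1.0 / (mu - 1.0))
            for _ in range(n)]

b = brownian_steps(10_000)
l = levy_steps(10_000)

# Heavy tails show up as rare but very long relocations.
print("longest Brownian step:", round(max(b), 2))
print("longest Levy step:    ", round(max(l), 2))
```

The occasional very long Lévy step is what lets a searcher escape locally depleted patches when resources are sparse; Brownian steps stay near the typical scale.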
Constrained growth flips the direction of optimal phenological responses among annual plants.
Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas
2016-03-01
Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. They can help explain phenological patterns along productivity gradients, and link empirical observations made on calendar scales with life-history theory. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
The microsomal metabolism of phenol (11 degrees C) over an annual reproductive cycle from June to December has been studied using fall spawning adult brook trout (Salvelinus fontinalis). Incubations were optimized for time, cofactor concentration, pH, and microsomal protein concentr...
Intravenous zoledronate for osteoporosis: less might be more
Grey, Andrew
2016-01-01
Annual administration of 5 mg intravenous zoledronate is moderately effective in reducing fracture risk in older adults, decreasing the relative risk of clinical fracture by 33%. However, almost 10 years after its approval for use in clinical practice there remain very substantial uncertainties about the optimal treatment regimen, that is, the lowest dose and/or longest dosing interval that is efficacious. Several pieces of clinical research suggest that the current recommendation for annual administration of 5 mg zoledronate might represent overtreatment. Clinical trials to clarify the optimal use of zoledronate for reduction of fracture risk should be undertaken. PMID:27493690
Warren B. Cohen; Zhiqiang Yang; Robert Kennedy
2010-01-01
Availability of free, high quality Landsat data portends a new era in remote sensing change detection. Using dense (~annual) Landsat time series (LTS), we can now characterize vegetation change over large areas at an annual time step and at the spatial grain of anthropogenic disturbance. Additionally, we expect more accurate detection of subtle disturbances and...
Adhesion of Mycobacterium smegmatis to Charged Surfaces and Diagnostics Implications
NASA Astrophysics Data System (ADS)
Gorse, Diane; Dhinojwala, Ali; Moore, Francisco
Pulmonary tuberculosis (PTB) causes more than 1 million deaths annually. Smear microscopy is a primary rapid detection tool in the areas where 95% of PTB cases occur. This technique, in which the sputum of a symptomatic patient is stained and examined under a light microscope for Mycobacterium tuberculosis (MTB), shows a sensitivity between 20% and 60%. Insufficient bacterial isolation during sample preparation may be a reason for the low sensitivity. We are optimizing a system to capture bacteria on the basis of electrostatic interactions, to more thoroughly isolate bacteria from suspension and facilitate more accurate detection. Silica supports coated with the positively charged polyelectrolyte poly(diallyldimethylammonium chloride) captured approximately 4.1 times more Mycobacterium smegmatis, a model organism for MTB, than was captured on negatively charged silica substrates. Future experimentation will employ branched polymer systems and seek to justify the use of colloidal stability theories to describe initial capture. Supported by University of Akron, Department of Polymer Science, Department of Biology; LORD Corporation.
Power analysis to detect treatment effects in longitudinal clinical trials for Alzheimer's disease.
Huang, Zhiyue; Muniz-Terrera, Graciela; Tom, Brian D M
2017-09-01
Assessing cognitive and functional changes at the early stage of Alzheimer's disease (AD) and detecting treatment effects in clinical trials for early AD are challenging. Under the assumption that transformed versions of the Mini-Mental State Examination, the Clinical Dementia Rating Scale-Sum of Boxes, and the Alzheimer's Disease Assessment Scale-Cognitive Subscale tests'/components' scores are from a multivariate linear mixed-effects model, we calculated the sample sizes required to detect treatment effects on the annual rates of change in these three components in clinical trials for participants with mild cognitive impairment. Our results suggest that a large number of participants would be required to detect a clinically meaningful treatment effect in a population with preclinical or prodromal Alzheimer's disease. We found that the transformed Mini-Mental State Examination is more sensitive for detecting treatment effects in early AD than the transformed Clinical Dementia Rating Scale-Sum of Boxes and Alzheimer's Disease Assessment Scale-Cognitive Subscale. The use of optimal weights to construct powerful test statistics or sensitive composite scores/endpoints can reduce the required sample sizes needed for clinical trials. Consideration of the multivariate/joint distribution of components' scores rather than the distribution of a single composite score when designing clinical trials can lead to an increase in power and reduced sample sizes for detecting treatment effects in clinical trials for early AD.
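As a rough guide to the scale of such trials, the required sample size for detecting a treatment effect on an annual rate of change can be approximated with a standard two-arm normal formula. This is a simplified stand-in, with invented effect and variance values, for the paper's multivariate mixed-effects calculation:

```python
from scipy.stats import norm

def n_per_arm(delta, sd_slope, alpha=0.05, power=0.80):
    """Participants per arm to detect a between-arm difference `delta` in the
    annual rate of change, given the SD of subject-level slopes (normal
    approximation to a slope comparison)."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance
    z_b = norm.ppf(power)           # desired power
    return 2 * ((z_a + z_b) * sd_slope / delta) ** 2

# e.g. a 0.5-point/year slowing on a scale whose slopes have SD 2 points/year
print(round(n_per_arm(0.5, 2.0)))  # -> 251 per arm
```

Halving the detectable effect quadruples the required sample, which is why sensitive composites and optimal weighting can cut trial sizes substantially.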
Hong, Haoyuan; Tsangaratos, Paraskevas; Ilia, Ioanna; Liu, Junzhi; Zhu, A-Xing; Xu, Chong
2018-07-15
The main objective of the present study was to utilize Genetic Algorithms (GA) to obtain the optimal combination of forest-fire-related variables and to apply data mining methods for constructing a forest fire susceptibility map. In the proposed approach, a Random Forest (RF) and a Support Vector Machine (SVM) were used to produce a forest fire susceptibility map for Dayu County, located in the southwest of Jiangxi Province, China. For this purpose, historic forest fires and thirteen forest-fire-related variables were analyzed, namely: elevation, slope angle, aspect, curvature, land use, soil cover, heat load index, normalized difference vegetation index, mean annual temperature, mean annual wind speed, mean annual rainfall, distance to river network and distance to road network. The Natural Break and the Certainty Factor methods were used to classify and weight the thirteen variables, while a multicollinearity analysis was performed to determine the correlation among the variables and decide on their usability. The optimal set of variables determined by the GA limited the number of variables to eight, excluding aspect, land use, heat load index, distance to river network and mean annual rainfall from the analysis. The performance of the forest fire models was evaluated using the area under the Receiver Operating Characteristic curve (ROC-AUC) on the validation dataset. Overall, the RF models gave higher AUC values, and the optimized models outperformed the original models. Specifically, the optimized RF model gave the best results (0.8495), followed by the original RF (0.8169), while the optimized SVM gave lower values (0.7456) than the RF, though higher than the original SVM (0.7148). The study highlights the significance of feature selection techniques in forest fire susceptibility mapping, and data mining methods can be considered a valid approach for forest fire susceptibility modeling.
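The GA-based variable selection described above can be sketched as a small binary genetic algorithm whose fitness is the cross-validated RF AUC. The dataset, population size, and GA operators below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# stand-in for the 13 fire-related variables and historic fire/non-fire points
X, y = make_classification(n_samples=300, n_features=13, n_informative=5,
                           random_state=0)

def fitness(mask):
    """Cross-validated ROC-AUC of an RF restricted to the selected variables."""
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=30, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3, scoring="roc_auc").mean()

pop = rng.integers(0, 2, (10, 13)).astype(bool)   # random initial population
for gen in range(4):
    scores = np.array([fitness(m) for m in pop])
    # tournament selection, uniform crossover, bit-flip mutation
    winners = [max(rng.choice(10, 2), key=lambda i: scores[i]) for _ in range(10)]
    parents = pop[winners]
    cross = rng.random((10, 13)) < 0.5
    children = np.where(cross, parents, np.roll(parents, 1, axis=0))
    children ^= rng.random((10, 13)) < 0.05        # mutation
    pop = children

best = max(pop, key=fitness)
print("selected variables:", np.flatnonzero(best))
```

The same wrapper-style search works with an SVM fitness by swapping the classifier, at the cost of extra scaling of the inputs.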
Optimization of conventional water treatment plant using dynamic programming.
Mostafa, Khezri Seyed; Bahareh, Ghafari; Elahe, Dadvar; Pegah, Dadras
2015-12-01
In this research, mathematical models describing the capability of the various units, such as rapid mixing, coagulation and flocculation, sedimentation, and rapid sand filtration, are used. Moreover, cost functions were used for the formulation of a conventional water and wastewater treatment plant by applying Clark's formula (Clark, 1982). By applying a dynamic programming algorithm, a conventional treatment system can be designed at minimal cost. The application of the model to a case reduced the annual cost by approximately 4.5-9.5%, considering the variable limitations. Sensitivity analysis and prediction of the system's feedback were performed for different deviations of the parameters from their optimized values. The results indicated that the objective function is most sensitive to (1) the design flow rate (Q), (2) variations in the alum dosage (A), and (3) the sand filter head loss (H). Increasing the inflow by 20% would increase the total annual cost by about 12.6%, while a 20% reduction in inflow leads to a 15.2% decrease in the total annual cost. Similarly, a 20% increase in alum dosage causes a 7.1% increase in the total annual cost, while a 20% decrease results in a 7.9% decrease. Corresponding changes in the filter head loss cause a 2.95% increase and a 3.39% decrease in the total annual cost of the treatment plant.
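The staged structure of such a plant lends itself to dynamic programming: each unit is a stage, the state is the remaining contaminant fraction, and each design option trades annual cost against removal. A minimal sketch with invented costs and removal factors (not the paper's Clark-formula cost functions):

```python
# stages: rapid mixing -> coagulation/flocculation -> sedimentation -> filtration
# each design option: (annual cost, effluent fraction). All numbers invented.
options = {
    "mixing":        [(10, 0.9), (14, 0.8)],
    "flocculation":  [(20, 0.5), (30, 0.35)],
    "sedimentation": [(25, 0.4), (40, 0.25)],
    "filtration":    [(35, 0.3), (55, 0.15)],
}

def min_cost(units, level, target=0.02, memo=None):
    """Cheapest combination of options that drives `level` below `target`."""
    if memo is None:
        memo = {}
    if not units:
        return 0.0 if level <= target else float("inf")
    key = (len(units), round(level, 6))
    if key not in memo:
        memo[key] = min(cost + min_cost(units[1:], level * frac, target, memo)
                        for cost, frac in options[units[0]])
    return memo[key]

print(min_cost(list(options), 1.0))  # -> 120.0 (upgrade flocculation + filtration)
```

The recursion evaluates each stage once per reachable state, which is what makes the stage-by-stage cost minimization tractable compared with brute-force enumeration of all designs.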
NASA Astrophysics Data System (ADS)
Li, You-Rong; Du, Mei-Tang; Wang, Jian-Ning
2012-12-01
This paper focuses on the research of an evaporator with a binary mixture of organic working fluids in the organic Rankine cycle. Exergoeconomic analysis and performance optimization were performed based on the first and second laws of thermodynamics, and the exergoeconomic theory. The annual total cost per unit heat transfer rate was introduced as the objective function. In this model, the exergy loss cost caused by the heat transfer irreversibility and the capital cost were taken into account; however, the exergy loss due to the frictional pressure drops, heat dissipation to surroundings, and the flow imbalance were neglected. The variation laws of the annual total cost with respect to the number of transfer units and the temperature ratios were presented. Optimal design parameters that minimize the objective function had been obtained, and the effects of some important dimensionless parameters on the optimal performances had also been discussed for three types of evaporator flow arrangements. In addition, optimal design parameters of evaporators were compared with those of condensers.
Immunomodulation to Optimize Vascularized Composite Allograft Integration in Limb Loss Therapy
2014-10-01
Award Number: W81XWH-12-2-0058. Report date: October 2014; report type: Annual; dates covered: 30 Sep 2013 - 29 Sep 2014. Title: Immunomodulation to Optimize Vascularized Composite Allograft Integration in Limb Loss Therapy. ...efficacious immunomodulation regimen based on belatacept to optimize the integration of limb transplantation after limb loss. Regulatory Review ...
De Lara, Michel
2006-05-01
In their 1990 paper "Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments", Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild its vegetative body completely (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers", we analyse how optimal strategies depend upon the type of environmental variability: constant, random stationary, random i.i.d., random monotone. We provide general patterns in terms of targets and thresholds, covering both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers". Numerical simulations are provided, giving a hint at the effect of the different mathematical assumptions.
Kowada, Akiko
2011-12-01
Currently, an annual chest X-ray examination (CXR) for detection of active tuberculosis (TB) in employees aged ≥40 years is recommended in the guidelines of the Japan Industrial Safety and Health Law. Interferon-γ release assays are new alternatives to the tuberculin skin test for detecting Mycobacterium tuberculosis infection, with higher specificity than the tuberculin skin test and without cross-reactivity with the Bacille Calmette-Guérin vaccine. This study aimed to assess the cost-effectiveness of employee TB screening using QuantiFERON-TB Gold In-Tube (QFT) versus CXR. Markov models were constructed. The target population was a hypothetical cohort of immunocompetent 40-year-old individuals, using a societal perspective and a lifetime horizon. All costs and clinical benefits were discounted at a fixed annual rate of 3%. In a base-case analysis, the QFT strategy was the most cost-effective ($US 262.84; 22.87049 quality-adjusted life-years [QALYs]) compared with no screening ($448.38; 22.85452 QALYs) and CXR ($543.50; 22.85453 QALYs) [year 2009 values]. The QFT strategy is currently robust for screening Bacille Calmette-Guérin-vaccinated employees in Japan. There appears to be little role for CXR. These findings may be applicable to other countries in terms of choosing optimal TB screening for employees.
Effects of anthropogenic activity emerging as intensified extreme precipitation over China
NASA Astrophysics Data System (ADS)
Li, Huixin; Chen, Huopo; Wang, Huijun
2017-07-01
This study aims to provide an assessment of the effects of anthropogenic (ANT) forcings and other external factors on observed increases in extreme precipitation over China from 1961 to 2005. Extreme precipitation is represented by the annual maximum 1 day precipitation (RX1D) and the annual maximum 5 day consecutive precipitation (RX5D), and these variables are investigated using observations and simulations from the Coupled Model Intercomparison Project phase 5. The analyses mainly focus on the probability-based index (PI), which is derived from RX1D and RX5D by fitting generalized extreme value distributions. The results indicate that the simulations that include the ANT forcings provide the best representation of the spatial and temporal characteristics of extreme precipitation over China. We use the optimal fingerprint method to obtain the univariate and multivariate fingerprints of the responses to external forcings. The results show that only the ANT forcings are detectable at a 90% confidence level, both individually and when natural forcings are considered simultaneously. The impact of the forcing associated with greenhouse gases (GHGs) is also detectable in RX1D, but its effects cannot be separated from those of combinations of forcings that exclude the GHG forcings in the two-signal analyses. In addition, the estimated changes of PI, extreme precipitation, and events with a 20 year return period under nonstationary climate states are potentially attributable to ANT or GHG forcings, and the relationships between extreme precipitation and temperature from ANT forcings show agreement with observations.
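The probability-based index construction (fitting a GEV to annual maxima and mapping each maximum through the fitted CDF) can be sketched with SciPy; the synthetic data and parameters below are illustrative, not CMIP5 values:

```python
from scipy.stats import genextreme

# synthetic annual maximum 1-day precipitation (mm) for one grid cell
rx1d = genextreme.rvs(c=-0.1, loc=50, scale=10, size=45, random_state=42)

shape, loc, scale = genextreme.fit(rx1d)
# probability-based index (PI): each annual maximum mapped through the
# fitted GEV CDF, giving values on a common [0, 1] scale across cells
pi = genextreme.cdf(rx1d, shape, loc=loc, scale=scale)
print(pi.round(2))
```

Because PI lies on a common [0, 1] scale regardless of local climatology, it can be averaged across grid cells before the fingerprint regression.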
Optimizing heliostat positions with local search metaheuristics using a ray tracing optical model
NASA Astrophysics Data System (ADS)
Reinholz, Andreas; Husenbeth, Christof; Schwarzbözl, Peter; Buck, Reiner
2017-06-01
The life cycle costs of solar tower power plants are mainly determined by the investment costs of their construction. A significant part of these investment costs goes to the heliostat field. Therefore, an optimized placement of the heliostats, gaining the maximal annual power production, has a direct impact on the ratio of life cycle costs to revenue. We present a two-level local search method implemented in MATLAB utilizing the Monte Carlo ray tracing software STRAL [1] for the evaluation of the annual power output for a specific weighted annual time scheme. The algorithm was applied to a solar tower power plant (PS10) with 624 heliostats. Compared to the former work of Buck [2], we were able to improve both the runtime of the algorithm and the quality of the output solutions significantly. Using the same environment for both algorithms, we were able to reach Buck's best solution with a speed-up factor of about 20.
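A single-level variant of such a position optimizer can be sketched as first-improvement local search over heliostat coordinates. The analytic "annual output" below is a toy stand-in for the STRAL ray-tracing evaluation; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def annual_output(pos, tower=np.zeros(2)):
    """Toy objective: cosine-like efficiency decaying with distance to the
    tower, minus a crowding penalty standing in for blocking/shading."""
    d = np.linalg.norm(pos - tower, axis=1)
    eff = np.exp(-d / 200.0)
    pen = 0.0
    for i in range(len(pos)):
        gaps = np.linalg.norm(pos - pos[i], axis=1)
        pen += np.sum(np.exp(-gaps[gaps > 0] / 5.0))
    return eff.sum() - 0.1 * pen

pos = rng.uniform(20, 150, (30, 2))   # 30 heliostats, not the full 624
best = annual_output(pos)
for step in range(500):               # first-improvement local search
    i = rng.integers(len(pos))
    move = rng.normal(0, 3, 2)
    pos[i] += move
    new = annual_output(pos)
    if new > best:
        best = new                    # accept improving move
    else:
        pos[i] -= move                # reject worsening move
print(round(best, 3))
```

In the real problem each objective evaluation is a Monte Carlo ray trace, so the search design is dominated by keeping the number of evaluations (and re-traced heliostats per move) small.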
Optimal adaptation to extreme rainfalls in current and future climate
NASA Astrophysics Data System (ADS)
Rosbjerg, Dan
2017-01-01
More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases, the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time-span. Immediate as well as delayed adaptation is considered.
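The optimization described (log-linear damage and adaptation costs, minimized over the return period T) can be sketched numerically; the coefficients are invented for illustration:

```python
import numpy as np

# Log-linear cost relations (coefficients invented for illustration):
# expected annual damage from floods above the T-year level falls with ln T,
# annual capital+operation cost of protecting to the T-year level rises with ln T.
T = np.arange(2, 201)
damage = np.clip(5.0 - 1.2 * np.log(T), 0, None)   # M€/yr
adapt = 0.5 + 0.9 * np.log(T)                      # M€/yr
T_opt = T[np.argmin(damage + adapt)]
print(T_opt)  # -> 64, the economically optimal design return period here
```

Under climate change the damage curve shifts upward over time via the climate factor, so the same minimization is repeated on net present values, which is what moves the optimum between immediate and delayed adaptation.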
Technology Assessment in Support of the Presidential Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Weisbin, Charles R.; Lincoln, William; Mrozinski, Joe; Hua, Hook; Merida, Sofia; Shelton, Kacie; Adumitroaie, Virgil; Derleth, Jason; Silberg, Robert
2006-01-01
This document is a viewgraph presentation that contains: (1) a pictorial description of the lunar context, (2) a definition of the base case, (3) optimization results, (4) effects of cost uncertainties for the base case and different assumed annual budget levels, and (5) effects of temporal optimization.
Stochastic Pseudo-Boolean Optimization
2011-07-31
Effectiveness of early detection on breast cancer mortality reduction in Catalonia (Spain)
2009-01-01
Background: At present, it is complicated to use screening trials to determine the optimal age intervals and periodicities of breast cancer early detection. Mathematical models are an alternative that has been widely used. The aim of this study was to estimate the effect of different breast cancer early detection strategies in Catalonia (Spain), in terms of breast cancer mortality reduction (MR) and years of life gained (YLG), using the stochastic models developed by Lee and Zelen (LZ). Methods: We used the LZ model to estimate the cumulative probability of death for a cohort exposed to different screening strategies after T years of follow-up. We also obtained the cumulative probability of death for a cohort with no screening. These probabilities were used to estimate the possible breast cancer MR and YLG by age, period and cohort of birth. The inputs of the model were: incidence of, mortality from and survival after breast cancer, mortality from other causes, distribution of breast cancer stages at diagnosis and sensitivity of mammography. The outputs were relative breast cancer MR and YLG. Results: Relative breast cancer MR varied from 20% for biennial exams in the 50 to 69 age interval to 30% for annual exams in the 40 to 74 age interval. When strategies differ in periodicity but not in the age interval of exams, biennial screening achieved almost 80% of the annual screening MR. In contrast to MR, the effect on YLG of extending screening from 69 to 74 years of age was smaller than the effect of extending the screening from 50 to 45 or 40 years. Conclusion: In this study we have obtained a measure of the effect of breast cancer screening in terms of mortality and years of life gained. The Lee and Zelen mathematical models have been very useful for assessing the impact of different modalities of early detection on MR and YLG in Catalonia (Spain). PMID:19754959
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms, and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
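The core simulation behind a bias detection curve (introduce a bias, count results until the moving average breaches a control limit, take the median over repeats) can be sketched as follows; the sodium-like mean, SD, window, and limits are illustrative, not a validated MA procedure:

```python
import numpy as np

rng = np.random.default_rng(7)

def results_to_detection(bias, n_max=10000, window=50,
                         mean=140.0, sd=3.0, limits=(139.0, 141.0)):
    """Count how many post-bias results the moving average needs
    before it exceeds a control limit."""
    results = rng.normal(mean, sd, window).tolist()   # unbiased history
    for i in range(1, n_max + 1):
        results.append(rng.normal(mean + bias, sd))   # biased results
        ma = np.mean(results[-window:])
        if ma < limits[0] or ma > limits[1]:
            return i
    return n_max

biases = [0.5, 1.0, 2.0, 4.0]
curve = [int(np.median([results_to_detection(b) for _ in range(20)]))
         for b in biases]
print(dict(zip(biases, curve)))  # larger bias -> fewer results to detect
```

Plotting `curve` against `biases` gives one bias detection curve; repeating this for several candidate MA settings and comparing the curves is the optimization step the abstract describes.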
Baseline PSA in a Spanish male population aged 40-49 years anticipates detection of prostate cancer.
Angulo, J C; Viñas, M A; Gimbernat, H; Fata, F Ramón de; Granados, R; Luján, M
2015-12-01
We researched the usefulness of optimizing prostate cancer (PC) screening in our community using baseline PSA readings in men between 40 and 49 years of age. A retrospective study was performed that analyzed baseline PSA in the fifth decade of life and its ability to predict the development of PC in a population of Madrid (Spain). An ROC curve was created and a cutoff was proposed. We compared the evolution of PSA from baseline in patients with consecutive readings using the Friedman test. We established baseline PSA ranges with different risks of developing cancer and assessed the diagnostic utility of the annual PSA velocity (PSAV) in this population. Some 4,304 men aged 40-49 years underwent opportunistic screening over the course of 17 years, with at least one serum PSA reading (6,001 readings) and a mean follow-up of 57.1±36.8 months. Of these, 768 underwent biopsy of some organ, and 104 underwent prostate biopsy. Fourteen patients (.33%) were diagnosed with prostate cancer. The median baseline PSA was .74 (.01-58.5) ng/mL for patients without PC and 4.21 (.76-47.4) ng/mL for those with PC. The median time from the reading to diagnosis was 26.8 (1.5-143.8) months. The optimal cutoff for detecting PC was 1.9 ng/mL (sensitivity, 92.86%; specificity, 92.54%; PPV, 3.9%; NPV, 99.97%), and the area under the curve was 92.8%. In terms of the repeated reading, the evolution of the PSA showed no statistically significant differences between the patients without cancer (P=.56) and those with cancer (P=.64). However, a PSAV value >.3 ng/mL/year revealed high specificity for detecting cancer in this population. A baseline PSA level ≥1.9 ng/mL in Spanish men aged 40-49 years predicted the development of PC. This value could therefore be of use for opportunistic screening at an early age. An appropriate follow-up adapted to the risk of this population needs to be defined, but an annual PSAV ≥.3 ng/mL/year appears of use for reaching an early diagnosis.
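Deriving an optimal cutoff from an ROC curve can be sketched with scikit-learn. The lognormal distributions below are synthetic, loosely matched to the reported medians (~0.74 and ~4.21 ng/mL) and the 14 cancer cases; Youden's index is one common cutoff criterion, not necessarily the authors' choice:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
# synthetic baseline PSA (ng/mL); not the study data
psa = np.concatenate([rng.lognormal(-0.30, 0.6, 400),   # no prostate cancer
                      rng.lognormal(1.44, 0.5, 14)])    # prostate cancer
y = np.array([0] * 400 + [1] * 14)

auc = roc_auc_score(y, psa)
fpr, tpr, thr = roc_curve(y, psa)
cutoff = thr[np.argmax(tpr - fpr)]   # Youden's J: maximize sens + spec - 1
print(f"AUC = {auc:.3f}, cutoff ≈ {cutoff:.2f} ng/mL")
```

With such heavily imbalanced classes the PPV stays low even at a well-chosen cutoff, which matches the 3.9% PPV reported above.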
A comparison of GaAs and Si hybrid solar power systems
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.; Roberts, A. S., Jr.
1977-01-01
Five different hybrid solar power systems using silicon solar cells to produce thermal and electric power are modeled and compared with a hybrid system using a GaAs cell. Among the indices determined are capital cost per unit electric power plus mechanical power, annual cost per unit electric energy, and annual cost per unit electric plus mechanical work. Current costs are taken to be $35,000/sq m for GaAs cells with an efficiency of 15% and $1000/sq m for Si cells with an efficiency of 10%. It is shown that hybrid systems can be competitive with existing methods of practical energy conversion. Limiting values for annual costs of Si and GaAs cells are calculated to be 10.3 cents/kWh and 6.8 cents/kWh, respectively. Results for both systems indicate that for a given flow rate there is an optimal operating condition for minimum cost photovoltaic output. For Si cell costs of $50/sq m optimal performance can be achieved at concentrations of about 10; for GaAs cells costing $1000/sq m, optimal performance can be obtained at concentrations of around 100. High concentration hybrid systems offer a distinct cost advantage over flat systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Cameron, Ian M.; Dumais, Johannes R.
2015-10-01
Batan Teknologi (BaTek) operates an isotope production facility in Serpong, Indonesia, that supplies 99mTc for use in medical procedures. Atmospheric releases of Xe-133 from the production process at BaTek are known to influence the measurements taken at the closest stations of the International Monitoring System (IMS). The purpose of the IMS is to detect evidence of nuclear explosions, including atmospheric releases of radionuclides. The xenon isotopes released from BaTek are the same as those produced in a nuclear explosion, but the isotopic ratios are different. Knowledge of the magnitude of releases from the isotope production facility helps inform analysts trying to decide whether a specific measurement result came from a nuclear explosion. A stack monitor deployed at BaTek in 2013 measured releases to the atmosphere for several isotopes. The facility operates on a weekly cycle, and the stack data for June 15-21, 2013 show a release of 1.84E13 Bq of Xe-133. Concentrations of Xe-133 in the air are available at the same time from a xenon sampler located 14 km from BaTek. An optimization process using atmospheric transport modeling and the sampler air concentrations produced a release estimate of 1.88E13 Bq. The same optimization process yielded a release estimate of 1.70E13 Bq for a different week in 2012. The stack release value and the two optimized estimates are all within 10 percent of each other. Weekly release estimates of 1.8E13 Bq and a 40 percent facility operation rate yield a rough annual release estimate of 3.7E14 Bq of Xe-133. This value is consistent with previously published estimates of annual releases for this facility, which are based on measurements at three IMS stations. These multiple lines of evidence cross-validate the stack release estimates and the release estimates from atmospheric samplers.
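Because atmospheric transport is linear in the source term, the release optimization reduces to a one-parameter least-squares fit: modeled sampler concentrations scale with the release Q. The dilution factors and "measurements" below are invented for illustration; the study derived the real ones from transport modeling:

```python
import numpy as np

# c_model = m * Q: modeled concentration at the sampler per Bq released
m = np.array([1.0, 0.7, 0.4, 0.9]) * 1e-16            # (Bq/m3) per Bq, invented
c_obs = m * 1.85e13 * np.array([1.05, 0.95, 1.1, 0.98])  # noisy "measurements"

Q_hat = (m @ c_obs) / (m @ m)        # least-squares: argmin ||c_obs - m*Q||^2
annual = 1.8e13 * 52 * 0.4           # weekly release x weeks/yr x operation rate
print(f"Q ≈ {Q_hat:.2e} Bq/week, annual ≈ {annual:.1e} Bq")
```

The annual figure is just the weekly estimate scaled by 52 weeks and the 40 percent operating fraction, which is where the rough 3.7E14 Bq/yr comes from.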
ERIC Educational Resources Information Center
Society for the Advancement of Gifted Education, Calgary (Alberta).
This conference proceedings focuses on structuring classrooms to optimize learning among Alberta (Canada) gifted students. The first paper, "Optimizing Parent Potential" (Trudy A. Harrold), describes a model and a process for helping parents acquire knowledge, organize their thinking, and act from a realistic base when dealing with their gifted…
Estimating recharge rates with analytic element models and parameter estimation
Dripps, W.R.; Hunt, R.J.; Anderson, M.P.
2006-01-01
Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and with results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation.
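The calibration idea (adjust zone recharge rates until modeled base flows match gage targets) can be sketched with a linear toy model; the contributing-area matrix is invented, and the real GFLOW/UCODE problem is nonlinear in the flow solution:

```python
import numpy as np
from scipy.optimize import least_squares

# contributing area (km^2) of two vegetation-based recharge zones to four gages
A = np.array([[30.0,  5.0],
              [12.0, 18.0],
              [ 8.0, 25.0],
              [40.0, 10.0]])
obs_baseflow = A @ np.array([0.25, 0.40])   # synthetic targets from true rates

def residuals(r):
    """Misfit between modeled and observed base flows for zone rates r."""
    return A @ r - obs_baseflow

fit = least_squares(residuals, x0=[0.1, 0.1])
print(fit.x.round(3))   # recovers the assumed zone rates [0.25, 0.40]
```

With more gages than zones, as here, the system is overdetermined and the residual misfit gives a direct check on whether the chosen zonation (e.g., by vegetation type) is adequate.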
Wziątek-Nowak, Weronika; Gierczyński, Jakub; Dąbrowiecki, Piotr; Gałązka-Sobotka, Małgorzata; Fal, Andrzej M; Gryglewicz, Jerzy; Badyda, Artur J
2016-01-01
Chronic obstructive pulmonary disease (COPD) is currently the third most common cause of death worldwide, and the total number of people affected exceeds 200 million. It is estimated that approximately 50% of persons having COPD are not aware of it. In the EU, the total annual costs of COPD are estimated to exceed €140 billion, and the expected increase in the number of cases and deaths due to COPD will further increase the economic and social costs of the disease. In this article we present the results of a cost analysis of health care benefits associated with the treatment of COPD and with the disease-related incapacity for work. The analysis is based on data from the National Health Fund and the Social Insurance Institution, public payers of health benefits in Poland. The annual 2012 expenditure incurred for COPD treatment was €40 million, and the benefits associated with incapacity for work reached more than €55 million. The extent of these expenditures indicates that it is necessary to optimize the functioning of the system, including the allocation of resources for prevention, social awareness, and detection of COPD at early stages, when treatment costs are relatively low.
The state of everyday quantitative EEG use in Canada: A national technologist survey.
Ng, Marcus C; Gillis, Kara
2017-07-01
This study sought to determine the state of quantitative EEG (QEEG) use in Canada, as QEEG may provide a partial solution to the issue of escalating EEG demand against insufficient health care resources. A 10-item survey questionnaire was administered to participants at the annual meeting of the Canadian Association of Electroneurophysiology Technologists, which was held in parallel with the annual meeting of the Canadian Neurological Sciences Federation. At least 70% of the Canadian population has QEEG access through academic medical institutions with applicability to adults and children. QEEG was clinically used 50% in real-time and 50% retrospectively in the critical care and epilepsy monitoring units for long-term monitoring and automated seizure detection. QEEG trend use, montage use, and duration were variable. To cope with insufficient health care resources, QEEG is in surprisingly frequent clinical use across Canada. There is no consensus on optimal QEEG trends and montages. The relative ubiquity of QEEG affords an excellent opportunity for research as increasing EEG demand outpaces dwindling health care resources into the foreseeable future.
Seeing Earth's Orbit in the Stars: Parallax and Aberration
NASA Astrophysics Data System (ADS)
Timberlake, Todd K.
2013-11-01
During the 17th century the idea of an orbiting and rotating Earth became increasingly popular, but opponents of this view continued to point out that the theory had observable consequences that had never, in fact, been observed. Why, for instance, had astronomers failed to detect the annual parallax of the stars that must occur if Earth orbits the Sun? To address this problem, astronomers of the 17th and 18th centuries sought to measure the annual parallax of stars using telescopes. None of them succeeded. Annual stellar parallax was not successfully measured until 1838, when Friedrich Bessel detected the parallax of the star 61 Cygni. But the early failures to detect annual stellar parallax led to the discovery of a new (and entirely unexpected) phenomenon: the aberration of starlight. This paper recounts the story of the discovery of stellar aberration. It is accompanied by a set of activities and computer simulations that allow students to explore this fascinating historical episode and learn important lessons about the nature of science.
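The size of the aberration effect follows directly from Earth's orbital speed: the apparent annual shift is roughly v_orbit/c radians, far larger than the parallax of even nearby stars, which is why aberration was found first:

```python
import math

# Earth's orbital speed from circumference / year (circular-orbit approximation)
v_orbit = 2 * math.pi * 1.496e11 / (365.25 * 86400)   # m/s
kappa = v_orbit / 2.998e8                              # aberration angle, radians
arcsec = math.degrees(kappa) * 3600
print(f"aberration ≈ {arcsec:.1f} arcsec")             # ≈ 20.5"
```

By contrast, the parallax Bessel finally measured for 61 Cygni was only about 0.3", some 65 times smaller, which explains the two centuries of failed telescope searches.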
Nonlinear Programming Models to Optimize Uneven-Aged Shortleaf Pine Management
Benedict J. Schulte; Joseph Buongiorno
2002-01-01
Nonlinear programming models of uneven-aged shortleaf pine (Pinus echinata Mill.) management were developed to identify sustainable management regimes that optimize soil expectation value (SEV) or annual sawtimber yields. The models recognize three species groups (shortleaf pine and other softwoods, soft hardwoods and hard hardwoods) and 13 2-inch...
Ecological optimality in water-limited natural soil-vegetation systems. II - Tests and applications
NASA Technical Reports Server (NTRS)
Eagleson, P. S.; Tellers, T. E.
1982-01-01
The long-term optimal climatic climax soil-vegetation system is defined for several climates according to previous hypotheses in terms of two free parameters, effective porosity and plant water use coefficient. The free parameters are chosen by matching the predicted and observed average annual water yield. The resulting climax soil and vegetation properties are tested by comparison with independent observations of canopy density and average annual surface runoff. The climax properties are shown also to satisfy a previous hypothesis for short-term optimization of canopy density and water use coefficient. Using these hypotheses, a relationship between average evapotranspiration and optimum vegetation canopy density is derived and is compared with additional field observations. An algorithm is suggested by which the climax soil and vegetation properties can be calculated given only the climate parameters and the soil effective porosity. Sensitivity of the climax properties to the effective porosity is explored.
NASA Astrophysics Data System (ADS)
Bando, Shigeru; Watanabe, Hiroki; Asano, Hiroshi; Tsujita, Shinsuke
A methodology was developed to determine the number and capacity of each piece of equipment (e.g. gas engines, batteries, thermal storage tanks) in microgrids with combined heat and power systems. We analyzed three types of microgrids: the first consists of an office building and an apartment building; the second, a hospital and an apartment building; the third, a hotel, an office, and retail stores. In the methodology, annual cost is minimized by considering the partial-load efficiency of a gas engine and its economy of scale, and the optimal number and capacity of each piece of equipment, together with the annual operational schedule, are determined using the optimal planning method. Calculations using this design methodology show that the optimal number of gas engines is determined by the ratio of base to peak electricity demand and the ratio of heat to electricity demand. The optimal capacity of a battery, which supplies electricity for a limited time during peak demand, plays an auxiliary role. The thermal storage tank for space cooling and space heating is sized to minimize the use of auxiliary equipment such as a gas absorption chiller.
Trend analysis of long-term temperature time series in the Greater Toronto Area (GTA)
NASA Astrophysics Data System (ADS)
Mohsin, Tanzina; Gough, William A.
2010-08-01
As the majority of the world’s population lives in urban environments, there is growing interest in studying local urban climates. In this paper, for the first time, long-term trends (31-162 years) of temperature change are analyzed for the Greater Toronto Area (GTA). Annual and seasonal time series for a number of urban, suburban, and rural weather stations are considered. Non-parametric statistical techniques such as the Mann-Kendall test and Theil-Sen slope estimation are used to assess the significance of trends and to detect them, and the sequential Mann test is used to detect any abrupt climate change. Statistically significant trends in annual mean and minimum temperatures are detected for almost all stations in the GTA. Winter is found to be the most coherent season, contributing substantially to the increase in annual minimum temperature. The analyses of abrupt changes in temperature suggest that the increasing trend in Toronto began after the 1920s and continued to the 1960s. For all stations, there is a significant increase in annual and seasonal (particularly winter) temperatures after the 1980s. In terms of the linkage between urbanization and spatiotemporal thermal patterns, significant linear trends in annual mean and minimum temperature are detected for the period 1878-1978 for the urban station, Toronto, while for the rural counterparts the trends are not significant. Also, for all stations in the GTA situated in all directions except south of Toronto, substantial temperature change is detected for the periods 1970-2000 and 1989-2000. It is concluded that urbanization in the GTA has contributed significantly to the increase in annual mean temperatures during the past three decades. In addition to urbanization, the influence of local climate, topography, and larger-scale warming is incorporated in the analysis of the trends.
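The two trend statistics named in this abstract are straightforward to compute. As an illustration only (not the authors' code), here is a minimal NumPy sketch of the Mann-Kendall S statistic and the Theil-Sen slope, applied to a synthetic annual-temperature series with an injected warming trend:

```python
import numpy as np

def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of signs of all pairwise differences.
    S > 0 indicates an increasing trend, S < 0 a decreasing one."""
    x = np.asarray(series, dtype=float)
    s = 0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return int(s)

def theil_sen_slope(series):
    """Theil-Sen estimator: median of slopes over all pairs of points.
    Robust to outliers, unlike an ordinary least-squares slope."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

# Synthetic 50-year annual-mean temperature series: 0.02 °C/yr trend + noise
rng = np.random.default_rng(0)
temps = 8.0 + 0.02 * np.arange(50) + rng.normal(0, 0.1, 50)

s = mann_kendall_s(temps)       # positive S -> increasing trend
slope = theil_sen_slope(temps)  # should recover roughly the injected 0.02/yr
```

A full Mann-Kendall test would additionally convert S to a Z score using its variance under the null hypothesis; the abstract's significance assessments rely on that step.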
Optimal use of land surface temperature data to detect changes in tropical forest cover
NASA Astrophysics Data System (ADS)
van Leeuwen, Thijs T.; Frank, Andrew J.; Jin, Yufang; Smyth, Padhraic; Goulden, Michael L.; van der Werf, Guido R.; Randerson, James T.
2011-06-01
Rapid and accurate assessment of global forest cover change is needed to focus conservation efforts and to better understand how deforestation is contributing to the buildup of atmospheric CO2. Here we examined different ways to use land surface temperature (LST) to detect changes in tropical forest cover. In our analysis we used monthly 0.05° × 0.05° Terra Moderate Resolution Imaging Spectroradiometer (MODIS) observations of LST and Program for the Estimation of Deforestation in the Brazilian Amazon (PRODES) estimates of forest cover change. We also compared MODIS LST observations with an independent estimate of forest cover loss derived from MODIS and Landsat observations. Our study domain of approximately 10° × 10° included the Brazilian state of Mato Grosso. For optimal use of LST data to detect changes in tropical forest cover in our study area, we found that using data sampled during the end of the dry season (~1-2 months after minimum monthly precipitation) had the greatest predictive skill. During this part of the year, precipitation was low, surface humidity was at a minimum, and the difference between day and night LST was the largest. We used this information to develop a simple temporal sampling algorithm appropriate for use in pantropical deforestation classifiers. Combined with the normalized difference vegetation index, a logistic regression model using day-night LST did moderately well at predicting forest cover change. Annual changes in day-night LST decreased during 2006-2009 relative to 2001-2005 in many regions within the Amazon, providing independent confirmation of lower deforestation levels during the latter part of this decade as reported by PRODES.
NASA Astrophysics Data System (ADS)
Ferreira, Ana C. M.; Teixeira, Senhorinha F. C. F.; Silva, Rui G.; Silva, Ângela M.
2018-04-01
Cogeneration allows the optimal use of primary energy sources and significant reductions in carbon emissions. Its use has great potential for applications in the residential sector. This study aims to develop a methodology for the thermal-economic optimisation of a small-scale micro-gas turbine for cogeneration purposes, able to fulfil domestic energy needs with a thermal power output of 125 kW. A constrained non-linear optimisation model was built. The objective function is the maximisation of the annual worth of the combined heat and power system, representing the balance between annual incomes and expenditures, subject to physical and economic constraints. A genetic algorithm coded in the Java programming language was developed. An optimal micro-gas turbine able to produce 103.5 kW of electrical power with a positive annual profit (i.e. 11,925 €/year) was identified. The investment can be recovered in 4 years and 9 months, less than half of the system's expected lifetime.
Gremer, Jennifer R; Kimball, Sarah; Venable, D Lawrence
2016-10-01
In variable environments, organisms must have strategies to ensure fitness as conditions change. For plants, germination can time emergence with favourable conditions for later growth and reproduction (predictive germination), spread the risk of unfavourable conditions (bet hedging) or both (integrated strategies). Here we explored the adaptive value of within- and among-year germination timing for 12 species of Sonoran Desert winter annual plants. We parameterised models with long-term demographic data to predict optimal germination fractions and compared them to observed germination. At both temporal scales we found that bet hedging is beneficial and that predicted optimal strategies corresponded well with observed germination. We also found substantial fitness benefits to varying germination timing, suggesting some degree of predictive germination in nature. However, predictive germination was imperfect, calling for some degree of bet hedging. Together, our results suggest that desert winter annuals have integrated strategies combining both predictive plasticity and bet hedging. © 2016 John Wiley & Sons Ltd/CNRS.
Optimal allocation in annual plants and its implications for drought response
NASA Astrophysics Data System (ADS)
Caldararu, Silvia; Smith, Matthew; Purves, Drew
2015-04-01
The concept of plant optimality refers to the plastic behaviour of plants that maximises lifetime and offspring fitness. Optimality concepts have been used in vegetation models for a variety of processes, including stomatal conductance, leaf phenology and biomass allocation. Including optimality in vegetation models has the advantage of creating process-based models with relatively few parameters which are nevertheless capable of reproducing complex plant behaviour. We present a general model of plant growth for annual plants based on the hypothesis that plants allocate biomass to aboveground and belowground vegetative organs in order to maintain an optimal C:N ratio. The model also represents reproductive growth through a second optimality criterion, which states that plants flower when they reach peak nitrogen uptake. We apply this model to wheat and maize crops at 15 locations corresponding to FLUXNET cropland sites. The model parameters are constrained using a Bayesian fitting algorithm against eddy covariance data, satellite-derived vegetation indices (specifically the MODIS fAPAR product) and field-level crop yield data. We use the model to simulate the plant drought response under the assumption of plant optimality and show that plants maintain unstressed total biomass levels under drought for a reduction in precipitation of up to 40%. Beyond that level plant response stops being plastic and growth decreases sharply. This behaviour results simply from the optimal allocation criteria, as the model includes no explicit drought sensitivity component. Models that use plant optimality concepts are a useful tool for simulating plant responses to stress without the addition of artificial thresholds and parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
FY 2013 annual report focuses on the following areas: vehicle modeling and simulation, component and systems evaluations, laboratory and field evaluations, codes and standards, industry projects, and vehicle systems optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penny, Matthew T., E-mail: penny@astronomy.ohio-state.edu
2014-08-01
Extensive simulations of planetary microlensing are necessary both before and after a survey is conducted: before, to design and optimize the survey, and after, to understand its detection efficiency. The major bottleneck in such computations is the computation of light curves. However, for low-mass planets, most of these computations are wasteful, as most light curves do not contain detectable planetary signatures. In this paper, I develop a parameterization of the binary microlens that is conducive to avoiding light curve computations. I empirically find analytic expressions describing the limits of the parameter space that contain the vast majority of low-mass planet detections. Through a large-scale simulation, I measure the (in)completeness of the parameterization and the speed-up it is possible to achieve. For Earth-mass planets in a wide range of orbits, it is possible to speed up simulations by a factor of ∼30-125 (depending on the survey's annual duty-cycle) at the cost of missing ∼1% of detections (which is actually a smaller loss than for the arbitrary parameter limits typically applied in microlensing simulations). The benefits of the parameterization probably outweigh the costs for planets below 100 M⊕. For planets at the sensitivity limit of AFTA-WFIRST, simulation speed-ups of a factor of ∼1000 or more are possible.
[Optimized application of nested PCR method for detection of malaria].
Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C
2017-04-28
Objective: To optimize the application of the nested PCR method for the detection of malaria according to working practice, so as to improve the efficiency of malaria detection. Methods: A PCR premix, internal primers for further amplification, and newly designed primers targeting two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions, and P. ovale-specific primers on the basis of routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and malaria examination samples were tested by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results: The optimized method showed good specificity, and its sensitivity reached the pg to fg level. When the two methods were used to test the same positive malarial blood samples simultaneously, the PCR products of the two methods showed no significant difference, but non-specific amplification was obviously reduced, the detection rate of P. ovale subspecies improved, and overall specificity also increased with the optimized method. Detection results for 111 malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, while those of the optimized method were both 93.48%; the difference between the two methods in sensitivity was not statistically significant (P > 0.05), but the difference in specificity was (P < 0.05). Conclusion: The optimized PCR improves specificity without reducing sensitivity relative to the routine nested PCR; it also saves cost and increases the efficiency of malaria detection with fewer experimental steps.
Optimization of PSA screening policies: a comparison of the patient and societal perspectives.
Zhang, Jingyu; Denton, Brian T; Balasubramanian, Hari; Shah, Nilay D; Inman, Brant A
2012-01-01
To estimate the benefit of PSA-based screening for prostate cancer from the patient and societal perspectives. A partially observable Markov decision process model was used to optimize PSA screening decisions. Age-specific prostate cancer incidence rates and the mortality rates from prostate cancer and competing causes were considered. The model trades off the potential benefit of early detection with the cost of screening and loss of patient quality of life due to screening and treatment. PSA testing and biopsy decisions are made based on the patient's probability of having prostate cancer. Probabilities are inferred based on the patient's complete PSA history using Bayesian updating. The results of all PSA tests and biopsies done in Olmsted County, Minnesota, from 1993 to 2005 (11,872 men and 50,589 PSA test results). Patients' perspective: to maximize expected quality-adjusted life years (QALYs); societal perspective: to maximize the expected monetary value based on societal willingness to pay for QALYs and the cost of PSA testing, prostate biopsies, and treatment. From the patient perspective, the optimal policy recommends stopping PSA testing and biopsy at age 76. From the societal perspective, the stopping age is 71. The expected incremental benefit of optimal screening over the traditional guideline of annual PSA screening with threshold 4.0 ng/mL for biopsy is estimated to be 0.165 QALYs per person from the patient perspective and 0.161 QALYs per person from the societal perspective. PSA screening based on traditional guidelines is found to be worse than no screening at all. PSA testing done with traditional guidelines underperforms and therefore underestimates the potential benefit of screening. Optimal screening guidelines differ significantly depending on the perspective of the decision maker.
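The Bayesian updating step described in this abstract (inferring the probability of prostate cancer from PSA test results) reduces, for a single test outcome, to Bayes' rule. The sketch below is illustrative only; the prior and the two likelihoods are hypothetical values, not the study's calibrated parameters:

```python
def bayes_update(prior, p_obs_given_cancer, p_obs_given_healthy):
    """Posterior probability of cancer after one observed test result.

    prior               -- probability of cancer before the test
    p_obs_given_cancer  -- likelihood of this result if cancer is present
    p_obs_given_healthy -- likelihood of this result if cancer is absent
    """
    num = prior * p_obs_given_cancer
    den = num + (1 - prior) * p_obs_given_healthy
    return num / den

# Hypothetical numbers: 5% prior, a PSA result seen in 80% of cancer
# cases but only 10% of healthy men. Sequential tests would simply
# feed each posterior back in as the next prior.
post = bayes_update(0.05, 0.8, 0.1)
```

In the full model, this posterior (tracked over the patient's complete PSA history) is the belief state of the partially observable Markov decision process on which the biopsy and stopping decisions are optimized.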
Optimal ranking regime analysis of U.S. climate variability. Part II: Precipitation and streamflow
USDA-ARS?s Scientific Manuscript database
In a preceding companion paper the Optimal Ranking Regime (ORR) method was used to identify intra- to multi-decadal (IMD) regimes in U.S. climate division temperature data during 1896-2012. Here, the method is used to test for annual and seasonal precipitation regimes during that same period. In add...
Nonlinear programming models to optimize uneven-aged loblolly pine management
Benedict J. Schulte; Joseph. Buongiorno; Kenneth Skog
1999-01-01
Nonlinear programming models of uneven-aged loblolly pine (Pinus taeda L.) management were developed to identify sustainable management regimes which optimize: 1) soil expectation value (SEV), 2) tree diversity, or 3) annual sawtimber yields. The models use the equations of SouthPro, a site- and density-dependent, multi-species matrix growth and yield model that...
Cancer Detection and Diagnosis Methods - Annual Plan
Early cancer detection is a proven life-saving strategy. Learn about the research opportunities NCI supports, including liquid biopsies and other less-invasive methods, for detecting early cancers and precancerous growths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gebraad, Pieter; Thomas, Jared J.; Ning, Andrew
This paper presents a wind plant modeling and optimization tool that enables the maximization of wind plant annual energy production (AEP) using yaw-based wake steering control and layout changes. The tool is an extension of a wake engineering model describing the steady-state effects of yaw on wake velocity profiles and power productions of wind turbines in a wind plant. To make predictions of a wind plant's AEP, necessary extensions of the original wake model include coupling it with a detailed rotor model and a control policy for turbine blade pitch and rotor speed. This enables the prediction of power production with wake effects throughout a range of wind speeds. We use the tool to perform an example optimization study on a wind plant based on the Princess Amalia Wind Park. In this case study, combined optimization of layout and wake steering control increases AEP by 5%. The power gains from wake steering control are highest for region 1.5 inflow wind speeds, and they continue to be present to some extent for the above-rated inflow wind speeds. The results show that layout optimization and wake steering are complementary because significant AEP improvements can be achieved with wake steering in a wind plant layout that is already optimized to reduce wake losses.
Michael T. Thompson
2009-01-01
Aerial detection surveys indicate that widespread conifer mortality has been steadily increasing in Colorado, particularly since 2002. The Forest Inventory and Analysis (FIA) annual inventory system began in Colorado in 2002, which coincided with the onset of elevated conifer mortality rates. The current mortality event coupled with collection of 6 years of annual...
Compressed Air System Optimization: Case Study Food Industry in Indonesia
NASA Astrophysics Data System (ADS)
Widayati, Endang; Nuzahar, Hasril
2016-01-01
Compressors and compressed air systems are among the most important utilities in industry. Approximately 10% of industrial electricity costs are used to produce compressed air, so the potential for energy savings in compressors and compressed air systems is substantial. This study was conducted in the food industry in Indonesia. Compressed air system optimization is a systematic approach to determining the optimal operating conditions of compressors and compressed air systems, including evaluation of energy needs, supply adjustment, elimination or reconfiguration of inefficient uses and operations, changing or supplementing equipment, and improving operating efficiency. This technique has a significant impact on energy and cost savings. The potential savings identified in this study through measurement and optimization include: lowering the system pressure from 7.5 barg to 6.8 barg, which would reduce energy consumption and running costs by approximately 4.2%; switching off compressors GA110 and GA75, for annual savings of USD 52,947 (≈455,714 kWh); running the GA75 at light load or unloaded, for annual savings of USD 31,841 (≈270,685 kWh); and installing new compressors (2 × 132 kW plus 1 × 132 kW VSD), for annual savings of USD 108,325 (≈928,500 kWh). Further work should include a study of the technical aspects of the energy-saving potential (an investment-grade audit) and a cost-benefit analysis. This study is one example of best-practice solutions for saving energy and improving energy performance in compressors and compressed air systems.
Annual Review of Research Under the Joint Service Electronics Program.
1979-10-01
Contents: Quadratic Optimization Problems; Nonlinear Control; Nonlinear Fault Analysis; Qualitative Analysis of Large Scale Systems; Multidimensional System Theory; Optical Noise; and Pattern Recognition.
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
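The patent abstract describes the method only at a high level. One plausible way to sketch its core idea, binning the observed operating conditions and defining the region of optimization around the most commonly observed ones, is shown below; the bin count, coverage fraction, and variable names (engine speed and load) are all hypothetical choices, not values from the patent:

```python
import numpy as np

def region_of_optimization(speeds, loads, bins=10, coverage=0.8):
    """Return a rectangular (speed, load) region covering the most
    frequently observed operating points: keep the densest 2-D histogram
    cells until `coverage` of all samples is accounted for."""
    hist, s_edges, l_edges = np.histogram2d(speeds, loads, bins=bins)
    order = np.argsort(hist, axis=None)[::-1]        # densest cells first
    cum = np.cumsum(hist.flat[order])
    n_keep = np.searchsorted(cum, coverage * len(speeds)) + 1
    si, li = np.unravel_index(order[:n_keep], hist.shape)
    return ((s_edges[si.min()], s_edges[si.max() + 1]),
            (l_edges[li.min()], l_edges[li.max() + 1]))

# Synthetic usage data: a cluster of common operating points plus outliers
rng = np.random.default_rng(1)
speeds = np.concatenate([rng.normal(2000, 50, 500), [500.0, 4000.0]])
loads = np.concatenate([rng.normal(0.5, 0.02, 500), [0.05, 0.95]])
(s_lo, s_hi), (l_lo, l_hi) = region_of_optimization(speeds, loads)
```

The optimization routine would then run only while the detected operating point falls inside the returned bounds, which matches the gating behaviour the claim describes.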
Structural damage detection-oriented multi-type sensor placement with multi-objective optimization
NASA Astrophysics Data System (ADS)
Lin, Jian-Fu; Xu, You-Lin; Law, Siu-Seong
2018-05-01
A structural damage detection-oriented multi-type sensor placement method with multi-objective optimization is developed in this study. The multi-type response covariance sensitivity-based damage detection method is first introduced. Two objective functions for optimal sensor placement are then introduced in terms of the response covariance sensitivity and the response independence. The multi-objective optimization problem is formed by using the two objective functions, and the non-dominated sorting genetic algorithm (NSGA)-II is adopted to find the solution for the optimal multi-type sensor placement to achieve the best structural damage detection. The proposed method is finally applied to a nine-bay three-dimensional frame structure. Numerical results show that the optimal multi-type sensor placement determined by the proposed method can avoid redundant sensors and provide satisfactory results for structural damage detection. Restricting the number of each type of sensor in the optimization reduces the search space and makes the proposed method more effective. Moreover, how to select the most suitable sensor placement from the Pareto solutions via the utility function and the knee point method is demonstrated in the case study.
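The NSGA-II algorithm mentioned in this abstract ranks candidate solutions by Pareto dominance. A minimal sketch of that core non-dominated-sorting step follows, applied to hypothetical two-objective scores for candidate sensor layouts (both objectives minimized); this is an illustration of the ranking principle, not the authors' implementation:

```python
def pareto_front(points):
    """Return indices of non-dominated points, assuming every objective
    is minimized. Point q dominates p if q is no worse in all objectives
    and strictly better in at least one. This ranking is the first step
    of NSGA-II's non-dominated sorting."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical layout scores (objective 1, objective 2), both minimized:
scores = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(scores))  # [0, 1, 3] -- (3.0, 4.0) is dominated by (2.0, 3.0)
```

The full algorithm repeats this sort on the remaining points to build successive fronts and adds a crowding-distance measure to preserve diversity; the knee-point selection the abstract mentions then operates on the first front.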
A Hybrid Remote Sensing Approach for Detecting the Florida Red Tide
NASA Astrophysics Data System (ADS)
Carvalho, G. A.; Minnett, P. J.; Banzon, V.; Baringer, W.
2008-12-01
Harmful algal blooms (HABs) have caused major worldwide economic losses commonly linked with health problems for humans and wildlife. In the Eastern Gulf of Mexico the toxic marine dinoflagellate Karenia brevis is responsible for nearly annual, massive red tides causing fish kills, shellfish poisoning, and acute respiratory irritation in humans: the so-called Florida Red Tide. Near real-time satellite measurements could be an effective method for identifying HABs. The use of space-borne data would be a highly desired, low-cost technique offering the remote and accurate detection of K. brevis blooms over the West Florida Shelf, bringing tremendous societal benefits to the general public, scientific community, resource managers and medical health practitioners. An extensive in situ database provided by the Florida Fish and Wildlife Conservation Commission's Research Institute was used to examine the long-term accuracy of two satellite-based algorithms at detecting the Florida Red Tide. Using MODIS data from 2002 to 2006, the two algorithms are optimized and their accuracy assessed. It has been found that the sequential application of the algorithms results in improved predictability characteristics, correctly identifying ~80% of the cases (for both sensitivity and specificity, as well as overall accuracy), and exhibiting strong positive (70%) and negative (86%) predictive values.
Kramer, Randall A.; Mboera, Leonard E. G.; Senkoro, Kesheni; Lesser, Adriane; Shayo, Elizabeth H.; Paul, Christopher J.; Miranda, Marie Lynn
2014-01-01
The optimization of malaria control strategies is complicated by constraints posed by local health systems, infrastructure, limited resources, and the complex interactions between infection, disease, and treatment. The purpose of this paper is to describe the protocol of a randomized factorial study designed to address this research gap. This project will evaluate two malaria control interventions in Mvomero District, Tanzania: (1) a disease management strategy involving early detection and treatment by community health workers using rapid diagnostic technology; and (2) vector control through community-supported larviciding. Six study villages were assigned to each of four groups (control, early detection and treatment, larviciding, and early detection and treatment plus larviciding). The primary endpoint of interest was change in malaria infection prevalence across the intervention groups measured during annual longitudinal cross-sectional surveys. Recurring entomological surveying, household surveying, and focus group discussions will provide additional valuable insights. At baseline, 962 households across all 24 villages participated in a household survey; 2,884 members from 720 of these households participated in subsequent malariometric surveying. The study design will allow us to estimate the effect sizes of different intervention mixtures. Careful documentation of our study protocol may also serve other researchers designing field-based intervention trials. PMID:24840349
ERIC Educational Resources Information Center
Neugebauer, Roger; Hartzell, Debra
2011-01-01
The year 2010 will not be remembered as a banner year for large for profit child care organizations. But it appears that heading into 2011, optimism has returned. This article presents the twenty-fourth annual status report on for profit child care organizations. In 2010, the total capacity of the three largest for profit chains in North America,…
Urata, Satoko; Kitagawa, Yasuhide; Matsuyama, Satoko; Naito, Renato; Yasuda, Kenji; Mizokami, Atsushi; Namiki, Mikio
2017-04-01
To optimize the rescreening schedule for men with low baseline prostate-specific antigen (PSA) levels, we evaluated men with baseline PSA levels of ≤1.0 ng/mL in PSA-based population screening. We enrolled 8086 men aged 55-69 years with baseline PSA levels of ≤1.0 ng/mL, who were screened annually. The relationships of baseline PSA and age with the cumulative risks and clinicopathological features of screening-detected cancer were investigated. Among the 8086 participants, 28 (0.35 %) and 18 (0.22 %) were diagnosed with prostate cancer and cancer with a Gleason score (GS) of ≥7 during the observation period, respectively. The cumulative probabilities of prostate cancer at 12 years were 0.42, 1.0, 3.4, and 4.3 % in men with baseline PSA levels of 0.0-0.4, 0.5-0.6, 0.7-0.8, and 0.9-1.0 ng/mL, respectively. Those with GS of ≥7 had cumulative probabilities of 0.42, 0.73, 2.8, and 1.9 %, respectively. The cumulative probabilities of prostate cancer were significantly lower when baseline PSA levels were 0.0-0.6 ng/mL compared with 0.7-1.0 ng/mL. Prostate cancer with a GS of ≥7 was not detected during the first 10 years of screening when baseline PSA levels were 0.0-0.6 ng/mL and was not detected during the first 2 years when baseline PSA levels were 0.7-1.0 ng/mL. Our study demonstrated that men with baseline PSA levels of 0.0-0.6 ng/mL might benefit from longer screening intervals than those recommended in the guidelines of the Japanese Urological Association. Further investigation is needed to confirm the optimal screening interval for men with low baseline PSA levels.
Cheng, Wen-Chang
2012-01-01
In this paper we propose a robust lane detection and tracking method by combining particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further refine the global optimal system status among all system statuses. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the food-seeking behaviour of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method can complete the lane detection and tracking more accurately and effectively than existing options. PMID:23235453
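The global-refinement step this abstract describes can be illustrated with a generic particle swarm optimizer. The sketch below minimizes a toy one-dimensional multimodal function rather than an actual lane-model fitness, and the swarm size, inertia weight, and acceleration coefficients are illustrative defaults, not the paper's settings:

```python
import math
import random

def pso_minimize(f, lo, hi, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization over the interval [lo, hi].
    Particles are pulled toward their own best-known position (pbest) and
    the swarm's best-known position (gbest); sharing gbest is what lets
    PSO escape the local optima a local tracker may settle into."""
    random.seed(0)  # deterministic demo
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    gbest = min(pos, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)  # clamp to bounds
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
            if f(pos[i]) < f(gbest):
                gbest = pos[i]
    return gbest

# Multimodal toy objective: quadratic bowl plus a sinusoidal ripple;
# best should land in the global basin near x ≈ 2.
best = pso_minimize(lambda x: (x - 2) ** 2 + 0.3 * math.sin(8 * x), -5, 5)
```

In the paper's setting, f would be the lane-model fitness evaluated on the input image, and the particle filter's estimate would seed the swarm instead of a uniform initialization.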
Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S
2016-11-01
In the present paper, the optimal detector design is investigated for both detecting and locating light atoms from high resolution scanning transmission electron microscopy (HR STEM) images. The principles of detection theory are used to quantify the probability of error for the detection of light atoms from HR STEM images. To determine the optimal experiment design for locating light atoms, use is made of the so-called Cramér-Rao Lower Bound (CRLB). It is investigated if a single optimal design can be found for both the detection and location problem of light atoms. Furthermore, the incoming electron dose is optimised for both research goals and it is shown that picometre range precision is feasible for the estimation of the atom positions when using an appropriate incoming electron dose under the optimal detector settings to detect light atoms. Copyright © 2016 Elsevier B.V. All rights reserved.
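As a toy illustration of how a Cramér-Rao-type bound links electron dose to position precision, one can use the textbook result that locating a 1-D Gaussian peak from N Poisson-counted events can be done no better than width/sqrt(N). The numbers (peak width, dose, detected fraction) are invented, and the full STEM image model in the paper is far richer than this stand-in.

```python
import numpy as np

def position_precision_bound(peak_width_pm, incoming_dose, detected_fraction):
    """CRLB-style lower bound on the std of the estimated position of a
    1-D Gaussian intensity peak under Poisson statistics:
    sigma >= width / sqrt(detected counts)."""
    n_detected = incoming_dose * detected_fraction
    return peak_width_pm / np.sqrt(n_detected)

# Hypothetical numbers: a 50 pm-wide atom-column peak, 1e4 incident
# electrons, 20% reaching the detector -> picometre-range precision.
sigma = position_precision_bound(50.0, 1e4, 0.2)
print(sigma)
```

The bound scales as one over the square root of the dose, which is why optimising the incoming electron dose and detector collection efficiency jointly matters for both detection and location.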
The use of magnetic resonance mammography in women at increased risk for developing breast cancer
Popiela, Tadeusz J.; Herman-Sucharska, Izabela; Urbanik, Andrzej
2012-01-01
Introduction The use of conventional imaging techniques, namely mammography (MMG) and ultrasound (US), for breast cancer (BC) detection in women at high risk for the disease does not bring optimal results in many cases. Aim The present study evaluated the effectiveness of magnetic resonance (MR) mammography (MRM) in cases where US and MMG failed to detect suspected breast lesions. Material and methods The study group consisted of 379 women who had had no breast pathologies detected by US and MMG. This group was then divided into 4 groups according to the relative risk of breast cancer development. All the women underwent MRM, and any breast pathology detected by MRM was then verified by open surgical biopsy (OSB). Results Based on the MRM findings, 37 women with breast pathologies were identified. All detected pathologies were then classified into one of the BIRADS (Breast Imaging Reporting and Data System) categories. Of these, 33 patients underwent open surgical biopsy. There were a total of 17 benign and 16 malignant breast pathologies that were not visualized by US and MMG. The types of malignancies found, in order of their frequency, were as follows: invasive ductal carcinoma (11 cases), ductal carcinoma in situ (2 cases), invasive lobular carcinoma (2 cases), and lobular carcinoma in situ (1 case). An analysis of MRM effectiveness in detecting BC showed 93.7% sensitivity and 64.71% specificity. Conclusions All women with a 20% or greater lifetime risk of developing BC should undergo annual MRM as a diagnostic adjunct to US and MMG. PMID:23630555
Optimal Inspection of Imports to Prevent Invasive Pest Introduction.
Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G
2018-03-01
The United States imports more than 1 billion live plants annually, an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
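The expected-slippage quantity can be illustrated with a Monte Carlo sketch under one plausible formalization (not necessarily the authors' exact formula): each unit is independently infested with probability p, each sampled infested unit is detected with probability d, a lot with any detection is rejected, and an accepted lot passes all of its infested units.

```python
import numpy as np

def expected_slippage_mc(lot_size, sample_size, infest_rate, detect_rate,
                         n_trials=4000, seed=1):
    """Monte Carlo estimate of accepted infested units per lot: the lot is
    rejected if any sampled infested unit is detected; otherwise every
    infested unit in the lot slips through."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_trials):
        infested = rng.random(lot_size) < infest_rate
        sample = rng.choice(lot_size, size=sample_size, replace=False)
        detected = rng.random(sample_size) < detect_rate
        if not np.any(infested[sample] & detected):
            total += infested.sum()
    return total / n_trials

# Larger samples from a 1000-unit lot (5% infested, 90% detection) cut slippage:
for n in (0, 10, 100):
    print(n, expected_slippage_mc(1000, n, 0.05, 0.9))
```

With no sampling the expected slippage is simply lot_size * infest_rate; the inspector's budget-allocation problem trades this reduction off across heterogeneous lots.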
NASA Astrophysics Data System (ADS)
McMurtrie, R. E.; Norby, R. J.; Näsholm, T.; Iversen, C.; Dewar, R. C.; Medlyn, B. E.
2011-12-01
Forest free-air CO2 enrichment (FACE) experiments have shown that annual nitrogen (N) uptake increases when trees are grown at elevated CO2 (eCO2) and that increased N uptake is critical for a sustained growth response to eCO2. Processes contributing to increased N uptake at eCO2 may include: accelerated decomposition of soil organic matter due to enhanced root carbon (C) exudation (so-called rhizosphere priming); increased C allocation to fine roots and increased root production at depth, both of which enhance N acquisition; differences in soil N availability with depth; changes in the abundance of N in chemical forms with differing mobility in soil; and reduced N concentrations, reduced maintenance respiration rates, and increased longevities of deeper roots. These processes have been synthesised in a model of annual N uptake in relation to the spatial distribution of roots. We hypothesise that fine roots are distributed spatially in order to maximise annual N uptake. The optimisation hypothesis leads to equations for the optimal vertical distribution of root biomass in relation to the distribution of available soil N and for maximum annual N uptake. We show how maximum N uptake and rooting depth are related to total root mass, and compare the optimal solution with an empirical function that has been fitted to root-distribution data from all terrestrial biomes. Finally, the model is used to explore the consequences of rhizosphere priming at eCO2 as observed at the Duke forest FACE experiment (Drake et al. 2011, Ecology Letters 14: 349-357) and of increasing N limitation over time as observed at the Oak Ridge FACE experiment (Norby et al. 2010, Proc. Nat. Acad. Sci. USA 107: 19368-19373).
Prochazka, Allan V; Lundahl, Kristy; Pearson, Wesley; Oboler, Sylvia K; Anderson, Robert J
2005-06-27
Current evidence does not support an annual screening physical examination for asymptomatic adults, but little is known about primary care provider (PCP) attitudes and practices regarding an annual physical examination. We conducted a postal survey (32 items) of attitudes and practices regarding the annual physical examination (in asymptomatic patients 18 years or older) of a random sample of PCPs (specializing in internal medicine, family practice, and obstetrics/gynecology) from 3 geographic areas (Boston, Mass; Denver, Colo; and San Diego, Calif). Respondents included 783 (47%) of 1679 PCPs. Overall, 430 (65%) of 664 agreed that an annual physical examination is necessary. Three hundred ninety-three (55%) of 712 disagreed with the statement that national organizations do not recommend an annual physical examination, and 641 (88%) of 726 perform such examinations. Most PCPs agreed that an annual physical examination provides time to counsel patients about preventive health services (696/739 [94%]), improves patient-physician relationships (693/737 [94%]), and is desired by most patients (572/737 [78%]). Most also believe that an annual physical examination improves detection of subclinical illness (545/738 [74%]) and is of proven value (461/736 [63%]). Many believed that tests should be part of an annual physical examination, including mammography (44%), a lipid panel (48%), urinalysis (44%), testing of blood glucose level (46%), and complete blood cell count (39%). Despite contrary evidence, most PCPs believe an annual physical examination detects subclinical illness, and many report performing unproven screening laboratory tests. Primary care providers do not appear to accept recommendations that annual physical examinations be abandoned in favor of a more selective approach to preventing health problems.
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2011-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.8 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
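The kind of power calculation described can be sketched by simulation: generate log-CPE data with year, site, and residual variance components, regress the annual means on year, and count how often a declining slope is detected. The variance components below are illustrative stand-ins, not the Lake Erie estimates from the hierarchical model.

```python
import numpy as np
from scipy import stats

def trend_power(n_sites=50, n_years=15, annual_decline=0.10,
                sd_year=0.3, sd_site=0.5, sd_resid=0.8,
                n_sims=500, alpha=0.05, seed=2):
    """Fraction of simulated surveys in which a declining log-CPE trend is
    detected by a one-sided test on the regression slope of annual means."""
    rng = np.random.default_rng(seed)
    years = np.arange(n_years, dtype=float)
    slope_true = np.log(1.0 - annual_decline)   # e.g. -0.105 for a 10% decline
    detected = 0
    for _ in range(n_sims):
        year_eff = rng.normal(0.0, sd_year, n_years)
        site_eff = rng.normal(0.0, sd_site, n_sites)
        resid = rng.normal(0.0, sd_resid, (n_years, n_sites))
        log_cpe = slope_true * years[:, None] + year_eff[:, None] + site_eff + resid
        fit = stats.linregress(years, log_cpe.mean(axis=1))
        if fit.slope < 0 and fit.pvalue / 2 < alpha:   # one-sided test
            detected += 1
    return detected / n_sims

print(trend_power(n_years=5), trend_power(n_years=15))
```

Consistent with the case study, power is poor over short horizons because the year-to-year variance component cannot be averaged away by adding sites, only by adding years.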
Baseline and annual repeat rounds of screening: implications for optimal regimens of screening.
Henschke, Claudia I; Salvatore, Mary; Cham, Matthew; Powell, Charles A; DiFabrizio, Larry; Flores, Raja; Kaufman, Andrew; Eber, Corey; Yip, Rowena; Yankelevitz, David F
2018-03-01
Differences in results of baseline and subsequent annual repeat rounds provide important information for optimising the regimen of screening. A prospective cohort study of 65,374 participants was reviewed to examine the frequency/percentages of the largest noncalcified nodule (NCN), lung cancer cell types and Kaplan-Meier (K-M) survival rates, separately for baseline and annual rounds. Of 65,374 baseline screenings, NCNs were identified in 28,279 (43.3%); lung cancer in 737 (1.1%). Of 74,482 annual repeat screenings, new NCNs were identified in 4959 (7%); lung cancer in 179 (0.24%). Only adenocarcinoma was diagnosed in subsolid NCNs. Percentages of lung cancers by cell type were significantly different (p < 0.0001) in the baseline round compared with annual rounds, reflecting length bias, as were the ratios, reflecting lead times. Long-term K-M survival rate was 100% for typical carcinoids and for adenocarcinomas manifesting as subsolid NCNs; 85% (95% CI 81-89%) for adenocarcinoma, 74% (95% CI 63-85%) for squamous cell, 48% (95% CI 34-62%) for small cell. The rank ordering by lead time was the same as the rank ordering by survival rates. The significant differences in the frequency of NCNs and the frequency and aggressiveness of diagnosed cancers in baseline and annual repeat rounds need to be recognised for an optimal regimen of screening. • Lung cancer aggressiveness varies considerably by cell type and nodule consistency. • Kaplan-Meier survival rates varied by cell type between 100% and 48%. • The percentages of lung cancers by cell type in screening rounds reflect screening biases. • Rank ordering by cell type survival is consistent with that by lead times. • Empirical evidence provides critical information for the regimen of screening.
USDA-ARS?s Scientific Manuscript database
Nitrogen fertilizer is critical to optimize short-term crop yield, but its long-term effect on soil organic C (SOC) is actively debated. Using 60 site-years of maize (Zea mays L.) yield response to a wide range of N fertilizer rates in continuous maize and annually rotated maize-soybean [Glycine max...
Results from the NIST-EPA Interagency Agreement on Measurements and Standards in Aerosol Carbon: Sampling Regional PM2.5 for the Chemometric Optimization of Thermal-Optical Analysis Study will be presented at the American Association for Aerosol Research (AAAR) 24th Annual Confer...
The Army Communications Objectives Measurement System (ACOMS): Survey Design
1988-04-01
monthly basis so that the annual sample includes sufficient Hispanics to detect at the .80 power level: (1) Year-to-year changes of 3% in item...Hispanics. The requirements are listed in terms of power level and must be translated into requisite sample sizes. The requirements are expressed as the...annual samples needed to detect certain differences at the 80% power level. Differences in both directions are to be examined, so that a two-tailed
2015-10-01
Annual Report, October 2015. Prepared for: U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland 21702. Dates covered: 30 Sep 2014 to 29 Sep 2015. Title: Detection and Elimination of Oncogenic Signaling Networks in Premalignant and Malignant Cells with
Network anomaly detection system with optimized DS evidence theory.
Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu
2014-01-01
Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and Dempster-Shafer (DS) evidence theory to network anomaly detection, but with low performance, because they did not account for the complicated and varied nature of network features. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we assign each sensor a weight based on its previous prediction accuracy to optimize DS evidence theory, and RBPA employs each sensor's regression ability to handle complex network traffic. Across four kinds of experiments, we find that our network anomaly detection model achieves a better detection rate and that both the RBPA and ODS optimization methods improve system performance significantly.
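A minimal sketch of the two ingredients, Dempster's rule over a two-element frame {normal, anomaly} and a reliability discount standing in for the per-sensor weights, might look as follows. Discounting toward ignorance is one standard way to encode sensor weights; it is not necessarily the ODS formulation.

```python
def discount(m, w):
    """Reliability-discount a BPA over {'N','A','NA'}: weight w in [0,1]
    shifts mass toward ignorance ('NA'), the whole frame."""
    return {'N': w * m['N'], 'A': w * m['A'], 'NA': 1.0 - w * (m['N'] + m['A'])}

def dempster_combine(m1, m2):
    """Dempster's rule for BPAs with focal elements 'N' (normal),
    'A' (anomaly), and 'NA' (ignorance)."""
    conflict = m1['N'] * m2['A'] + m1['A'] * m2['N']
    k = 1.0 - conflict  # normalization constant
    return {
        'N': (m1['N'] * m2['N'] + m1['N'] * m2['NA'] + m1['NA'] * m2['N']) / k,
        'A': (m1['A'] * m2['A'] + m1['A'] * m2['NA'] + m1['NA'] * m2['A']) / k,
        'NA': (m1['NA'] * m2['NA']) / k,
    }

# A historically accurate sensor flags an anomaly; a weaker sensor disagrees.
s1 = discount({'N': 0.1, 'A': 0.8, 'NA': 0.1}, w=0.9)
s2 = discount({'N': 0.6, 'A': 0.3, 'NA': 0.1}, w=0.5)
fused = dempster_combine(s1, s2)
print(fused)  # the more reliable sensor dominates: mass on 'A' exceeds 'N'
```

Because the weaker sensor's evidence is heavily discounted, the fused belief follows the sensor with the better track record, which is the intuition behind weighting sensors by past prediction accuracy.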
Military Free Fall Scheduling And Manifest Optimization Model
2016-12-01
The annual blade hour cost, which consists of fuel, maintenance, and personnel, is $5.6M for FY-16; aircraft sit on the tarmac with engines running while waiting for the next student load (J. Enke, personal communication, 2016).
Prospects for detection of target-dependent annual modulation in direct dark matter searches
Nobile, Eugenio Del; Gelmini, Graciela B.; Witte, Samuel J.
2016-02-03
Earth's rotation about the Sun produces an annual modulation in the expected scattering rate at direct dark matter detection experiments. The annual modulation as a function of the recoil energy E_R imparted by the dark matter particle to a target nucleus is expected to vary depending on the detector material. However, for most interactions a change of variables from E_R to v_min, the minimum speed a dark matter particle must have to impart a fixed E_R to a target nucleus, produces an annual modulation independent of the target element. We recently showed that if the dark matter-nucleus cross section contains a non-factorizable target and dark matter velocity dependence, the annual modulation as a function of v_min can be target dependent. Here we examine more extensively the necessary conditions for target-dependent modulation, its observability in present-day experiments, and the extent to which putative signals could identify a dark matter-nucleus differential cross section with a non-factorizable dependence on the dark matter velocity.
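The change of variables is standard elastic-scattering kinematics: v_min = sqrt(m_T E_R / 2) / mu, with m_T the target nucleus mass and mu the dark matter-nucleus reduced mass. A short numeric sketch shows why a fixed v_min maps to very different recoil energies in light versus heavy targets; the mass numbers and dark matter mass below are arbitrary examples.

```python
import numpy as np

AMU_GEV = 0.9315       # atomic mass unit in GeV
C_KM_S = 299792.458    # speed of light, km/s

def vmin_km_s(E_R_keV, mass_number, m_chi_GeV):
    """v_min = sqrt(m_T * E_R / 2) / mu for elastic DM-nucleus scattering."""
    m_T = mass_number * AMU_GEV
    mu = m_T * m_chi_GeV / (m_T + m_chi_GeV)   # reduced mass, GeV
    E_R = E_R_keV * 1e-6                       # keV -> GeV
    return np.sqrt(m_T * E_R / 2.0) / mu * C_KM_S

# A 2 keV recoil from a 10 GeV DM particle requires a much faster particle
# on xenon (A = 131) than on sodium (A = 23):
print('Na', vmin_km_s(2.0, 23, 10.0))
print('Xe', vmin_km_s(2.0, 131, 10.0))
```

Expressed in v_min, the modulation of most interactions collapses onto a single target-independent curve, which is why a residual target dependence in v_min space would signal a non-factorizable velocity dependence.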
Final results of Borexino Phase-I on low-energy solar neutrino spectroscopy
NASA Astrophysics Data System (ADS)
Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Bravo, D.; Buizza Avanzini, M.; Caccianiga, B.; Cadonati, L.; Calaprice, F.; Cavalcante, P.; Chavarria, A.; Chepurnov, A.; D'Angelo, D.; Davini, S.; Derbin, A.; Empl, A.; Etenko, A.; Fomenko, K.; Franco, D.; Gabriele, F.; Galbiati, C.; Gazzana, S.; Ghiano, C.; Giammarchi, M.; Göger-Neff, M.; Goretti, A.; Grandi, L.; Gromov, M.; Hagner, C.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Lewke, T.; Litvinovich, E.; Loer, B.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Maneschg, W.; Manuzio, G.; Meindl, Q.; Meroni, E.; Miramonti, L.; Misiaszek, M.; Montuschi, M.; Mosteiro, P.; Muratova, V.; Oberauer, L.; Obolensky, M.; Ortica, F.; Otis, K.; Pallavicini, M.; Papp, L.; Pena-Garay, C.; Perasso, L.; Perasso, S.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Rossi, N.; Saldanha, R.; Salvo, C.; Schönert, S.; Simgen, H.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Vignaud, D.; Vogelaar, R. B.; von Feilitzsch, F.; Winter, J.; Wojcik, M.; Wright, A.; Wurm, M.; Xu, J.; Zaimidoroga, O.; Zavatarelli, S.; Zuzel, G.; Borexino Collaboration
2014-06-01
Borexino has been running since May 2007 at the Laboratori Nazionali del Gran Sasso laboratory in Italy with the primary goal of detecting solar neutrinos. The detector, a large, unsegmented liquid scintillator calorimeter characterized by unprecedented low levels of intrinsic radioactivity, is optimized for the study of the lower energy part of the spectrum. During Phase-I (2007-2010), Borexino first detected and then precisely measured the flux of the Be7 solar neutrinos, ruled out any significant day-night asymmetry of their interaction rate, made the first direct observation of the pep neutrinos, and set the tightest upper limit on the flux of solar neutrinos produced in the CNO cycle, in which carbon, nitrogen, and oxygen serve as catalysts in the fusion process. In this paper we discuss the signal signature and provide a comprehensive description of the backgrounds, quantify their event rates, describe the methods for their identification, selection, or subtraction, and describe data analysis. Key features are an extensive in situ calibration program using radioactive sources, the detailed modeling of the detector response, the ability to define an innermost fiducial volume with extremely low background via software cuts, and the excellent pulse-shape discrimination capability of the scintillator that allows particle identification. We report a measurement of the annual modulation of the Be7 neutrino interaction rate. The period, the amplitude, and the phase of the observed modulation are consistent with the solar origin of these events, and the absence of their annual modulation is rejected with higher than 99% C.L. The physics implications of Phase-I results in the context of the neutrino oscillation physics and solar models are presented.
Constraining Night Time Ecosystem Respiration by Inverse Approaches
NASA Astrophysics Data System (ADS)
Juang, J.; Stoy, P. C.; Siqueira, M. B.; Katul, G. G.
2004-12-01
Estimating nighttime ecosystem respiration remains a key challenge in quantifying ecosystem carbon budgets. Currently, nighttime eddy-covariance (EC) flux measurements are plagued by uncertainties often attributed to poor mixing within the canopy volume, non-turbulent transport of CO2 into and out of the canopy, and non-stationarity and intermittency. Here, we explore the use of second-order closure models to estimate nighttime ecosystem respiration by mathematically linking sources of CO2 to mean concentration profiles via the continuity and the CO2 flux budget equation modified to include thermal stratification. By forcing this model to match, in a root-mean-square sense, the nighttime measured mean CO2 concentration profiles within the canopy, the above-ground CO2 production and forest floor respiration can be estimated via multi-dimensional optimization techniques. We show that in a maturing pine and a mature hardwood forest, these optimized CO2 sources are (1) consistently larger than the eddy covariance flux measurements above the canopy, and (2) agree well with chamber-based measurements. We also show that by linking the optimized nighttime ecosystem respiration to temperature measurements, the estimated annual ecosystem respiration from this approach agrees well with biometric estimates, at least when compared to eddy-covariance methods conditioned on a friction velocity threshold. The difference between the annual ecosystem respiration obtained by this optimization method and the friction-velocity thresholded night-time EC fluxes can be as large as 700 g C m-2 (in 2003) for the maturing pine forest, which is about 40% of the ecosystem respiration. For 2001 and 2002, the annual ecosystem respiration differences between the EC-based and the proposed approach were on the order of 300 to 400 g C m-2.
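The inversion step can be caricatured as a linear source-to-concentration problem: a dispersion operator D maps layer-wise CO2 sources to mean concentration rises at the measurement heights, and the sources are recovered by non-negative least squares. The kernel, heights, and source strengths below are synthetic; in the study the operator comes from the second-order closure model, not an exponential kernel.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Hypothetical dispersion operator D: concentration rise at each measurement
# height per unit source strength in each canopy layer (made-up smooth kernel).
z_meas = np.linspace(1.0, 20.0, 8)    # measurement heights, m
z_src = np.linspace(0.5, 15.0, 5)     # source layer heights, m
D = np.exp(-np.abs(z_meas[:, None] - z_src[None, :]) / 5.0)

true_sources = np.array([2.0, 0.8, 0.5, 0.3, 0.2])            # synthetic
conc = D @ true_sources + rng.normal(0.0, 0.01, z_meas.size)  # "measured"

# Recover sources by non-negative least squares (keeps them physical).
est_sources, resid_norm = nnls(D, conc)
print(est_sources, resid_norm)
```

Matching the measured concentration profile in a least-squares sense is exactly the optimization the abstract describes, with the closure model providing the (nonlinear, stratification-dependent) forward map in place of this toy matrix.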
Modeling urban air pollution with optimized hierarchical fuzzy inference system.
Tashayo, Behnam; Alimohammadi, Abbas
2016-10-01
Environmental exposure assessments (EEA) and epidemiological studies require urban air pollution models with appropriate spatial and temporal resolutions. Uncertain available data and inflexible models can limit air pollution modeling techniques, particularly in developing countries. This paper develops a hierarchical fuzzy inference system (HFIS) to model air pollution under different land use, transportation, and meteorological conditions. To improve performance, the system treats the issue as a large-scale and high-dimensional problem and develops the proposed model using a three-step approach. In the first step, a geospatial information system (GIS) and probabilistic methods are used to preprocess the data. In the second step, a hierarchical structure is generated based on the problem. In the third step, the accuracy and complexity of the model are simultaneously optimized with a multiple objective particle swarm optimization (MOPSO) algorithm. We examine the capabilities of the proposed model for predicting daily and annual mean PM2.5 and NO2 and compare the accuracy of the results with representative models from existing literature. The benefits provided by the model features, including probabilistic preprocessing, multi-objective optimization, and hierarchical structure, are precisely evaluated by comparing five different consecutive models in terms of accuracy and complexity criteria. Fivefold cross validation is used to assess the performance of the generated models. The respective average RMSEs and coefficients of determination (R²) for the test datasets using the proposed model are as follows: daily PM2.5 = (8.13, 0.78), annual mean PM2.5 = (4.96, 0.80), daily NO2 = (5.63, 0.79), and annual mean NO2 = (2.89, 0.83). The obtained results demonstrate that the developed hierarchical fuzzy inference system can be utilized for modeling air pollution in EEA and epidemiological studies.
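The fivefold cross-validation protocol used to report the (RMSE, R²) pairs can be sketched generically. The regressor below is a plain least-squares stand-in on synthetic data, since the HFIS/MOPSO machinery itself is beyond a short example; only the fold-splitting and scoring logic is the point.

```python
import numpy as np

def kfold_rmse_r2(X, y, fit, predict, k=5, seed=4):
    """Average RMSE and R^2 over k CV folds for a generic fit/predict pair."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    rmses, r2s = [], []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        err = y[test] - predict(model, X[test])
        rmses.append(np.sqrt(np.mean(err ** 2)))
        r2s.append(1.0 - np.sum(err ** 2) / np.sum((y[test] - y[test].mean()) ** 2))
    return float(np.mean(rmses)), float(np.mean(r2s))

# Stand-in regressor: ordinary least squares with an intercept column.
ols_fit = lambda Xt, yt: np.linalg.lstsq(np.c_[Xt, np.ones(len(yt))], yt, rcond=None)[0]
ols_predict = lambda w, Xt: np.c_[Xt, np.ones(len(Xt))] @ w

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 4))                                   # synthetic predictors
y = X @ np.array([3.0, -1.0, 0.5, 0.0]) + rng.normal(0.0, 1.0, 300)
rmse, r2 = kfold_rmse_r2(X, y, ols_fit, ols_predict)
print(rmse, r2)
```

Swapping the `fit`/`predict` pair for any of the five compared models lets the same harness produce comparable accuracy scores, which is presumably how the paper's consecutive-model comparison is organized.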
The use of least squares methods in functional optimization of energy use prediction models
NASA Astrophysics Data System (ADS)
Bourisli, Raed I.; Al-Shammeri, Basma S.; AlAnzi, Adnan A.
2012-06-01
The least squares method (LSM) is used to optimize the coefficients of a closed-form correlation that predicts the annual energy use of buildings based on key envelope design and thermal parameters. Specifically, annual energy use is related to a number of parameters, such as the overall heat transfer coefficients of the wall, roof and glazing, glazing percentage, and building surface area. The building used as a case study is a previously energy-audited mosque in a suburb of Kuwait City, Kuwait. Energy audit results are used to fine-tune the base case mosque model in the VisualDOE™ software. Subsequently, 1625 different cases of mosques with varying parameters were developed and simulated in order to provide the training data sets for the LSM optimizer. Coefficients of the proposed correlation are then optimized using multivariate least squares analysis. The objective is to minimize the difference between the correlation-predicted results and the VisualDOE-simulation results. The optimized coefficients reduce the difference between the simulated and predicted results to about 0.81%. In terms of the effects of the various parameters, the newly-defined weighted surface area parameter was found to have the greatest effect on the normalized annual energy use. Insulating the roofs and walls also had a major effect on the building energy use. The proposed correlation and methodology can be used during preliminary design stages to inexpensively assess the impacts of various design variables on the expected energy use. On the other hand, the method can also be used by municipality officials and planners as a tool for recommending energy conservation measures and fine-tuning energy codes.
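The core step, multivariate least squares on simulation-generated training cases, can be sketched as follows. The parameter ranges, the "true" coefficients, and the linear form of the correlation are all invented for illustration; the real training targets came from the 1625 VisualDOE runs and the paper's correlation need not be linear in these exact terms.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1625  # same count as the simulated mosque cases

# Invented parameter ranges (U-values in W/m2K, glazing %, weighted area m2).
U_wall = rng.uniform(0.3, 2.5, n)
U_roof = rng.uniform(0.2, 2.0, n)
U_glz = rng.uniform(1.5, 6.0, n)
glz_pct = rng.uniform(5.0, 40.0, n)
area_w = rng.uniform(200.0, 2000.0, n)

# Stand-in for the VisualDOE outputs: a known linear correlation plus noise.
E = (40.0 * U_wall + 55.0 * U_roof + 12.0 * U_glz
     + 3.0 * glz_pct + 0.08 * area_w + 150.0 + rng.normal(0.0, 5.0, n))

# Multivariate least squares recovers the correlation coefficients.
A = np.column_stack([U_wall, U_roof, U_glz, glz_pct, area_w, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, E, rcond=None)
mean_pct_err = 100.0 * np.mean(np.abs(A @ coef - E) / E)
print(coef.round(3), round(mean_pct_err, 2))
```

The reported ~0.81% agreement corresponds to exactly this kind of mean percentage difference between correlation predictions and the simulation results used for training.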
Effects on noise properties of GPS time series caused by higher-order ionospheric corrections
NASA Astrophysics Data System (ADS)
Jiang, Weiping; Deng, Liansheng; Li, Zhao; Zhou, Xiaohui; Liu, Hongfei
2014-04-01
Higher-order ionospheric (HOI) effects are one of the principal technique-specific error sources in precise global positioning system (GPS) analysis. These effects also influence the non-linear characteristics of GPS coordinate time series. In this paper, we investigate these effects on coordinate time series in terms of seasonal variations and noise amplitudes. Both power spectral techniques and maximum likelihood estimators (MLE) are used to evaluate these effects quantitatively and qualitatively. Our results show an overall improvement for the analysis of global sites if HOI effects are considered. We note that the noise spectral index that is used for the determination of the optimal noise models in our analysis ranged between -1 and 0 both with and without HOI corrections, implying that the coloured noise cannot be removed by these corrections. However, the corrections were found to have improved noise properties for global sites. After the corrections were applied, the noise amplitudes at most sites decreased, among which the white noise amplitudes decreased remarkably. The white noise amplitudes of up to 81.8% of the selected sites decreased in the up component, and the flicker noise of 67.5% of the sites decreased in the north component. Stacked periodogram results show that, no matter whether the HOI effects are considered or not, a common fundamental period of 1.04 cycles per year (cpy), together with the expected annual and semi-annual signals, can explain all peaks of the north and up components well. For the east component, however, reasonable results can be obtained only based on HOI corrections. HOI corrections are useful for better detecting the periodic signals in GPS coordinate time series. Moreover, the corrections contributed partly to the seasonal variations of the selected sites, especially for the up component. 
Statistically, HOI corrections reduced more than 50% and more than 65% of the annual and semi-annual amplitudes respectively at the selected sites.
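Detecting the annual, semi-annual, and 1.04 cpy signals in a coordinate time series amounts to a least-squares harmonic fit. The series below is synthetic (amplitudes and the white-noise level are arbitrary), but fitting all three lines jointly in one design matrix is the standard way to separate the nearby 1.00 and 1.04 cpy frequencies over a finite span.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0.0, 10.0, 1.0 / 365.25)   # 10 years of daily solutions, in years

# Synthetic "up" series (mm): annual + semi-annual + a 1.04 cpy harmonic
# plus white noise; all amplitudes are illustrative, not GPS estimates.
y = (5.0 * np.cos(2 * np.pi * 1.00 * t)
     + 2.0 * np.sin(2 * np.pi * 2.00 * t)
     + 1.5 * np.cos(2 * np.pi * 1.04 * t)
     + rng.normal(0.0, 3.0, t.size))

# Joint least-squares fit of cosine/sine pairs at all three frequencies.
freqs = [1.00, 2.00, 1.04]
cols = [np.ones(t.size)]
for f in freqs:
    cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
A = np.column_stack(cols)
c, *_ = np.linalg.lstsq(A, y, rcond=None)
amps = {f: float(np.hypot(c[1 + 2 * i], c[2 + 2 * i])) for i, f in enumerate(freqs)}
print(amps)
```

Fitting each line separately would let the 1.00 and 1.04 cpy components leak into one another (their basis functions are strongly correlated over 10 years); the joint fit attributes the variance correctly, which mirrors how seasonal and draconitic-type terms are separated in GPS series analysis.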
2015-08-31
Annual Report, Year 2, August 31, 2015. Prepared for: U.S. Army Medical Research and Materiel Command, Fort Detrick. Dates covered: August 01, 2014 to July 31, 2015. Title: Radiolabeled Exosomes for the Early Detection of
ERIC Educational Resources Information Center
Wollmer, Richard D.; Bond, Nicholas A.
Two computer-assisted instruction programs were written in electronics and trigonometry to test the Wollmer Markov Model for optimizing hierarchical learning; calibration samples totalling 110 students completed these programs. Since the model postulated that transfer effects would be a function of the amount of practice, half of the students were…
46 CFR 71.25-20 - Fire-detecting and extinguishing equipment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 3 2010-10-01 2010-10-01 false Fire-detecting and extinguishing equipment. 71.25-20... INSPECTION AND CERTIFICATION Annual Inspection § 71.25-20 Fire-detecting and extinguishing equipment. (a) At... detecting and extinguishing equipment: (1) All hand portable fire extinguishers and semiportable fire...
An 11-Year Surveillance of HIV Type 1 Subtypes in Nagoya, Japan.
Fujisaki, Seiichiro; Ibe, Shiro; Hattori, Junko; Shigemi, Urara; Fujisaki, Saeko; Shimizu, Kayoko; Nakamura, Kazuyo; Yokomaku, Yoshiyuki; Mamiya, Naoto; Utsumi, Makoto; Hamaguchi, Motohiro; Kaneda, Tsuguhiro
2009-01-01
To monitor active HIV-1 transmission in Nagoya, Japan, we have been determining the subtypes of HIV-1 infecting therapy-naive individuals who have newly visited the Nagoya Medical Center since 1997. The subtypes were determined by phylogenetic analyses using the base sequences in three regions of the HIV-1 genes, including gag p17, pol protease (PR) and reverse transcriptase (RT), and env C2V3. Almost all HIV-1 subtypes from 1997 to 2007 and 93% of all HIV-1 isolates in 2007 were subtype B. HIV-1 subtypes A, C, D, and F have been detected sporadically since 1997, almost all in Africans and South Americans. The first detected circulating recombinant form (CRF) was CRF01_AE (11-year average annual detection rate, 7.7%). Only two cases of CRF02_AG were detected, in 2006. A unique recombinant form (URF) was first detected in 1998, and the total number of URFs reached 25 by 2007 (average annual detection rate, 4.7%). Eleven of these 25 were detected from 2000 to 2005 and had subtypes AE/B/AE as determined by base sequencing of the gag p17, pol PR and RT, and env C2V3 genes (average annual detection rate, 3.7%). Unique subtype B has been detected in six cases since 2006. All 17 of these patients were Japanese. Other recombinant HIV-1s have been detected intermittently in eight cases since 1998. During the 11-year surveillance, most HIV-1s in Nagoya, Japan were of subtype B. We expect that subtype B HIV-1 will continue to predominate for the next several years. Active recombination between subtype B and CRF01_AE HIV-1 and its transmission were also shown.
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
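The threshold-selection loop described above (fit a Generalized Pareto to exceedances, bootstrap the estimator at each candidate threshold, keep the threshold with the smallest spread) can be sketched with numpy only. The method-of-moments GPD fit and the variance-based criterion below are simplified stand-ins for the Adapted Hill estimator and the semi-parametric bootstrap MSE of the paper, and the rainfall data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def gpd_moment_fit(excesses):
    """Method-of-moments estimates of the Generalized Pareto shape (xi)
    and scale (sigma) from threshold excesses."""
    m, v = excesses.mean(), excesses.var()
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

def select_threshold(data, candidates, n_boot=200):
    """Pick the threshold whose bootstrap spread of the GPD shape
    estimate is smallest (a simplified stand-in for the bootstrap
    MSE criterion described in the abstract)."""
    best_u, best_mse = None, np.inf
    for u in candidates:
        exc = data[data > u] - u
        if len(exc) < 30:              # too few exceedances to fit
            continue
        xis = [gpd_moment_fit(rng.choice(exc, size=len(exc)))[0]
               for _ in range(n_boot)]
        mse = np.var(xis)
        if mse < best_mse:
            best_u, best_mse = u, mse
    return best_u

rain = rng.exponential(scale=10.0, size=3000)   # synthetic daily rainfall
u_opt = select_threshold(rain, candidates=np.percentile(rain, [80, 90, 95]))
```

A production analysis would use maximum-likelihood or L-moment GPD fits and add the bias term to the MSE, as the authors did.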
Anomaly detection for analysis of annual inventory data: a quality control approach
Francis A. Roesch; Paul C. Van Deusen
2010-01-01
Annual forest inventories present special challenges and opportunities for those analyzing the data arising from them. Here, we address one question currently being asked by analysts of the US Forest Service's Forest Inventory and Analysis Program's quickly accumulating annual inventory data. The question is simple but profound: When combining the next year's data for...
Multidisciplinary Analysis and Optimization Generation 1 and Next Steps
NASA Technical Reports Server (NTRS)
Naiman, Cynthia Gutierrez
2008-01-01
The Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed three major milestones during Fiscal Year (FY)08: "Requirements Definition" Milestone (1/31/08); "GEN 1 Integrated Multi-disciplinary Toolset" (Annual Performance Goal) (6/30/08); and "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" Milestone (9/30/08). Details of all three milestones are explained, including documentation available, potential partner collaborations, and next steps in FY09.
Residential solar-heating system uses pyramidal optics
NASA Technical Reports Server (NTRS)
1981-01-01
Report describes reflective panels which optimize annual solar energy collection in attic installation. Subunits include collection, storage, distribution, and 4-mode control systems. Pyramid optical system heats single-family and multi-family dwellings.
Optimal design of reverse osmosis module networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maskan, F.; Wiley, D.E.; Johnston, L.P.M.
2000-05-01
The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, capital cost for the process units, and operating costs associated with energy consumption and maintenance. Optimization of several dual-stage reverse osmosis systems was investigated and compared. It was found that optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
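The objective described above (annual profit = permeate revenue minus capital and energy costs) can be sketched as a toy single-stage model with a crude grid search over the operating variables. All parameter values, the simple flux law, and the 50%-recovery pumping assumption are illustrative, not the paper's model:

```python
import numpy as np

def annual_profit(pressure_bar, area_m2,
                  permeability=1.0e-2,  # m3/(m2.h.bar), assumed value
                  osmotic_bar=25.0,     # feed osmotic pressure, assumed
                  water_price=0.5,      # $/m3 permeate, assumed
                  capital_per_m2=20.0,  # annualized membrane cost, $/m2/yr
                  energy_price=0.08):   # $/kWh, assumed
    """Toy profit model: permeate revenue minus annualized capital and
    pumping energy (feed flow taken as 2x permeate, i.e. 50% recovery)."""
    flux = permeability * max(pressure_bar - osmotic_bar, 0.0)  # m3/(m2.h)
    permeate = flux * area_m2 * 8760.0                          # m3/yr
    kwh_per_m3 = 2.0 * pressure_bar * 1.0e5 / 3.6e6            # pump energy
    return (water_price * permeate
            - capital_per_m2 * area_m2
            - energy_price * kwh_per_m3 * permeate)

# crude grid search over the two operating variables
pressures = np.linspace(30.0, 80.0, 51)
areas = np.linspace(100.0, 5000.0, 50)
P, A = np.meshgrid(pressures, areas)
profit = np.vectorize(annual_profit)(P, A)
i, j = np.unravel_index(np.argmax(profit), profit.shape)
best_pressure, best_area, best_profit = P[i, j], A[i, j], profit[i, j]
```

In this toy, profit is concave in pressure (higher pressure raises both flux and pumping cost), so an interior optimum appears; the paper's formulation instead solved a constrained multivariable nonlinear program over the full network.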
Selecting a proper design period for heliostat field layout optimization using Campo code
NASA Astrophysics Data System (ADS)
Saghafifar, Mohammad; Gadalla, Mohamed
2016-09-01
In this paper, different approaches are considered to calculate the cosine factor, which is utilized in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Furthermore, three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined by considering instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected. For the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization. Consequently, it is proposed to treat the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important, so it is more reliable to select one of the recommended time-averaged methods to optimize the field layout. Optimum annual weighted efficiencies for the heliostat fields (small, medium, and large) containing 350, 1460, and 3450 mirrors are 66.14%, 60.87%, and 54.04%, respectively.
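The cosine factor discussed above has a simple geometric form: a heliostat's mirror normal must bisect the sun vector and the heliostat-to-receiver vector, so the factor is cos(θ/2) where θ is the angle between those two directions. A hedged sketch of the instantaneous versus time-averaged calculation, with arbitrary assumed vectors rather than a real solar-position model:

```python
import numpy as np

def cosine_factor(sun_vec, helio_to_receiver):
    """Cosine efficiency of a heliostat: the mirror normal bisects the sun
    vector and the heliostat-to-receiver vector, giving cos(theta/2)."""
    s = sun_vec / np.linalg.norm(sun_vec)
    r = helio_to_receiver / np.linalg.norm(helio_to_receiver)
    theta = np.arccos(np.clip(np.dot(s, r), -1.0, 1.0))
    return np.cos(theta / 2.0)

# instantaneous factor at a single assumed design instant
noon_sun = np.array([0.0, 0.5, 0.866])   # assumed unit solar vector
tower = np.array([0.0, -0.3, 0.954])     # assumed heliostat-to-receiver vector
inst = cosine_factor(noon_sun, tower)

# time-averaged factor over several assumed sun positions (e.g. one day)
sun_path = [np.array([np.sin(h), 0.5, np.cos(h)])
            for h in np.linspace(-1.0, 1.0, 9)]
avg = np.mean([cosine_factor(s, tower) for s in sun_path])
```

The paper's question is essentially which of these two numbers (an instantaneous snapshot or an average over a design period) should drive the layout expansion for a given field size.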
Multiple Detector Optimization for Hidden Radiation Source Detection
2015-03-26
important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the...process of hidden source detection significantly. The model focused on detection of the full energy peak of a radiation source. Methods to optimize... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo n-Particle (MCNP) code
Modeling Limited Foresight in Water Management Systems
NASA Astrophysics Data System (ADS)
Howitt, R.
2005-12-01
The inability to forecast future water supplies means that their management inevitably occurs under situations of limited foresight. Three modeling problems arise: first, what type of objective function is a manager with limited foresight optimizing? Second, how can we measure these objectives? Third, can objective functions that incorporate uncertainty be integrated within the structure of optimizing water management models? The paper reviews the concepts of relative risk aversion and intertemporal substitution that underlie stochastic dynamic preference functions. Some initial results from the estimation of such functions for four different dam operations in northern California are presented and discussed. It appears that the path of previous water decisions and states influences the decision-makers' willingness to trade off water supplies between periods. A compromise modeling approach that incorporates carry-over value functions under limited foresight within a broader network optimal water management model is developed. The approach uses annual carry-over value functions derived from small-dimension stochastic dynamic programs embedded within a larger-dimension water allocation network. The disaggregation of the carry-over value functions to the broader network is extended using the space rule concept. Initial results suggest that the solution of such annual nonlinear network optimizations is comparable to, or faster than, the solution of linear network problems over long time series.
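The carry-over value functions mentioned above come from small stochastic dynamic programs: the value of ending the year with storage s is the expected discounted benefit of all future operations starting from s. A minimal reservoir SDP sketch (all numbers illustrative, not the paper's calibration):

```python
import numpy as np

# state = carry-over storage, decision = annual release, stochastic inflow,
# concave benefit from release; value iteration yields the carry-over
# value function V(s).
storages = np.arange(0, 11)        # storage grid (units arbitrary)
inflows = np.array([2, 4, 6])      # equally likely annual inflows, assumed
capacity = 10
beta = 0.9                         # annual discount factor, assumed

V = np.zeros(len(storages))
for _ in range(50):                # value iteration to (near) convergence
    V_new = np.empty_like(V)
    for i, s in enumerate(storages):
        best = -np.inf
        for release in range(0, s + 1):
            benefit = np.sqrt(release)        # concave water-supply benefit
            # expectation over inflows; water above capacity is spilled
            future = np.mean([V[min(s - release + q, capacity)]
                              for q in inflows])
            best = max(best, benefit + beta * future)
        V_new[i] = best
    V = V_new
```

In the paper's compromise approach, a function like V(s) is embedded as the terminal value of storage inside a much larger annual network allocation model, rather than solving the full high-dimensional SDP.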
Optimization of a middle atmosphere diagnostic scheme
NASA Astrophysics Data System (ADS)
Akmaev, Rashid A.
1997-06-01
A new assimilative diagnostic scheme based on the use of a spectral model was recently tested on the CIRA-86 empirical model. It reproduced the observed climatology with an annual global rms temperature deviation of 3.2 K in the 15-110 km layer. The most important new component of the scheme is that the zonal forcing necessary to maintain the observed climatology is diagnosed from empirical data and subsequently substituted into the simulation model at the prognostic stage of the calculation in an annual cycle mode. The simulation results are then quantitatively compared with the empirical model, and the above mentioned rms temperature deviation provides an objective measure of the `distance' between the two climatologies. This quantitative criterion makes it possible to apply standard optimization procedures to the whole diagnostic scheme and/or the model itself. The estimates of the zonal drag have been improved in this study by introducing a nudging (Newtonian-cooling) term into the thermodynamic equation at the diagnostic stage. A proper optimal adjustment of the strength of this term makes it possible to further reduce the rms temperature deviation of simulations down to approximately 2.7 K. These results suggest that direct optimization can successfully be applied to atmospheric model parameter identification problems of moderate dimensionality.
NASA Astrophysics Data System (ADS)
Truchelut, R.; Hart, R. E.
2013-12-01
While a number of research groups offer quantitative pre-seasonal assessments of aggregate annual Atlantic Basin tropical cyclone activity, the literature is comparatively thin concerning methods to meaningfully quantify seasonal U.S. landfall risks. As the example of Hurricane Andrew impacting Southeast Florida in the otherwise quiet 1992 season demonstrates, an accurate probabilistic assessment of seasonal tropical cyclone threat levels would be of immense public utility and economic value; however, the methods used to predict annual activity demonstrate little skill for predicting annual count of landfalling systems of any intensity bin. Therefore, while current models are optimized to predict cumulative seasonal tropical cyclone activity, they are not ideal tools for assessing the potential for sensible impacts of storms on populated areas. This research aims to bridge the utility gap in seasonal tropical cyclone forecasting by shifting the focus of seasonal modelling to the parameters that are most closely linked to creating conditions favorable for U.S. landfalls, particularly those of destructive and costly intense hurricanes. As it is clear from the initial findings of this study that overall activity has a limited influence on sensible outcomes, this project concentrates on detecting predictability and trends in cyclogenesis location and upper-level wind steering patterns. These metrics are demonstrated to have a relationship with landfall activity in the Atlantic Basin climatological record. By aggregating historic seasonally-averaged steering patterns using newly-available reanalysis model datasets, some atmospheric and oceanic precursors to an elevated risk of North American tropical cyclone landfall have been identified. Work is ongoing to quantify the variance, persistence, and predictability of such patterns over seasonal timescales, with the aim of yielding tools that could be incorporated into tropical cyclone risk mitigation strategies.
Rosenthal, Adam N.; Fraser, Lindsay; Manchanda, Ranjit; Badman, Philip; Philpott, Susan; Mozersky, Jessica; Hadwin, Richard; Cafferty, Fay H.; Benjamin, Elizabeth; Singh, Naveena; Evans, D. Gareth; Eccles, Diana M.; Skates, Steven J.; Mackay, James; Menon, Usha; Jacobs, Ian J.
2013-01-01
Purpose: To establish the performance characteristics of annual transvaginal ultrasound and serum CA125 screening for women at high risk of ovarian/fallopian tube cancer (OC/FTC) and to investigate the impact of delayed screening interval and surgical intervention. Patients and Methods: Between May 6, 2002, and January 5, 2008, 3,563 women at an estimated ≥ 10% lifetime risk of OC/FTC were recruited and screened by 37 centers in the United Kingdom. Participants were observed prospectively by centers, questionnaire, and national cancer registries. Results: Sensitivity for detection of incident OC/FTC at 1 year after last annual screen was 81.3% (95% CI, 54.3% to 96.0%) if occult cancers were classified as false negatives and 87.5% (95% CI, 61.7% to 98.5%) if they were classified as true positives. Positive and negative predictive values of incident screening were 25.5% (95% CI, 14.3 to 40.0) and 99.9% (95% CI, 99.8 to 100) respectively. Four (30.8%) of 13 incident screen-detected OC/FTCs were stage I or II. Compared with women screened in the year before diagnosis, those not screened in the year before diagnosis were more likely to have ≥ stage IIIc disease (85.7% v 26.1%; P = .009). Screening interval was delayed by a median of 88 days before detection of incident OC/FTC. Median interval from detection screen to surgical intervention was 79 days in prevalent and incident OC/FTC. Conclusion: These results in the high-risk population highlight the need for strict adherence to screening schedule. Screening more frequently than annually with prompt surgical intervention seems to offer a better chance of early-stage detection. PMID:23213100
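The headline screening metrics in this abstract follow directly from a 2x2 table of screen results versus outcomes. In the sketch below, the counts are chosen to reproduce the reported sensitivity (13/16 = 81.3%, occult cancers counted as false negatives) and positive predictive value (13/51 = 25.5%); the true-negative count is purely illustrative:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, positive predictive value, and negative predictive
    value from a 2x2 table of screen results versus cancer outcomes."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, ppv, npv

# 13 incident screen-detected cancers, 3 missed (occult) cancers,
# 38 false-positive screens; tn chosen only for illustration
sens, ppv, npv = screening_metrics(tp=13, fp=38, fn=3, tn=3509)
```

Reclassifying the occult cancers as true positives (tp=14, fn=2) is what moves the reported sensitivity from 81.3% to 87.5%.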
Effect of gravitational focusing on annual modulation in dark-matter direct-detection experiments.
Lee, Samuel K; Lisanti, Mariangela; Peter, Annika H G; Safdi, Benjamin R
2014-01-10
The scattering rate in dark-matter direct-detection experiments should modulate annually due to Earth's orbit around the Sun. The rate is typically thought to be extremized around June 1, when the relative velocity of Earth with respect to the dark-matter wind is maximal. We point out that gravitational focusing can alter this modulation phase. Unbound dark-matter particles are focused by the Sun's gravitational potential, affecting their phase-space density in the lab frame. Gravitational focusing can result in a significant overall shift in the annual-modulation phase, which is most relevant for dark matter with low scattering speeds. The induced phase shift for light O(10) GeV dark matter may also be significant, depending on the threshold energy of the experiment.
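The annual modulation discussed above is, to leading order, a cosine in time, and the gravitational-focusing effect amounts to a shift of its phase t0 away from the conventional June 1 peak. A toy sketch (the rate amplitudes and the 20-day shift are illustrative numbers, not the paper's results):

```python
import numpy as np

# Toy annual-modulation rate: R(t) = R0 + Rm * cos(2*pi*(t - t0)/T).
# Without gravitational focusing t0 is near June 1 (day ~152); focusing
# by the Sun shifts t0, most strongly for slow-moving dark matter.
def rate(t_days, R0=1.0, Rm=0.05, t0=152.0, period=365.25):
    return R0 + Rm * np.cos(2.0 * np.pi * (t_days - t0) / period)

days = np.arange(365)
unshifted_peak = days[np.argmax(rate(days))]
shifted_peak = days[np.argmax(rate(days, t0=152.0 + 20.0))]  # assumed shift
```

A misattributed phase matters because experiments fit the peak day to discriminate a dark-matter signal from backgrounds with their own seasonal cycles.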
DEMONSTRATION AND TESTING OF AN EER OPTIMIZER SYSTEM FOR DX AIR-CONDITIONERS
2017-10-07
Report-extraction fragments: abbreviation list (Performance-Based Maintenance; PCS, Power Current Sensor; PLC, Programmable Logic Controller; ppm, Parts Per Million; PSIG, Pounds per Square Inch Gauge); facilities covered (Patrick AFB, Cape Canaveral AFS, Jonathan Dickinson Military Tracking Annex, Malabar Annex, Ramey Solar Observatory); BLCC life-cycle results (annual O&M cost figures of 453 and 1,191; annual FD&D monitoring 880; energy savings $12,317; O&M net savings $493).
Estimation and detection information trade-off for x-ray system optimization
NASA Astrophysics Data System (ADS)
Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali
2016-05-01
X-ray Computed Tomography (CT) systems perform complex imaging tasks that require both detecting and estimating system parameters, such as a baggage imaging system performing threat detection while generating reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices, there is a need for a concise metric that considers both detection and estimation information and then provides the user with the collection of possible optimal outcomes. In this paper, a graphical analysis called the Estimation and Detection Information Trade-off (EDIT) is explored. EDIT produces curves that allow a decision to be made for system optimization based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation-information and detection-information space, where the user is free to pick their own method of calculating these measures. The user of EDIT can choose any desired figure of merit for detection information and estimation information; the EDIT curves will then provide the collection of optimal outcomes. The paper first looks at two methods of creating EDIT curves. The curves can be calculated over a wide variety of systems by finding the optimal system that maximizes a figure of merit, or EDIT can be found as an upper bound of the information from a collection of systems. These two methods allow the user to choose a method of calculation which best fits the constraints of their actual system.
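The second EDIT construction described above, an upper bound over a collection of candidate systems, is a Pareto frontier in the (detection information, estimation information) plane: keep exactly the systems no other system beats in both measures. A hedged sketch with made-up candidate points:

```python
def edit_frontier(points):
    """Return the Pareto-optimal (detection_info, estimation_info) pairs,
    sorted by increasing detection information."""
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))
    frontier, best_est = [], float("-inf")
    for d, e in pts:
        if e > best_est:      # not dominated by any higher-detection system
            frontier.append((d, e))
            best_est = e
    return frontier[::-1]

# illustrative (detection_info, estimation_info) pairs for five systems
systems = [(0.9, 0.2), (0.7, 0.5), (0.5, 0.4), (0.3, 0.8), (0.8, 0.1)]
curve = edit_frontier(systems)
```

The resulting curve is the menu of efficient trade-offs; design constraints and the relative costs of estimation and detection errors then pick a single point on it.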
Effects of livestock watering sites on alien and native plants in the Mojave Desert, USA
Brooks, M.L.; Matchett, J.R.; Berry, K.H.
2006-01-01
Increased livestock densities near artificial watering sites create disturbance gradients called piospheres. We studied responses of alien and native annual plants and native perennial plants within 9 piospheres in the Mojave Desert of North America. Absolute and proportional cover of alien annual plants increased with proximity to watering sites, whereas cover and species richness of native annual plants decreased. Not all alien species responded the same, as the alien forb Erodium cicutarium and the alien grass Schismus spp. increased with proximity to watering sites, and the alien annual grass Bromus madritensis ssp. rubens decreased. Perennial plant cover and species richness also declined with proximity to watering sites, as did the structural diversity of perennial plant cover classes. Significant effects were focused within 200 m of the watering sites, suggesting that control efforts for alien annual plants and restoration efforts for native plants should optimally be focused within this central part of the piosphere gradient.
Investigate plow blade optimization : executive summary.
DOT National Transportation Integrated Search
2015-08-01
Snow and ice management is the single largest expenditure in the maintenance budget for the Ohio : Department of Transportation (ODOT), with an annual cost including labor, equipment, and materials : reaching approximately $86 million. Given the curr...
Optimizing traffic counting procedures.
DOT National Transportation Integrated Search
1986-01-01
Estimates of annual average daily traffic volumes are important in the planning and operations of state highway departments. These estimates are used in the planning of new construction and improvement of existing facilities, and, in some cases, in t...
Cortical membrane potential signature of optimal states for sensory signal detection
McGinley, Matthew J.; David, Stephen V.; McCormick, David A.
2015-01-01
The neural correlates of optimal states for signal detection task performance are largely unknown. One hypothesis holds that optimal states exhibit tonically depolarized cortical neurons with enhanced spiking activity, such as occur during movement. We recorded membrane potentials of auditory cortical neurons in mice trained on a challenging tone-in-noise detection task while assessing arousal with simultaneous pupillometry and hippocampal recordings. Arousal measures accurately predicted multiple modes of membrane potential activity, including: rhythmic slow oscillations at low arousal, stable hyperpolarization at intermediate arousal, and depolarization during phasic or tonic periods of hyper-arousal. Walking always occurred during hyper-arousal. Optimal signal detection behavior and sound-evoked responses, at both sub-threshold and spiking levels, occurred at intermediate arousal when pre-decision membrane potentials were stably hyperpolarized. These results reveal a cortical physiological signature of the classically-observed inverted-U relationship between task performance and arousal, and that optimal detection exhibits enhanced sensory-evoked responses and reduced background synaptic activity. PMID:26074005
Optimal use of land surface temperature data to detect changes in tropical forest cover
NASA Astrophysics Data System (ADS)
Van Leeuwen, T. T.; Frank, A. J.; Jin, Y.; Smyth, P.; Goulden, M.; van der Werf, G.; Randerson, J. T.
2011-12-01
Rapid and accurate assessment of global forest cover change is needed to focus conservation efforts and to better understand how deforestation is contributing to the build up of atmospheric CO2. Here we examined different ways to use remotely sensed land surface temperature (LST) to detect changes in tropical forest cover. In our analysis we used monthly 0.05×0.05 degree Terra MODerate Resolution Imaging Spectroradiometer (MODIS) observations of LST and PRODES (Program for the Estimation of Deforestation in the Brazilian Amazon) estimates of forest cover change. We also compared MODIS LST observations with an independent estimate of forest cover loss derived from MODIS and Landsat observations. Our study domain of approximately 10×10 degree included most of the Brazilian state of Mato Grosso. For optimal use of LST data to detect changes in tropical forest cover in our study area, we found that using data sampled during the end of the dry season (~1-2 months after minimum monthly precipitation) had the greatest predictive skill. During this part of the year, precipitation was low, surface humidity was at a minimum, and the difference between day and night LST was the largest. We used this information to develop a simple temporal sampling algorithm appropriate for use in pan-tropical deforestation classifiers. Combined with the normalized difference vegetation index (NDVI), a logistic regression model using day-night LST did moderately well at predicting forest cover change. Annual changes in day-night LST difference decreased during 2006-2009 relative to 2001-2005 in many regions within the Amazon, providing independent confirmation of lower deforestation levels during the latter part of this decade as reported by PRODES. The use of day-night LST differences may be particularly valuable for use with satellites that do not have spectral bands that allow for the estimation of NDVI or other vegetation indices.
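The classifier described above, a logistic regression on the day-night LST difference and NDVI, can be sketched end to end on synthetic data. The feature distributions, effect signs (clearing raises the day-night LST contrast and lowers NDVI), and coefficients below are assumptions for illustration, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
dlst = rng.normal(8.0, 2.0, n)    # day-night LST difference (K), synthetic
ndvi = rng.normal(0.7, 0.1, n)    # synthetic NDVI
# synthetic "truth": clearing raises dLST and lowers NDVI
logit = 0.8 * (dlst - 8.0) - 10.0 * (ndvi - 0.7)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# standardized design matrix with intercept
X = np.column_stack([
    np.ones(n),
    (dlst - dlst.mean()) / dlst.std(),
    (ndvi - ndvi.mean()) / ndvi.std(),
])
w = np.zeros(3)
for _ in range(2000):             # batch gradient ascent on log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.5 * X.T @ (y - p) / n
pred = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5
accuracy = np.mean(pred == (y > 0.5))
```

The study's temporal-sampling result enters such a model through the data: taking dlst from the end of the dry season maximizes the day-night contrast between cleared and intact forest, which is what gives the regression its skill.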
Debaize, Lydie; Jakobczyk, Hélène; Rio, Anne-Gaëlle; Gandemer, Virginie; Troadec, Marie-Bérengère
2017-01-01
Genetic abnormalities, including chromosomal translocations, are described for many hematological malignancies. From the clinical perspective, detection of chromosomal abnormalities is relevant not only for diagnostic and treatment purposes but also for prognostic risk assessment. From the translational research perspective, the identification of fusion proteins and protein interactions has allowed crucial breakthroughs in understanding the pathogenesis of malignancies and consequently major achievements in targeted therapy. We describe the optimization of the Proximity Ligation Assay (PLA) to ascertain the presence of fusion proteins and protein interactions in non-adherent pre-B cells. PLA is an innovative method of protein-protein colocalization detection that combines the advantages of microscopy with the precision of molecular biology, enabling detection of protein proximity theoretically ranging from 0 to 40 nm. We propose an optimized PLA procedure: we overcome the issue of maintaining non-adherent hematological cells by traditional cytocentrifugation and optimized buffers, by changing incubation times, and by modifying washing steps. Further, we provide convincing negative and positive controls, and demonstrate that the optimized PLA procedure is sensitive to total protein level. The optimized procedure allows the detection of fusion proteins, their subcellular expression, and protein interactions in non-adherent cells, and can be readily applied to various non-adherent hematological cells, from cell lines to patients' cells, providing a new tool that can be adopted in a wide range of applications in the biological field.
Parallel Molecular Distributed Detection With Brownian Motion.
Rogers, Uri; Koh, Min-Sung
2016-12-01
This paper explores the in vivo distributed detection of an undesired biological agent's biomarkers by a group of biologically sized nanomachines in an aqueous medium under drift. The term distributed indicates that the system information relative to the biological agent's (BA's) presence is dispersed across the collection of nanomachines, where each nanomachine possesses limited communication, computation, and movement capabilities. Using Brownian motion with drift, a probabilistic detection and optimal data fusion framework, coined molecular distributed detection, is introduced that combines theory from both molecular communication and distributed detection. Using the optimal data fusion framework as a guide, simulation indicates that a sub-optimal fusion method exists, allowing for a significant reduction in implementation complexity while retaining BA detection accuracy.
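Optimal fusion of local binary decisions, the classical building block behind frameworks like the one above, weights each node's report by the log-likelihood ratio implied by its detection and false-alarm probabilities (Chair-Varshney style). A minimal sketch with made-up per-node probabilities, not the paper's simulation setup:

```python
import numpy as np

def fuse(decisions, pd, pf, prior=0.5):
    """Fuse local binary decisions u_i in {0,1}: each report contributes
    the log-likelihood ratio implied by its detection probability (pd)
    and false-alarm probability (pf); True means 'agent present'."""
    decisions = np.asarray(decisions)
    pd, pf = np.asarray(pd), np.asarray(pf)
    w1 = np.log(pd / pf)                  # weight when a node reports 1
    w0 = np.log((1.0 - pd) / (1.0 - pf))  # weight when a node reports 0
    llr = np.sum(np.where(decisions == 1, w1, w0))
    return llr > np.log((1.0 - prior) / prior)

present = fuse([1, 1, 0, 1],
               pd=[0.9, 0.8, 0.85, 0.7],
               pf=[0.1, 0.2, 0.15, 0.3])
```

The sub-optimal fusion the abstract mentions trades these per-node weights for something cheaper (e.g. a simple vote), which is attractive when each nanomachine can barely communicate or compute.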
NASA Astrophysics Data System (ADS)
Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.
2012-08-01
An interval method of radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The examined method is optimal by the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal-to-noise ratios. Recommendations for optimization of the given method are provided.
Learning optimal embedded cascades.
Saberian, Mohammad Javad; Vasconcelos, Nuno
2012-10-01
The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.
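The embedded-cascade idea above rests on one mechanism: each stage adds weak-learner scores to a running sum and rejects early when the sum falls below that stage's threshold, so most negatives are discarded cheaply while the detection-rate constraint keeps positives flowing through. A toy sketch (the 1-D scorers and thresholds are invented for illustration):

```python
def cascade_classify(x, stages):
    """Evaluate an embedded cascade.

    stages: list of (scorer, threshold) pairs; scorer maps x to a float.
    Returns (accepted, number of stages evaluated).
    """
    total = 0.0
    for k, (scorer, threshold) in enumerate(stages):
        total += scorer(x)
        if total < threshold:
            return False, k + 1      # early rejection after k+1 stages
    return True, len(stages)

# toy 1-D "detector": positives have large x
stages = [(lambda x: x - 1.0, -0.5), (lambda x: 0.5 * x, 0.5)]
accept, used = cascade_classify(3.0, stages)
reject, used_r = cascade_classify(0.2, stages)
```

RCBoost and ECBoost in the paper are about learning the scorers, thresholds, and stage boundaries jointly under the detection-rate constraint, rather than hand-tuning them as in this sketch.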
NASA Astrophysics Data System (ADS)
Mayfield, E. N.; Robinson, A. L.; Cohon, J. L.
2017-12-01
This work assesses trade-offs between system-wide and superemitter policy options for reducing methane emissions from compressor stations in the U.S. transmission and storage system. Leveraging recently collected national emissions and activity data sets, we developed a new process-based emissions model implemented in a Monte Carlo simulation framework to estimate emissions for each component and facility in the system. We find that approximately 83% of emissions, given the existing suite of technologies, have the potential to be abated, with only a few emission categories comprising a majority of emissions. We then formulate optimization models to determine optimal abatement strategies. Most emissions across the system (approximately 80%) are efficient to abate, resulting in net benefits ranging from $160M to $1.2B annually across the system. The private cost burden is minimal under standard and tax instruments, and if firms market the abated natural gas, private net benefits may be generated. Superemitter policies, namely, those that target the highest emitting facilities, may reduce the private cost burden and achieve high emission reductions, especially if emissions across facilities are highly skewed. However, detection across all facilities is necessary regardless of the policy option and there are nontrivial net benefits resulting from abatement of relatively low-emitting sources.
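The core of the "efficient to abate" criterion above is a comparison between the annualized cost of an abatement measure and the social benefit of the methane it avoids. A hedged sketch with invented category names and numbers (not the paper's data or its social cost of methane):

```python
# Abate an emission category when its annualized cost is below the social
# benefit of the methane avoided. All values below are illustrative.
SOCIAL_COST = 1400.0    # $/tonne CH4 avoided, assumed

categories = [
    {"name": "rod packing vents", "tonnes": 500.0, "cost": 300_000.0},
    {"name": "blowdowns",         "tonnes": 200.0, "cost": 450_000.0},
    {"name": "component leaks",   "tonnes": 800.0, "cost": 400_000.0},
]

chosen = [c for c in categories if c["cost"] < SOCIAL_COST * c["tonnes"]]
net_benefit = sum(SOCIAL_COST * c["tonnes"] - c["cost"] for c in chosen)
```

The paper's optimization layers facility-level Monte Carlo emission estimates and policy instruments on top of this basic cost-benefit screen, but the screen is what produces the net-benefit figures quoted in the abstract.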
Naturally fractured tight gas reservoir detection optimization. Quarterly report, April--June 1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-07-01
Geologic assessment of the basin during the third quarter had several major objectives. The first task was to test the validity of the gas-centered basin model for the Piceance Basin. The second objective was to define the location and variability of gas-saturated zones within the Williams Fork and Iles Formation reservoir horizons. A third objective was to prepare an updated structure map of the Piceance Basin on the top of the Iles Formation (Rollins Sandstone) to take advantage of new data provided by ten years of drilling activity throughout the basin. The first two objectives formed the core of the ARI poster session presented at the AAPG annual meeting held in Denver in mid-June. The poster session outlined the nature of the gas-centered basin geometry, delineated the gas- and water-saturated zone geometries for the Williams Fork and Iles Formations, and demonstrated the gas- and water-saturated conditions for the Williams Fork, Cozzette, and Corcoran reservoir horizons throughout the basin. Initial and cumulative production data indicate that these reservoir horizons are gas-saturated in most of the south-central and eastern basin. The attached report summarizes the data and conclusions of the poster session.
Winch, Caleb J; Sherman, Kerry A; Boyages, John
2015-01-01
This study aimed to: (1) estimate the cumulative risk of recall from breast screening where no cancer is detected (a harm) in Australia; (2) compare women screened annually versus biennially, commencing at age 40 versus 50; and (3) compare with international findings. At the no-cost metropolitan program studied, women attended biennial screening, but were offered annual screening if regarded at elevated risk for breast cancer. The cumulative risk of at least one recall was estimated using discrete-time survival analysis. Cancer detection statistics were computed. In total, 801,636 mammograms were undertaken in 231,824 women. Over 10 years, the cumulative risk of recall was 13.3% (95% CI 12.7-13.8) for those screened biennially, and 19.9% (CI 16.6-23.2) for those screened annually from age 50-51. The cumulative risk of a complex false positive involving a biopsy was 3.1% (CI 2.9-3.4) and 5.0% (CI 3.4-6.6), respectively. From age 40-41, the risk of recall was 15.1% (CI 14.3-16.0) and 22.5% (CI 17.9-27.1) for biennial and annual screening, respectively. Corresponding rates of complex false positives were 3.3% (CI 2.9-3.8) and 6.3% (CI 3.4-9.1). Over 10 mammograms, invasive cancer was detected in 3.4% (CI 3.3-3.5) and ductal carcinoma in situ in 0.7% (CI 0.6-0.7) of women, with a non-significant trend toward a larger proportion of Tis and T1N0 cancers in women screened annually (74.5%) versus biennially (70.1%), χ² = 2.77, p = 0.10. Cancer detection was comparable to international findings. Recall risk was equal to European estimates for women screening from 50, and lower for screening from 40. Recall risk was half of United States' rates across start age and rescreening interval categories. Future benefit/harm balance sheets may be useful for communicating these findings to women.
Optimization of single photon detection model based on GM-APD
NASA Astrophysics Data System (ADS)
Chen, Yu; Yang, Yi; Hao, Peiyu
2017-11-01
Hundred-kilometer, high-precision laser ranging requires a detector with very strong detection capability for extremely weak light. At present, Geiger-mode avalanche photodiodes (GM-APDs) are widely used; they offer high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance to improving photon detection efficiency, and design optimization requires a good model. In this paper, we investigate the existing Poisson-distribution model and account for important detector parameters such as dark count rate, dead time, and quantum efficiency. We improve and optimize the detection model and select appropriate parameters to achieve optimal photon detection efficiency. Simulations were carried out in Matlab and compared with actual test results, verifying the rationality of the model. The model has reference value for engineering applications.
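The core of the Poisson-distribution model described above can be illustrated with a minimal sketch: assuming primary detections are Poisson, the probability of at least one avalanche in a range gate combines a signal term (quantum efficiency × mean photon number) with the dark-count contribution. The function names and parameter values below are hypothetical, and dead-time and afterpulsing effects discussed in the abstract are omitted for brevity.

```python
import math

def detection_probability(n_signal, quantum_eff, dark_rate, gate_s):
    """Poisson model: P(at least one avalanche during the range gate).

    n_signal    -- mean number of signal photons arriving in the gate
    quantum_eff -- photon detection (quantum) efficiency, 0..1
    dark_rate   -- dark count rate in counts/second
    gate_s      -- range-gate duration in seconds
    """
    mean_primaries = quantum_eff * n_signal + dark_rate * gate_s
    return 1.0 - math.exp(-mean_primaries)

def false_alarm_probability(dark_rate, gate_s):
    """P(an avalanche with no signal photons present): dark counts only."""
    return 1.0 - math.exp(-dark_rate * gate_s)
```

For instance, with 3 signal photons per gate, 40% quantum efficiency, a 10 kcps dark rate, and a 1 µs gate, the mean number of primaries is 1.21 and the single-gate detection probability is about 0.70.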
Constant-Envelope Waveform Design for Optimal Target-Detection and Autocorrelation Performances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
2013-01-01
We propose an algorithm to directly synthesize in the time domain a constant-envelope transmit waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. This approach is in contrast to the traditional indirect methods that synthesize the transmit signal following the computation of the optimal energy spectral density. Additionally, we aim to maintain a good autocorrelation property of the designed signal. Therefore, our waveform design technique solves a bi-objective optimization problem in order to simultaneously improve the detection and autocorrelation performances, which are in general conflicting in nature. We demonstrate this trade-off between the detection and autocorrelation performances with numerical examples. Furthermore, in the absence of the autocorrelation criterion, our designed signal is shown to achieve near-optimum detection performance.
Lowry, Kathryn P.; Lee, Janie M.; Kong, Chung Y.; McMahon, Pamela M.; Gilmore, Michael E.; Cott Chubiz, Jessica E.; Pisano, Etta D.; Gatsonis, Constantine; Ryan, Paula D.; Ozanne, Elissa M.; Gazelle, G. Scott
2011-01-01
Background While breast cancer screening with mammography and MRI is recommended for BRCA mutation carriers, there is no current consensus on the optimal screening regimen. Methods We used a computer simulation model to compare six annual screening strategies [film mammography (FM), digital mammography (DM), FM and magnetic resonance imaging (MRI) or DM and MRI contemporaneously, and alternating FM/MRI or DM/MRI at six-month intervals] beginning at ages 25, 30, 35, and 40, and two strategies of annual MRI with delayed alternating DM/FM, to clinical surveillance alone. Strategies were evaluated without and with mammography-induced breast cancer risk, using two models of excess relative risk. Input parameters were obtained from the medical literature, publicly available databases, and calibration. Results Without radiation risk effects, alternating DM/MRI starting at age 25 provided the highest life expectancy (BRCA1: 72.52 years, BRCA2: 77.63 years). When radiation risk was included, a small proportion of diagnosed cancers were attributable to radiation exposure (BRCA1: <2%, BRCA2: <4%). With radiation risk, alternating DM/MRI at age 25 or annual MRI at age 25/delayed alternating DM at age 30 were most effective, depending on the radiation risk model used. Alternating DM/MRI starting at age 25 also had the highest number of false-positive screens/person (BRCA1: 4.5, BRCA2: 8.1). Conclusions Annual MRI at 25/delayed alternating DM at age 30 is likely the most effective screening strategy in BRCA mutation carriers. Screening benefits, associated risks, and personal acceptance of false-positive results should be considered in choosing the optimal screening strategy for individual women. PMID:21935911
Zhou, Hui Jun; Dan, Yock Young; Naidoo, Nasheen; Li, Shu Chuen; Yeoh, Khay Guan
2013-01-01
Gastric cancer (GC) surveillance based on oesophagogastroduodenoscopy (OGD) appears to be a promising strategy for GC prevention. By evaluating the cost-effectiveness of endoscopic surveillance in Singaporean Chinese, this study aimed to inform the implementation of such a program in a population with a low to intermediate GC risk. Using a reference strategy of no OGD intervention, we evaluated four strategies: 2-yearly OGD surveillance, annual OGD surveillance, 2-yearly OGD screening, and 2-yearly screening plus annual surveillance in Singaporean Chinese aged 50-69 years. From the perspective of the healthcare system, Markov models were built to simulate the life experience of the target population. The models projected discounted lifetime costs ($), quality-adjusted life years (QALY), and the incremental cost-effectiveness ratio (ICER), indicating the cost-effectiveness of each strategy against a Singapore willingness-to-pay of $46,200/QALY. Deterministic and probabilistic sensitivity analyses were used to identify the influential variables and their associated thresholds, and to quantify the influence of parameter uncertainties, respectively. With an ICER of $44,098/QALY, annual OGD surveillance was the optimal strategy, while 2-yearly surveillance was the most cost-effective strategy (ICER = $25,949/QALY). The screening-based strategies were either extendedly dominated or cost-ineffective. Cost-effectiveness heterogeneity of the four strategies was observed across age-gender subgroups. Eight influential parameters were identified, each with specific thresholds that define the choice of optimal strategy. Accounting for the model uncertainties, the probability that annual surveillance is the optimal strategy in Singapore was 44.5%. Endoscopic surveillance is potentially cost-effective in the prevention of GC for populations at low to intermediate risk.
Regarding program implementation, a detailed analysis of influential factors and their associated thresholds is necessary. Multiple strategies should be considered in order to recommend the right strategy for the right population.
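The decision rule applied in cost-effectiveness evaluations like the one above can be made concrete: a strategy's incremental cost-effectiveness ratio (ICER) is compared against the willingness-to-pay (WTP) threshold, or, equivalently, the strategy with the highest net monetary benefit is chosen. The sketch below is a generic illustration of these standard formulas, not the authors' Markov model, and the numbers in the usage note are hypothetical.

```python
def icer(cost, qaly, ref_cost, ref_qaly):
    """Incremental cost-effectiveness ratio versus a reference strategy."""
    d_qaly = qaly - ref_qaly
    if d_qaly <= 0:
        # no health gain: dominated if it also costs at least as much
        return float("inf") if cost >= ref_cost else float("-inf")
    return (cost - ref_cost) / d_qaly

def net_monetary_benefit(cost, qaly, wtp):
    """NMB = WTP * QALY - cost; the strategy with the largest NMB is optimal."""
    return wtp * qaly - cost
```

For example, a hypothetical strategy costing $20,000 more than the reference and gaining 0.5 QALY has an ICER of $40,000/QALY and would count as cost-effective at the $46,200/QALY threshold cited in the abstract.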
NASA Astrophysics Data System (ADS)
Chen, Duan; Chen, Qiuwen; Li, Ruonan; Blanckaert, Koen; Cai, Desuo
2014-06-01
Ecologically friendly reservoir operation procedures aim to conserve key ecosystem properties in rivers while minimizing the sacrifice of socioeconomic interests. This study focused on the Jinping cascaded reservoirs as a case study. An optimization model was developed to explore a balance between the ecological flow requirement (EFR) of a target fish species (Schizothorax chongi) in the dewatered natural channel section and annual power production. The EFR for the channel was determined by the Tennant method and a fish habitat model, respectively. The optimization model was solved using an adaptive real-coded genetic algorithm. Several operation scenarios corresponding to the ecological flow series were evaluated using the optimization model. Through comparisons, an optimal operational scheme, which combines relatively low power production loss with a preferred ecological flow regime in the dewatered channel, is proposed for the cascaded reservoirs. Under the recommended scheme, the discharge into the Dahewan river reach in the dry season ranges from 36 to 50 m³/s. This will enable at least 50% of the target fish habitats in the channel to be conserved, at a cost of only 2.5% annual power production loss. The study demonstrates that the use of EFRs is an efficient approach to optimizing reservoir operation in an ecologically friendly way. Similar modeling for other important fish species and ecosystem functions, supplemented by field validation of results, is needed to secure the long-term conservation of the affected river ecosystem.
Optimization of optical systems.
Champagne, E B
1966-11-01
The power signal-to-noise ratios for coherent and noncoherent optical detection are presented, with the expression for noncoherent detection being examined in detail. It is found that for a long-range optical system to compete with its microwave counterpart, it is necessary to optimize the optical system. The optical system may be optimized by using coherent detection, or noncoherent detection if signal shot noise is the dominant noise source. A design procedure is presented which, in principle, always allows one to obtain signal shot-noise limited operation with noncoherent detection if pulsed operation is used. The technique should make extremely long-range, high-data-rate systems of relatively simple design feasible.
USASOC Injury Prevention/Performance Optimization Musculoskeletal Screening Initiative
2016-10-29
Initiative " PRINCIPAL INVESTIGATOR: Kim Beals RECIPIENT: Dr. Christie Vu REPORT DATE: October 2016 TYPE OF REPORT: Annual PREPARED FOR: U.S...Injury Prevention/Performance Optimization Musculoskeletal Screening Initiative 5a. CONTRACT NUMBER W81XWH-15-C-0179 " 5b. GRANT NUMBER 5c... initiate work on the Phase 3 and 4 research aims b) IRB & DoD Regulatory Approvals i) University of Pittsburgh IRB approved May 23, 2016 ii) HRPO USAMRMC
How certain is desiccation in west African Sahel rainfall (1930-1990)?
NASA Astrophysics Data System (ADS)
Chappell, Adrian; Agnew, Clive T.
2008-04-01
Hypotheses for the late 1960s to 1990 period of desiccation (secular decrease in rainfall) in the west African Sahel (WAS) are typically tested by comparing empirical evidence or model predictions against "observations" of Sahelian rainfall. The outcomes of those comparisons can have considerable influence on the understanding of regional and global environmental systems. Inverse-distance squared area-weighted (IDW) estimates of WAS rainfall observations are commonly aggregated over space to provide temporal patterns without uncertainty. Spatial uncertainty of WAS rainfall was determined using the median approximation sequential indicator simulation. Every year (1930-1990) 300 equally probable realizations of annual summer rainfall were produced to honor station observations, match percentiles of the observed cumulative distributions and indicator variograms and perform adequately during cross validation. More than 49% of the IDW mean annual rainfall fell outside the 5th and 95th percentiles for annual rainfall realization means. The IDW means represented an extreme realization. Uncertainty in desiccation was determined by repeatedly (100,000) sampling the annual distribution of rainfall realization means and by applying Mann-Kendall nonparametric slope detection and significance testing. All of the negative gradients for the entire period were statistically significant. None of the negative gradients for the expected desiccation period were statistically significant. The results support the presence of a long-term decline in annual rainfall but demonstrate that short-term desiccation (1965-1990) cannot be detected. Estimates of uncertainty for precipitation and other climate variables in this or other regions, or across the globe, are essential for the rigorous detection of spatial patterns and time series trends.
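The Mann-Kendall nonparametric trend test used above for slope detection is straightforward to implement. A minimal tie-free version (sign-statistic S, normal approximation with continuity correction, two-sided p-value) is sketched below; this is purely illustrative and is not the authors' code.

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test, ignoring tie corrections.

    Returns (S, z, p): the sign statistic, its normal-approximation
    z-score with continuity correction, and a two-sided p-value.
    """
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance under H0, no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from standard normal
    return s, z, p
```

Applied to each of the 300 realization means per year, a test like this yields the gradient significance decisions reported in the abstract.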
Galievsky, Victor A; Stasheuski, Alexander S; Krylov, Sergey N
2017-10-17
The limit of detection (LOD) in analytical instruments with fluorescence detection can be improved by reducing the noise of the optical background. Efficiently reducing optical background noise in systems with a spectrally nonuniform background requires complex optimization of the emission filter, the main element of spectral filtration. Here, we introduce a filter-optimization method which utilizes an expression for the signal-to-noise ratio (SNR) as a function of (i) all noise components (dark, shot, and flicker), (ii) the emission spectrum of the analyte, (iii) the emission spectrum of the optical background, and (iv) the transmittance spectrum of the emission filter. In essence, the noise components and the emission spectra are determined experimentally and substituted into the expression. This leaves a single variable, the transmittance spectrum of the filter, which is optimized numerically by maximizing SNR. Maximizing SNR provides an accurate way of filter optimization, while the previously used approach based on maximizing the signal-to-background ratio (SBR) is an approximation that can lead to much poorer LOD, specifically in detection of fluorescently labeled biomolecules. The proposed filter-optimization method will be an indispensable tool for developing new and improving existing fluorescence-detection systems aiming at ultimately low LOD.
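The idea of leaving the transmittance spectrum as the single free variable and maximizing SNR rather than SBR can be sketched with a toy discretized model. The noise model below (additive dark variance, Poisson shot terms for signal and background, multiplicative flicker on the background) and the per-bin binary transmittance are illustrative assumptions, not the authors' exact expression; real filter optimization would use measured spectra and continuous transmittance.

```python
import itertools, math

def snr(T, S, B, dark_var, flicker_k):
    """SNR for a binary per-bin transmittance T over signal/background spectra."""
    s = sum(si * t for si, t in zip(S, T))  # transmitted signal counts
    b = sum(bi * t for bi, t in zip(B, T))  # transmitted background counts
    # noise: dark variance + shot noise of signal and background + flicker
    noise = math.sqrt(dark_var + s + b + (flicker_k * b) ** 2)
    return s / noise if noise > 0 else 0.0

def best_filter(S, B, dark_var=4.0, flicker_k=0.05):
    """Exhaustively pick the pass/block pattern that maximizes SNR."""
    n = len(S)
    best = max(itertools.product([0, 1], repeat=n),
               key=lambda T: snr(T, S, B, dark_var, flicker_k))
    return list(best), snr(best, S, B, dark_var, flicker_k)
```

With a signal peaked mid-spectrum and a background peaked at the edges, the optimizer keeps the signal-rich bins and rejects background-only bins, mimicking the SNR-driven filter choice described above.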
NASA Astrophysics Data System (ADS)
Zhang, Fuxian; Zhang, Mingjun; Wang, Shengjie; Qiang, Fang; Che, Yanjun; Wang, Jie
2017-08-01
As a pivotal section of the Silk Road in northwest China, the Hexi Corridor is a popular tourist destination. In this study, the tourism climate conditions in this region were discussed using the Physiologically Equivalent Temperature (PET) and the Climate-Tourism/Transfer-information-Scheme (CTIS) from 1980 to 2012. Overall, cold or cool stress was prevalent in the area, and the optimal travel period was from May to September. With global warming, the annual numbers of cumulative days with relatively cold conditions decreased, and the annual numbers of cumulative days with comfortable and relatively hot conditions increased. Two typical stations, Wushaoling and Dunhuang, were compared and analysed for their tourism climate information according to the frequency of PET and CTIS conditions, respectively. In addition, regional variations in the tourism climate conditions based on geographic information systems (GIS) were investigated during the optimal travel period.
22nd Annual Logistics Conference and Exhibition
2006-04-20
Prognostics & Health Management at GE. Dr. Piero P. Bonissone, Industrial AI Lab, GE Global Research. NCD Select detection model Anomaly detection results...Mode 213 x Failure mode histogram 2130014 Anomaly detection from event-log data Diagnostics/Prognostics Using...Failure Monitoring & Assessment. Tactical C4ISR Sense Respond 7 • Diagnostics, Prognostics and health management
Time vs. Money: A Quantitative Evaluation of Monitoring Frequency vs. Monitoring Duration.
McHugh, Thomas E; Kulkarni, Poonam R; Newell, Charles J
2016-09-01
The National Research Council has estimated that over 126,000 contaminated groundwater sites are unlikely to achieve low µg/L clean-up goals in the foreseeable future. At these sites, cost-effective, long-term monitoring schemes are needed in order to understand the long-term changes in contaminant concentrations. Current monitoring optimization schemes rely on site-specific evaluations to optimize groundwater monitoring frequency. However, when using linear regression to estimate the long-term zero-order or first-order contaminant attenuation rate, the effect of monitoring frequency and monitoring duration on the accuracy and confidence of the estimated attenuation rate is not site-specific. For a fixed number of monitoring events, doubling the time between monitoring events (e.g., changing from quarterly monitoring to semi-annual monitoring) will double the accuracy of the estimated attenuation rate. For a fixed monitoring frequency (e.g., semi-annual monitoring), increasing the number of monitoring events by 60% will double the accuracy of the estimated attenuation rate. Combining these two factors, doubling the time between monitoring events (e.g., quarterly monitoring to semi-annual monitoring) while decreasing the total number of monitoring events by 38% will result in no change in the accuracy of the estimated attenuation rate. However, the time required to collect this dataset will increase by 25%. Understanding that the trade-off between monitoring frequency and monitoring duration is not site-specific should simplify the process of optimizing groundwater monitoring frequency at contaminated groundwater sites. © 2016 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
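The stated trade-offs follow directly from the ordinary-least-squares slope variance, Var(β̂) = σ²/Σ(tᵢ − t̄)²; for n evenly spaced events with spacing Δt, Σ(tᵢ − t̄)² = Δt²·n(n² − 1)/12, so the standard error scales as 1/Δt and roughly as n^(-3/2). A short numeric check of the abstract's claims (illustrative, not the paper's code):

```python
import math

def slope_se(sigma, n, dt):
    """Standard error of an OLS slope fit to n evenly spaced points.

    sigma -- residual standard deviation of the measurements
    n     -- number of monitoring events
    dt    -- time between events (any consistent unit)
    """
    sxx = dt ** 2 * n * (n ** 2 - 1) / 12.0  # sum of squared time deviations
    return sigma / math.sqrt(sxx)
```

Doubling dt at fixed n exactly halves the standard error (doubles accuracy); raising n from 10 to 16 (60% more events) at fixed dt also roughly halves it; and doubling dt while cutting events from 16 to 10 (a 38% reduction) leaves the standard error essentially unchanged, matching the abstract.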
Spectral anomaly methods for aerial detection using KUT nuisance rejection
NASA Astrophysics Data System (ADS)
Detwiler, R. S.; Pfund, D. M.; Myjak, M. J.; Kulisek, J. A.; Seifert, C. E.
2015-06-01
This work discusses the application and optimization of a spectral anomaly method for the real-time detection of gamma radiation sources from an aerial helicopter platform. Aerial detection presents several key challenges over ground-based detection. For one, larger and more rapid background fluctuations are typical, due to higher speeds, a larger field of view, and geographically induced background changes. In addition, large variations in altitude or stand-off distance cause significant steps in background count rate, as well as spectral changes due to increased gamma-ray scatter at higher altitudes. This work details the adaptation and optimization of the PNNL-developed algorithm Nuisance-Rejecting Spectral Comparison Ratios for Anomaly Detection (NSCRAD), a spectral anomaly method previously developed for ground-based applications, for an aerial platform. The algorithm has been optimized for two multi-detector systems: a NaI(Tl)-detector-based system and a CsI detector array. The optimization involved adapting the spectral windows for a particular set of target sources to aerial detection and tailoring them to the specific detectors. The methodology and results for background rejection optimized for aerial gamma-ray detection using potassium, uranium, and thorium (KUT) nuisance rejection are also shown. Results indicate that realistic KUT nuisance rejection may eliminate metric rises due to background magnitude and spectral steps encountered in aerial detection from altitude changes and geographically induced steps, such as at land-water interfaces.
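The spectral-comparison-ratio idea behind NSCRAD, testing whether counts in pairs of energy windows keep the ratios predicted by a background template, can be sketched as below. This is a simplified stand-in (simple Poisson error propagation, no covariance weighting and no KUT nuisance-subspace projection), and the window counts in the usage note are hypothetical.

```python
def scr_anomaly(counts, background):
    """Mean chi-square-like statistic over pairwise spectral comparison ratios.

    counts     -- observed counts per energy window
    background -- background template counts per window (same binning)
    A spectrum with the same shape as the template scores near 0; a window
    with excess counts (a source line) inflates the statistic.
    """
    n = len(counts)
    stat, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            r = background[i] / background[j]    # template-predicted ratio
            s = counts[i] - r * counts[j]        # comparison-ratio residual
            var = counts[i] + r * r * counts[j]  # Poisson error propagation
            if var > 0:
                stat += s * s / var
                pairs += 1
    return stat / pairs if pairs else 0.0
```

A spectrum that is simply a scaled copy of the background template scores essentially zero regardless of overall count rate, which is the property that makes the method robust to the background-magnitude steps described above.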
Research on intrusion detection based on Kohonen network and support vector machine
NASA Astrophysics Data System (ADS)
Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi
2018-05-01
Support vector machines (SVMs) applied directly to network intrusion detection suffer from low detection accuracy and long detection times. Optimizing the SVM parameters can greatly improve detection accuracy, but the long optimization time makes direct application impractical for high-speed networks. A method based on Kohonen neural network feature selection is therefore proposed to reduce the parameter-optimization time. First, the weights of the KDD99 network intrusion data are computed with a Kohonen network and features are selected by weight. Then, after feature selection, a genetic algorithm (GA) and grid search are used to find appropriate parameters, and the data are classified with support vector machines. Comparative experiments show that feature selection reduces the parameter-optimization time with little influence on classification accuracy. The experiments suggest that the support vector machine can be used in network intrusion detection systems and reduces the miss rate.
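The Kohonen (self-organizing map) stage can be sketched in miniature: train a small 1-D map on the feature vectors, then rank features by how much the learned node weights spread along each dimension, on the idea that discriminative features force the map nodes apart. This is an illustrative toy, not the paper's exact weighting scheme, and the GA/grid-search SVM stage is omitted; all sizes and rates are hypothetical.

```python
import math, random

def train_som(data, n_nodes=4, epochs=50, lr0=0.5, seed=0):
    """Train a tiny 1-D Kohonen map; returns the node weight vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                 # decaying learning rate
        radius = max(0.5, n_nodes / 2 * (1 - epoch / epochs))
        for x in data:
            # best-matching unit: node closest in Euclidean distance
            bmu = min(range(n_nodes),
                      key=lambda k: sum((x[d] - nodes[k][d]) ** 2
                                        for d in range(dim)))
            for k in range(n_nodes):
                h = math.exp(-((k - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    nodes[k][d] += lr * h * (x[d] - nodes[k][d])
    return nodes

def feature_weights(nodes):
    """Rank features by the variance of node weights along each dimension."""
    dim = len(nodes[0])
    weights = []
    for d in range(dim):
        col = [node[d] for node in nodes]
        mu = sum(col) / len(col)
        weights.append(sum((v - mu) ** 2 for v in col) / len(col))
    return weights
```

On data where feature 0 separates two clusters and feature 1 is constant, the map spreads along feature 0 only, so feature 0 receives the larger weight and would be kept by the selection step.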
CNV detection method optimized for high-resolution arrayCGH by normality test.
Ahn, Jaegyoon; Yoon, Youngmi; Park, Chihyun; Park, Sanghyun
2012-04-01
High-resolution arrayCGH platforms make it possible to detect small gains and losses which previously could not be measured. However, current CNV detection tools fitted to early low-resolution data are not applicable to larger high-resolution data. When CNV detection tools are applied to high-resolution data, they suffer from high false-positive rates, which increases validation cost. Existing CNV detection tools also require optimal parameter values, and in most cases obtaining these values is a difficult task. This study developed a CNV detection algorithm that is optimized for high-resolution arrayCGH data. The tool operates up to 1500 times faster than existing tools on a high-resolution arrayCGH of whole human chromosomes, which has 42 million probes whose average length is 50 bases, while preserving false positive/negative rates. The algorithm also uses a normality test, thereby removing the need for optimal parameters. To our knowledge, this is the first formulation of the CNV detection problem that results in a near-linear empirical overall complexity for real high-resolution data. Copyright © 2012 Elsevier Ltd. All rights reserved.
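A parameter-light scheme in this spirit, fitting a normal null to the bulk of the log-ratios with robust statistics and then flagging windows whose mean is improbable under that null at a Bonferroni-corrected normal quantile, can be sketched as follows. This is an illustrative stand-in for the paper's algorithm, not its actual implementation; window size and alpha are the only knobs and neither requires data-specific tuning.

```python
import math, statistics

def detect_cnv(log_ratios, window=20, alpha=0.01):
    """Return start indices of windows whose mean log-ratio rejects the null.

    The null (no CNV) is a normal distribution centered on the robust
    median with sigma from the MAD, so CNV segments barely distort the fit.
    """
    med = statistics.median(log_ratios)
    mad = statistics.median([abs(x - med) for x in log_ratios])
    sigma = 1.4826 * mad if mad > 0 else 1e-9  # MAD -> normal sigma
    n_tests = len(log_ratios) - window + 1
    # Bonferroni-corrected two-sided normal critical value
    z_star = statistics.NormalDist().inv_cdf(1 - alpha / (2 * n_tests))
    hits = []
    for i in range(n_tests):
        w = log_ratios[i:i + window]
        z = (sum(w) / window - med) / (sigma / math.sqrt(window))
        if abs(z) > z_star:
            hits.append(i)
    return hits
```

Because the threshold comes from the normal quantile rather than a tuned cutoff, the same call works unchanged across array resolutions, which is the practical point the abstract makes about the normality test.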
2010-11-01
pected target motion. Along this line, Wettergren [5] analyzed the performance of the track-before-detect schemes for the sensor networks. Furthermore...dressed by Baumgartner and Ferrari [11] for the reorganization of the sensor field to achieve the maximum coverage. The track-before-detect-based optimal...confirming a target. In accordance with the track-before-detect paradigm [4], a moving target is detected if the kd (typically kd = 3 or 4) sensors detect
Optimal Dredge Fleet Scheduling, Phase 2
DOT National Transportation Integrated Search
2017-12-21
The U.S. Army Corps of Engineers (USACE) annually spends more than 100 million dollars on dredging hundreds of navigation projects on more than 12,000 miles of inland and intra-coastal waterways. This project expands on a recently developed constrai...
40 CFR 63.428 - Reporting and recordkeeping.
Code of Federal Regulations, 2010 CFR
2010-07-01
...; pressure or vacuum change, mm of water; time period of test; number of leaks found with instrument; and... facility as follows: (1) Annual certification testing performed under § 63.425(e) and railcar bubble leak...)); Annual Certification Test—Internal Vapor Valve (§ 63.425(e)(2)); Leak Detection Test (§ 63.425(f...
Semi-Annual Report to Congress: April 1, 1980-September 30, 1980.
ERIC Educational Resources Information Center
Office of Inspector General (ED), Washington, DC.
Established in 1980 to help improve management effectiveness, the U.S. Department of Education's Office of Inspector General (OIG) is responsible for audit, investigative, fraud detection and prevention, and some security services for the Department. In this document--its first semi-annual report--the OIG first describes its organizational…
40 CFR 63.11224 - What are my monitoring, installation, operation, and maintenance requirements?
Code of Federal Regulations, 2014 CFR
2014-07-01
... include a daily calibration drift assessment, a quarterly performance audit, and an annual zero alignment... performance audit, or an annual zero alignment audit. (7) You must calculate and record 6-minute averages from... absolute particulate matter loadings. (5) The bag leak detection system must be equipped with a device to...
40 CFR 63.11224 - What are my monitoring, installation, operation, and maintenance requirements?
Code of Federal Regulations, 2013 CFR
2013-07-01
... include a daily calibration drift assessment, a quarterly performance audit, and an annual zero alignment... performance audit, or an annual zero alignment audit. (7) You must calculate and record 6-minute averages from... absolute particulate matter loadings. (5) The bag leak detection system must be equipped with a device to...
Optimization of Contrast Detection Power with Probabilistic Behavioral Information
Cordes, Dietmar; Herzmann, Grit; Nandy, Rajesh; Curran, Tim
2012-01-01
Recent progress in the experimental design for event-related fMRI experiments made it possible to find the optimal stimulus sequence for maximum contrast detection power using a genetic algorithm. In this study, a novel algorithm is proposed for optimization of contrast detection power by including probabilistic behavioral information, based on pilot data, in the genetic algorithm. As a particular application, a recognition memory task is studied and the design matrix optimized for contrasts involving the familiarity of individual items (pictures of objects) and the recollection of qualitative information associated with the items (left/right orientation). Optimization of contrast efficiency is a complicated issue whenever subjects’ responses are not deterministic but probabilistic. Contrast efficiencies are not predictable unless behavioral responses are included in the design optimization. However, available software for design optimization does not include options for probabilistic behavioral constraints. If the anticipated behavioral responses are included in the optimization algorithm, the design is optimal for the assumed behavioral responses, and the resulting contrast efficiency is greater than what either a block design or a random design can achieve. Furthermore, improvements of contrast detection power depend strongly on the behavioral probabilities, the perceived randomness, and the contrast of interest. The present genetic algorithm can be applied to any case in which fMRI contrasts are dependent on probabilistic responses that can be estimated from pilot data. PMID:22326984
Zhou, Yong-ming; Chen, Xiu-hua; Xu, Wen; Jin, Hui-ming; Li, Chao-qun; Liang, Wei-li; Wang, Duo-chun; Yan, Mei-ying; Lou, Jing; Kan, Biao; Ran, Lu; Cui, Zhi-gang; Wang, Shu-kun; Xu, Xue-bin
2013-11-01
To evaluate the fundamental role of stage control technology (SCT) in the detection capability of Salmonella networking laboratories, appropriate Salmonella detection methods were established and optimized after key control points had been evaluated. Our training and evaluation networking laboratories participated in the World Health Organization-Global Salmonella Surveillance Project (WHO-GSS) and the China-U.S. Collaborative Program on Emerging and Re-emerging Infectious Diseases Project (GFN) in Shanghai. Staff members from the Yuxi City (Yunnan) Center for Disease Control and Prevention were trained on Salmonella isolation from diarrhea specimens. Data on annual Salmonella positive rates were collected from the provincial-level monitoring sites participating in the GSS and GFN projects from 2006 to 2012. The methodology was designed around the conventional Salmonella detection procedure, which involves enrichment, isolation, species identification, and serotyping. These methods were used simultaneously to satisfy the sensitivity requirements for non-typhoid Salmonella detection in networking laboratories. Public health laboratories in Shanghai increased from 5 in 2006 to 9 in 2011, and clinical laboratories from 8 to 22. The number of clinical isolates, including typhoid and non-typhoid Salmonella, increased from 196 in 2006 to 1442 in 2011. The positive rate of Salmonella isolated from clinical diarrhea cases in Yuxi County was 2.4% in 2012. At present, three other provincial monitoring sites are using the SBG technique as the selective enrichment broth for Salmonella isolation, with Shanghai having the most stable positive baseline. The SCT method proved to be the premise of the networking laboratory construction. On this basis, improvements in precise phenotypic identification and molecular typing capabilities could reach a level equivalent to that of the national networking laboratory.
Smith, Rebecca L.; Schukken, Ynte H.; Lu, Zhao; Mitchell, Rebecca M.; Grohn, Yrjo T.
2013-01-01
Objective To develop a mathematical model to simulate infection dynamics of Mycobacterium bovis in cattle herds in the United States and predict efficacy of the current national control strategy for tuberculosis in cattle. Design Stochastic simulation model. Sample Theoretical cattle herds in the United States. Procedures A model of within-herd M bovis transmission dynamics following introduction of 1 latently infected cow was developed. Frequency- and density-dependent transmission modes and 3 tuberculin-test based culling strategies (no test-based culling, constant (annual) testing with test-based culling, and the current strategy of slaughterhouse detection-based testing and culling) were investigated. Results were evaluated for 3 herd sizes over a 10-year period and validated via simulation of known outbreaks of M bovis infection. Results On the basis of 1,000 simulations (1000 herds each) at replacement rates typical for dairy cattle (0.33/y), median time to detection of M bovis infection in medium-sized herds (276 adult cattle) via slaughterhouse surveillance was 27 months after introduction, and 58% of these herds would spontaneously clear the infection prior to that time. Sixty-two percent of medium-sized herds without intervention and 99% of those managed with constant test-based culling were predicted to clear infection < 10 years after introduction. The model predicted observed outbreaks best for frequency-dependent transmission, and probability of clearance was most sensitive to replacement rate. Conclusions and Clinical Relevance Although modeling indicated the current national control strategy was sufficient for elimination of M bovis infection from dairy herds after detection, slaughterhouse surveillance was not sufficient to detect M bovis infection in all herds and resulted in subjectively delayed detection, compared with the constant testing method. Further research is required to economically optimize this strategy. PMID:23865885
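The within-herd dynamics described above can be sketched as a discrete-time stochastic simulation with frequency-dependent transmission, background replacement, and periodic test-and-cull. All parameter values below are illustrative defaults, not those of the published model, and slaughterhouse surveillance is reduced to a simple periodic whole-herd test with imperfect sensitivity.

```python
import math, random

def simulate_herd(n=276, beta=0.4, repl=0.33 / 12, test_interval=12,
                  sens=0.8, months=120, seed=1):
    """One stochastic run; returns (cleared, month_cleared_or_None).

    n             -- adult herd size (constant: culled cattle are replaced)
    beta          -- transmission coefficient per month
    repl          -- monthly background replacement probability (0.33/yr)
    test_interval -- months between whole-herd tuberculin tests
    sens          -- per-animal test sensitivity
    """
    rng = random.Random(seed)
    s, i = n - 1, 1  # start with one latently infected introduction
    for month in range(1, months + 1):
        # frequency-dependent transmission: hazard scales with prevalence i/n
        p_inf = 1.0 - math.exp(-beta * i / n)
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        s, i = s - new_inf, i + new_inf
        # routine replacement removes infected cattle at the background rate
        removed = sum(rng.random() < repl for _ in range(i))
        s, i = s + removed, i - removed
        # periodic whole-herd test with imperfect sensitivity; reactors culled
        if month % test_interval == 0:
            found = sum(rng.random() < sens for _ in range(i))
            s, i = s + found, i - found
        if i == 0:
            return True, month
    return False, None
```

Running many seeds and comparing clearance fractions with and without the testing step reproduces, in miniature, the kind of strategy comparison (no test-based culling versus constant testing) evaluated in the study.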
Lutz, David A; Burakowski, Elizabeth A; Murphy, Mackenzie B; Borsuk, Mark E; Niemiec, Rebecca M; Howarth, Richard B
2016-01-01
Forests are more frequently being managed to store and sequester carbon for the purposes of climate change mitigation. Generally, this practice involves long-term conservation of intact mature forests and/or reductions in the frequency and intensity of timber harvests. However, incorporating the influence of forest surface albedo often suggests that long rotation lengths may not always be optimal in mitigating climate change in forests characterized by frequent snowfall. To address this, we investigated trade-offs between three ecosystem services: carbon storage, albedo-related radiative forcing, and timber provisioning. We calculated optimal rotation length at 498 diverse Forest Inventory and Analysis forest sites in the state of New Hampshire, USA. We found that the mean optimal rotation length across all sites was 94 yr (standard deviation of sample means = 44 yr), with a large cluster of short optimal rotation lengths calculated at high elevations in the White Mountain National Forest. Using a regression tree approach, we found that timber growth, annual storage of carbon, and the difference between annual albedo in mature forest vs. a post-harvest landscape were the most important variables influencing optimal rotation. Additionally, we found that the choice of a baseline albedo value for each site significantly altered the optimal rotation lengths across all sites, lowering the mean rotation to 59 yr with a high albedo baseline and increasing it to 112 yr with a low albedo baseline. Given these results, we suggest that utilizing temperate forests in New Hampshire for climate mitigation purposes through carbon storage and the cessation of harvest is appropriate at a site-dependent level that varies significantly across the state.
Anthun, Kjartan Sarheim; Kittelsen, Sverre Andreas Campbell; Magnussen, Jon
2017-04-01
This paper analyses productivity growth in the Norwegian hospital sector over a period of 16 years, 1999-2014. This period was characterized by a large ownership reform with subsequent hospital reorganizations and mergers. We describe how technological change, technical productivity, scale efficiency and the estimated optimal size of hospitals evolved during this period. Hospital admissions were grouped into diagnosis-related groups using a fixed-grouper logic. Four composite outputs were defined and inputs were measured as operating costs. Productivity and efficiency were estimated with bootstrapped data envelopment analyses. Mean productivity increased by 24.6 percentage points from 1999 to 2014, an average annual change of 1.5%. There was substantial growth in productivity and hospital size following the ownership reform. After the reform (2003-2014), average annual growth was <0.5%. There was no evidence of technical change. Estimated optimal size was smaller than the actual size of most hospitals, yet scale efficiency was high even after hospital mergers. However, the later hospital mergers have not been followed by productivity growth similar to that around the time of the reform. This study addresses the issues of both cross-sectional and longitudinal comparability of case mix between hospitals, and thus provides a framework for future studies. The study adds to the discussion on optimal hospital size. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo
A microgrid (MG) is one measure for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). To construct an MG economically, optimizing the capacity of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed from demonstrative studies of an MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as the RE-based DG, this study discusses the influence of PVS output forecast accuracy on capacity optimization and daily operation, evaluated in terms of cost. The main results are as follows. The required capacity of the NaS battery must be increased by 10-40% relative to the ideal situation with no forecast error in PVS power output. The influence of forecast error on the electricity received from the grid is not significant on an annual basis, because positive and negative forecast errors vary from day to day. The annual total cost of facilities and operation increases by 2-7% due to the forecast error applied in this study. The impacts of forecast error on facility optimization and on operation optimization are similar, each amounting to a few percent, implying that forecast accuracy should be improved in terms of both the frequency of large forecast errors and the average error.
Rutten, C J; Steeneveld, W; Inchaisri, C; Hogeveen, H
2014-11-01
The technical performance of activity meters for automated detection of estrus in dairy farming has been studied, and such meters are already used in practice. However, information on the economic consequences of using activity meters is lacking. The current study analyzes the economic benefits of a sensor system for detection of estrus and appraises the feasibility of an investment in such a system. A stochastic dynamic simulation model was used to simulate reproductive performance of a dairy herd. The number of cow places in this herd was fixed at 130. The model started with 130 randomly drawn cows (in a Monte Carlo process) and simulated calvings and replacement of these cows in subsequent years. Default herd characteristics were a conception rate of 50%, an 8-wk dry-off period, and an average milk production level of 8,310 kg per cow per 305 d. Model inputs were derived from real farm data and expertise. For the analysis, visual detection by the farmer ("without" situation) was compared with automated detection with activity meters ("with" situation). For visual estrus detection, an estrus detection rate of 50% and a specificity of 100% were assumed. For automated estrus detection, an estrus detection rate of 80% and a specificity of 95% were assumed. The results of the cow simulation model were used to estimate the difference between the annual net cash flows in the "with" and "without" situations (marginal financial effect) and the internal rate of return (IRR) as profitability indicators. The use of activity meters led to improved estrus detection and, therefore, to a decrease in the average calving interval and subsequent increase in annual milk production. For visual estrus detection, the average calving interval was 419 d and average annual milk production was 1,032,278 kg. For activity meters, the average calving interval was 403 d and the average annual milk production was 1,043,398 kg. 
It was estimated that the initial investment in activity meters would cost €17,728 for a herd of 130 cows, with an additional cost of €90 per year for the replacement of malfunctioning activity meters. Changes in annual net cash flows arising from using an activity meter included extra revenues from increased milk production and number of calves sold, increased costs from more inseminations, calvings, and feed consumption, and reduced costs from fewer culled cows and less labor for estrus detection. These changes in cash flows were caused mainly by changes in the technical results of the simulated dairy herds, which arose from differences in the estrus detection rate and specificity between the "with" and "without" situations. The average marginal financial effect in the "with" and "without" situations was €2,827 for the baseline scenario, with an average IRR of 11%. The IRR is a measure of the return on invested capital. Investment in activity meters was generally profitable. The most influential assumptions on the profitability of this investment were the assumed culling rules and the increase in sensitivity of estrus detection between the "without" and the "with" situation. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
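The reported IRR can be checked with a basic net-present-value bisection. The 10-year horizon below is an assumption (the abstract does not state the meters' lifespan), and the €2,827 marginal financial effect is treated as the net annual cash flow, so the computed rate only roughly approximates the reported 11%.

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection: the rate at which NPV == 0.
    Assumes exactly one sign change in the cashflow series."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Assumed horizon of 10 years; -17,728 investment, +2,827 net per year.
flows = [-17728.0] + [2827.0] * 10
rate = irr(flows)
```

Under these assumptions the rate lands near 10%, consistent in magnitude with the reported average IRR of 11%.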
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim to fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10^5 in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
Dynamic Grover search: applications in recommendation systems and optimization problems
NASA Astrophysics Data System (ADS)
Chakrabarty, Indranil; Khan, Shahzor; Singh, Vanshdeep
2017-06-01
In recent years, the Grover search algorithm (Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996), by using quantum parallelism, has revolutionized the solution of a huge class of NP problems in comparison to classical systems. In this work, we explore the idea of extending the Grover search algorithm to approximate algorithms. We analyze the applicability of Grover search to process an unstructured database with a dynamic selection function, in contrast to the static selection function used in the original work (Grover in Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996). We show that this alteration allows us to extend the application of Grover search to the field of randomized search algorithms. Further, we use the dynamic Grover search algorithm to define the goals for a recommendation system, based on which we propose a recommendation algorithm that uses a binomial similarity distribution space, giving us a quadratic speedup over traditional classical unstructured recommendation systems. Finally, we show how dynamic Grover search can be used to tackle a wide range of optimization problems, improving complexity over existing optimization algorithms.
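Grover amplitude amplification with a static oracle can be simulated classically for small instances. The sketch below is a plain statevector simulation for one marked item; it illustrates the quadratic speedup the paper builds on: about (pi/4)*sqrt(N) oracle calls push the measurement probability of the marked item close to 1.

```python
import math

def grover_success_prob(n_items, marked, iterations):
    """Classically simulate Grover amplitude amplification over an
    unstructured list of n_items with one marked index.  Returns the
    probability of measuring the marked item after the given number of
    Grover iterations (oracle phase flip + inversion about the mean)."""
    amp = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amp[marked] = -amp[marked]              # oracle: phase flip
        mean = sum(amp) / n_items
        amp = [2.0 * mean - a for a in amp]     # inversion about the mean
    return amp[marked] ** 2

n = 1024
k = int(math.floor(math.pi / 4.0 * math.sqrt(n)))   # about 25 iterations
p = grover_success_prob(n, marked=7, iterations=k)
```

A classical unstructured search needs on the order of N queries for the same task; here roughly sqrt(N) Grover iterations suffice.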
Adewumi, Aderemi Oluyinka; Chetty, Sivashan
2017-01-01
The Annual Crop Planning (ACP) problem is a recently introduced problem in the literature. This study further expounds on it by presenting a new mathematical formulation based on market economic factors. To determine solutions, a new local search metaheuristic algorithm, called the enhanced Best Performance Algorithm (eBPA), is investigated. The eBPA's results are compared against two well-known local search metaheuristics, Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems.
Optimization of municipal pressure pumping station layout and sewage pipe network design
NASA Astrophysics Data System (ADS)
Tian, Jiandong; Cheng, Jilin; Gong, Yi
2018-03-01
Accelerated urbanization places extraordinary demands on sewer networks; thus optimization research to improve the design of these systems has practical significance. In this article, a subsystem nonlinear programming model is developed to optimize pumping station layout and sewage pipe network design. The subsystem model is expanded into a large-scale complex nonlinear programming system model to find the minimum total annual cost of the pumping station and network of all pipe segments. A comparative analysis is conducted using the sewage network in Taizhou City, China, as an example. The proposed method demonstrated that significant cost savings could have been realized if the studied system had been optimized using the techniques described in this article. Therefore, the method has practical value for optimizing urban sewage projects and provides a reference for theoretical research on optimization of urban drainage pumping station layouts.
NASA Astrophysics Data System (ADS)
Xu, Min; Kang, Shichang; Wu, Hao; Yuan, Xu
2018-05-01
Because of their abundant glaciers and snow, the Tianshan Mountains are highly vulnerable to changes in climate. Based on meteorological station records during 1960-2016, we detected variations of air temperature and precipitation using a non-parametric method in the different sub-regions and elevation bands of the Tianshan Mountains. Abrupt changes in climate were investigated with the Mann-Kendall abrupt change test in the sub-regions, and periodicity was examined by wavelet analysis employing a chi-square test to detect significant time sections. The results show that the Tianshan Mountains experienced overall rapid warming and wetting during the study period, with an average warming rate of 0.32 °C/10a and a wetting rate of 5.82 mm/10a. Annual and seasonal variations of temperature differed in magnitude among regions. Annual precipitation showed a non-significant upward trend at 20 stations and a significant upward trend at 6 stations. Temperatures in the East Tianshan increased most rapidly, at 0.41 °C/10a. The increase in annual precipitation was largest in the Boertala Valley (8.07 mm/10a) and smallest in the East Tianshan (2.64 mm/10a). The strongest and weakest warming occurred below 500 m (0.42 °C/10a) and at elevations of 1000-1500 m (0.23 °C/10a), respectively. The increase in annual precipitation was largest at elevations of 1500-2000 m (9.22 mm/10a) and smallest below 500 m (3.45 mm/10a). Abrupt changes in annual air temperature and precipitation occurred in 1995 and 1990, respectively, influenced by large-scale atmospheric circulation. The significant periods of air temperature were 2.4-4.1 years, and those of annual precipitation were 2.5-7.4 years. Elevation dependency of the temperature trend magnitude was not evident in the Tianshan Mountains, whereas the wetting trend in annual precipitation was amplified with elevation in summer and autumn.
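The non-parametric trend detection used in studies like this is commonly the Mann-Kendall test paired with Sen's slope estimator (which, scaled by 10, gives rates in units per decade such as °C/10a). A minimal sketch, ignoring tie corrections and serial correlation:

```python
import math
from itertools import combinations

def mann_kendall(x):
    """Mann-Kendall trend test with Sen's slope (no tie correction, for
    illustration).  Returns (S statistic, normal-approximation Z score,
    Sen's slope per time step); |Z| > 1.96 suggests a trend at ~95%."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i, j in combinations(range(n), 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var_s)
    slopes = sorted((x[j] - x[i]) / (j - i)
                    for i, j in combinations(range(n), 2))
    sen = slopes[len(slopes) // 2]   # median of all pairwise slopes
    return s, z, sen
```

Applied to an annual temperature series, `sen * 10` would be the decadal rate analogous to the 0.32 °C/10a figure above.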
Regional Patterns and Spatial Clusters of Nonstationarities in Annual Peak Instantaneous Streamflow
NASA Astrophysics Data System (ADS)
White, K. D.; Baker, B.; Mueller, C.; Villarini, G.; Foley, P.; Friedman, D.
2017-12-01
Information about hydrologic changes resulting from changes in climate, land use, and land cover is a necessity for the planning and design of water resources infrastructure. The United States Army Corps of Engineers (USACE) evaluated and selected 12 methods to detect abrupt and slowly varying nonstationarities in records of annual maximum peak flows. They deployed a publicly available tool [1] in 2016 and a guidance document in 2017 to support identification of nonstationarities in a reproducible manner using a robust statistical framework. This statistical framework has now been applied to streamflow records across the continental United States to explore the presence of regional patterns and spatial clusters of nonstationarities in peak annual flow. Incorporating this geographic dimension into the detection of nonstationarities provides valuable insight for the process of attributing these significant changes. This poster summarizes the methods used and provides the results of the regional analysis. [1] Available at http://www.corpsclimate.us/ptcih.cfm
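As one concrete example of an abrupt-change (change-point) test of the kind such frameworks include (Pettitt's test; not necessarily one of the exact 12 USACE methods), a minimal sketch:

```python
import math

def pettitt(x):
    """Pettitt's non-parametric change-point test for a single abrupt
    shift in a series.  Returns (index of the most likely change point,
    approximate two-sided p-value).  O(n^2), fine for annual records."""
    n = len(x)
    best_t, best_k = 0, 0
    for t in range(1, n):
        # U_t = sum of sign(x_i - x_j) over i < t <= j
        u = sum((xi > xj) - (xi < xj)
                for xi in x[:t] for xj in x[t:])
        if abs(u) > best_k:
            best_t, best_k = t, abs(u)
    p = 2.0 * math.exp(-6.0 * best_k ** 2 / (n ** 3 + n ** 2))
    return best_t, min(1.0, p)
```

On an annual peak-flow record, a small p-value flags the year at which the distribution most plausibly shifted.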
COST MODELS FOR WATER SUPPLY DISTRIBUTION SYSTEMS
A major challenge for society in the twenty-first century will be the replacement, design, and optimal management of urban infrastructure. It is estimated that the current worldwide demand for infrastructure investment is approximately three trillion dollars annually. A Drinking Wate...
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of large-scale reservoir systems is time-consuming due to their intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding the searching process by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm II (NSGAII), WNSGAII and WMO-ASMO is conducted in the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median of the ecological index, improved by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance, and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method proves more efficient and can provide a better Pareto frontier.
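The weighted crowding distance mentioned above modifies the standard NSGA-II crowding distance; only the standard unweighted form is sketched below (on our reading, the weighted variant would scale each objective's contribution by a weight):

```python
def crowding_distance(front):
    """Standard NSGA-II crowding distance for one non-dominated front,
    given as a list of objective vectors.  Boundary solutions get
    infinite distance; interior solutions accumulate the normalized
    span of their neighbors along each objective."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue
        for r in range(1, n - 1):
            i = order[r]
            dist[i] += (front[order[r + 1]][k]
                        - front[order[r - 1]][k]) / (hi - lo)
    return dist
```

Selection then prefers solutions with larger crowding distance within the same rank, which spreads the Pareto front.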
Wang, Juan; Zhang, Chunyu; Gadow, Klaus V; Cheng, Yanxia; Zhao, Xiuhai
2015-06-01
Trade-offs in a dioecious plant. The trade-off between reproduction, vegetative growth and maintenance is a major issue in the life history of an organism, reflecting the process by which natural selection maximizes the number of living offspring produced. Dioecious species afford an excellent opportunity for detecting such possible trade-offs in resource allocation. In this study, we selected the dioecious shrub Acer barbinerve to examine possible trade-offs between reproduction and vegetative growth in both genders at different modular levels during three successive years. Reproductive and vegetative biomass values were assessed during successive years to evaluate their intra-annual and inter-annual trade-offs. These trade-offs were examined at the shoot, branch and shrub modular levels in Acer barbinerve shrubs. An intra-annual trade-off was detected at the shoot level for both genders in 2011 and 2012. Both males and females showed a negative correlation between reproduction and vegetative growth, but this was more prominent in males. For the females of the species, inter-annual trade-offs were only found at the branch and shrub levels. Slightly negative correlations in females were detected between reproduction in 2012 and reproduction in the two previous years. The gender ratio was significantly male biased during the three successive years of our investigation. Females had higher mortality rates in the larger diameter classes, both in 2011 and 2012. This study revealed a clear trade-off between reproduction and vegetative growth in Acer barbinerve, but results varied between males and females. The degree of autonomy of the different modular levels may affect the ability to detect such trade-offs.
The 2014 China meeting of the International Society for Vascular Surgery.
Dardik, Alan; Ouriel, Kenneth; Wang, JinSong; Liapis, Christos
2014-10-01
The 2014 meeting of the International Society for Vascular Surgery (ISVS) was held in Guangzhou, China, in conjunction with the fifth annual Wang Zhong-Gao's Vascular Forum, the eighth annual China Southern Endovascular Congress, and the third annual Straits Vascular Forum. Keynote addresses were given by Professors Christos Liapis, Wang Zhong-Gao, and Wang Shen-Ming. President Liapis presented the first ISVS Lifetime Achievement Award to Professor Wang Zhong-Gao for his multi-decade accomplishments establishing Vascular Surgery as a specialty in China. Faculty presentations were made in plenary sessions that focused on diseases relevant to the patterns of vascular disease prevalent in China. Thirty-one abstracts were presented by vascular surgeons from around the globe, and the top 10 presentations were recognized. Thirteen countries were represented in the meeting. The 2014 ISVS meeting was a success. Partnership of this meeting with host Chinese Vascular Surgery societies was of mutual benefit, bringing vascular surgeons of international reputation to the local area for academic and intellectual exchange and formation of collaborations; integration of the meetings allows easier logistics to facilitate meeting organization and optimization of time for both faculty and attendees. This integrated model may serve as an optimal model for future meetings. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Bichet, Coraline; Allainé, Dominique; Sauzet, Sandrine; Cohas, Aurélie
2016-12-28
Although behavioural responses to climatic variability have been identified as a poorly understood aspect of the effects of climate change, they are seldom explored. Climatic variability is likely to cause large inter-annual variation in the frequency of extra-pair litters produced, a widespread alternative mating tactic that helps prevent, correct or minimize the negative consequences of sub-optimal mate choice. In this study, we investigated how climatic variability affects the inter-annual variation in the proportion of extra-pair litters in a wild population of Alpine marmots. During 22 years of monitoring, the annual proportion of extra-pair litters increased directly with the onset of earlier springs and indirectly with increased snow in winters. Snowier winters resulted in a higher proportion of families with sexually mature male subordinates and thus created a social context within which extra-pair paternity was favoured. Earlier spring snowmelt could create this pattern by relaxing energetic, movement and time constraints. Further, deeper snow in winter could also contribute by increasing litter size and juvenile survival. Optimal mate choice is particularly relevant to generating adaptive genetic diversity. Understanding the influence of environmental conditions and the capacity of individuals to cope with them is crucial within the context of rapid climate change. © 2016 The Author(s).
Human influence on sub-regional surface air temperature change over India.
Dileepkumar, R; AchutaRao, Krishna; Arulalan, T
2018-06-12
Human activities have been implicated in the observed increase in Global Mean Surface Temperature. Over regional scales where climatic changes determine societal impacts and drive adaptation related decisions, detection and attribution (D&A) of climate change can be challenging due to the greater contribution of internal variability, greater uncertainty in regionally important forcings, greater errors in climate models, and larger observational uncertainty in many regions of the world. We examine the causes of annual and seasonal surface air temperature (TAS) changes over sub-regions (based on a demarcation of homogeneous temperature zones) of India using two observational datasets together with results from a multimodel archive of forced and unforced simulations. Our D&A analysis examines sensitivity of the results to a variety of optimal fingerprint methods and temporal-averaging choices. We can robustly attribute TAS changes over India between 1956-2005 to anthropogenic forcing mostly by greenhouse gases and partially offset by other anthropogenic forcings including aerosols and land use land cover change.
Prediabetes: A high-risk state for developing diabetes
Tabák, Adam G.; Herder, Christian; Rathmann, Wolfgang; Brunner, Eric J.; Kivimäki, Mika
2013-01-01
Summary Prediabetes (or “intermediate hyperglycaemia”), based on glycaemic parameters above normal but below diabetes thresholds, is a high-risk state for diabetes with an annualized conversion rate of 5%-10%, with a similar proportion converting back to normoglycaemia. The prevalence of prediabetes is increasing worldwide and it is projected that >470 million people will have prediabetes in 2030. Prediabetes is associated with the simultaneous presence of insulin resistance and β-cell dysfunction, abnormalities that start before glucose changes are detectable. Observational evidence shows associations of prediabetes with early forms of nephropathy, chronic kidney disease, small fibre neuropathy, diabetic retinopathy, and increased risk of macrovascular disease. Multifactorial risk scores could optimize the estimation of diabetes risk using non-invasive parameters and blood-based metabolic traits in addition to glycaemic values. For prediabetic individuals, lifestyle modification is the cornerstone of diabetes prevention, with evidence of a 40%-70% relative risk reduction. Accumulating data also suggest potential benefits from pharmacotherapy. PMID:22683128
NASA Technical Reports Server (NTRS)
Worrall, Diana M. (Editor); Biemesderfer, Chris (Editor); Barnes, Jeannette (Editor)
1992-01-01
Consideration is given to a definition of a distribution format for X-ray data, the Einstein on-line system, the NASA/IPAC extragalactic database, Cosmic Background Explorer (COBE) astronomical databases, the ADAM software environment, the Groningen Image Processing System, the search for a common data model for astronomical data analysis systems, deconvolution for real and synthetic apertures, pitfalls in image reconstruction, a direct method for spectral and image restoration, and a description of a Poisson imagery super-resolution algorithm. Also discussed are multivariate statistics on HI and IRAS images, faint object classification using neural networks, a matched filter for improving the SNR of radio maps, automated aperture photometry of CCD images, an interactive graphics interpreter, the ROSAT extreme ultraviolet sky survey, a quantitative study of optimal extraction, an automated analysis of spectra, applications of synthetic photometry, an algorithm for extra-solar planet system detection, and data reduction facilities for the William Herschel telescope.
Optimal Detection of Global Warming using Temperature Profiles
NASA Technical Reports Server (NTRS)
Leroy, Stephen S.
1997-01-01
Optimal fingerprinting is applied to estimate the amount of time it would take to detect warming by increased concentrations of carbon dioxide in monthly averages of temperature profiles over the Indian Ocean.
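In its simplest form, with a single fingerprint pattern and independent noise, optimal fingerprinting reduces to weighted least squares for a scaling factor and its uncertainty; detection is declared when the scaling factor is significantly greater than zero. The numbers below are invented toy data, not actual temperature profiles:

```python
import math

def fingerprint_scaling(y, x, var):
    """One-pattern optimal fingerprinting with independent noise:
    weighted least squares estimate of beta in y = beta * x + noise,
    noise_i ~ N(0, var_i), together with its standard error.
    Detection at roughly 95% confidence when beta / se exceeds ~2."""
    sxx = sum(xi * xi / v for xi, v in zip(x, var))
    sxy = sum(xi * yi / v for xi, yi, v in zip(x, y, var))
    beta = sxy / sxx
    se = 1.0 / math.sqrt(sxx)
    return beta, se

# Toy example: an observed profile equal to the model fingerprint plus
# small noise (all values hypothetical).
x = [0.1, 0.2, 0.3, 0.5, 0.8]     # model-predicted warming pattern
y = [0.12, 0.18, 0.33, 0.49, 0.83]
var = [0.01] * 5
beta, se = fingerprint_scaling(y, x, var)
```

The "time to detect" question in the abstract amounts to asking how long a record is needed before beta / se reliably exceeds the detection threshold.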
Model-Based Design of Tree WSNs for Decentralized Detection.
Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam
2015-08-20
The classical decentralized detection problem of finding the optimal decision rules at the sensors and the fusion center, as well as variants that introduce physical channel impairments, has been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches.
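For the textbook parallel Gaussian shift-in-mean case (a simplification of the paper's tree topology), the optimal local rules are likelihood-ratio thresholds and the fusion center can apply a k-out-of-n counting rule. A Monte Carlo sketch with illustrative parameters:

```python
import random

def run_decentralized(n_sensors=5, k=3, mu=1.0, trials=20000, seed=0):
    """Monte Carlo sketch of parallel decentralized detection: each
    sensor applies a local likelihood-ratio test (for a Gaussian
    shift-in-mean with unit variance this reduces to thresholding the
    sample at mu/2) and sends one bit; the fusion center declares H1
    when at least k of n bits are 1.  Returns (false-alarm rate,
    detection rate) estimated over the given number of trials."""
    rng = random.Random(seed)
    thr = mu / 2.0                  # local LRT threshold for this model

    def trial(h1):
        votes = sum(rng.gauss(mu if h1 else 0.0, 1.0) > thr
                    for _ in range(n_sensors))
        return votes >= k

    pf = sum(trial(False) for _ in range(trials)) / trials
    pd = sum(trial(True) for _ in range(trials)) / trials
    return pf, pd
```

Sweeping k from 1 (OR rule) to n (AND rule) traces out the fusion-level trade-off between false alarms and detections.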
ERIC Educational Resources Information Center
Whalen, D. Joel
2015-01-01
This article, the second of a two-part series, features 11 teaching innovations presented at the 2014 Association for Business Communication annual conference. These 11 assignments included leadership and other-focused communication--detecting communication style, adaptive communication, personality type, delivering feedback, problem solving, and…
Xiong, Lihua; Jiang, Cong; Du, Tao
2014-01-01
Time-varying moments models based on Pearson Type III and normal distributions respectively are built under the generalized additive model in location, scale and shape (GAMLSS) framework to analyze the nonstationarity of the annual runoff series of the Weihe River, the largest tributary of the Yellow River. The detection of nonstationarities in hydrological time series (annual runoff, precipitation and temperature) from 1960 to 2009 is carried out using a GAMLSS model, and then the covariate analysis for the annual runoff series is implemented with GAMLSS. Finally, the attribution of each covariate to the nonstationarity of annual runoff is analyzed quantitatively. The results demonstrate that (1) obvious change-points exist in all three hydrological series, (2) precipitation, temperature and irrigated area are all significant covariates of the annual runoff series, and (3) temperature increase plays the main role in leading to the reduction of the annual runoff series in the study basin, followed by the decrease of precipitation and the increase of irrigated area.
Adedeji, A. J.; Abdu, P. A.; Luka, P. D.; Owoade, A. A.; Joannis, T. M.
2017-01-01
Aim: This study was designed to optimize and apply the use of loop-mediated isothermal amplification (LAMP) as an alternative to conventional polymerase chain reaction (PCR) for the detection of herpesvirus of turkeys (HVT) (FC 126 strain) in vaccinated and non-vaccinated poultry in Nigeria. Materials and Methods: HVT positive control (vaccine) was used for optimization of LAMP using six primers that target the HVT070 gene sequence of the virus. These primers can differentiate HVT, a Marek’s disease virus (MDV) serotype 3 from MDV serotypes 1 and 2. Samples were collected from clinical cases of Marek’s disease (MD) in chickens, processed and subjected to LAMP and PCR. Results: LAMP assay for HVT was optimized. HVT was detected in 60% (3/5) and 100% (5/5) of the samples analyzed by PCR and LAMP, respectively. HVT was detected in the feathers, liver, skin, and spleen with average DNA purity of 3.05-4.52 μg DNA/mg (A260/A280) using LAMP. Conventional PCR detected HVT in two vaccinated and one unvaccinated chicken samples, while LAMP detected HVT in two vaccinated and three unvaccinated corresponding chicken samples. However, LAMP was a faster and simpler technique to carry out than PCR. Conclusion: LAMP assay for the detection of HVT was optimized. LAMP and PCR detected HVT in clinical samples collected. LAMP assay can be a very good alternative to PCR for detection of HVT and other viruses. This is the first report of the use of LAMP for the detection of viruses of veterinary importance in Nigeria. LAMP should be optimized as a diagnostic and research tool for investigation of poultry diseases such as MD in Nigeria. PMID:29263603
High Grazing Angle Sea-Clutter Literature Review
2013-03-01
Excerpt from report DSTO-GD-0736 (contents fragments): Section 7.1, Parametric modelling; Section 7.3, Polarimetry. Early studies by Stacy et al. [45, 46] examined polarimetry for target detection from high grazing angles; optimal and sub-optimal detection relationships were found to be intrinsically related to their Gaussian detection counterparts.
Optimal detection and control strategies for invasive species management
Shefali V. Mehta; Robert G. Haight; Frances R. Homans; Stephen Polasky; Robert C. Venette
2007-01-01
The increasing economic and environmental losses caused by non-native invasive species amplify the value of identifying and implementing optimal management options to prevent, detect, and control invasive species. Previous literature has focused largely on preventing introductions of invasive species and post-detection control activities; few have addressed the role of...
Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
This paper studies an attacker against a cyber-physical system (CPS) whose goal is to move the state of the CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker’s probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS’s detection statistic. We formulate a linear quadratic cost function that captures the attacker’s control goal and establish constraints on the induced bias that reflect the attacker’s detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker’s state estimate. In the case that the attacker’s bias is upper bounded by a positive constant, we provide two algorithms, an optimal algorithm and a sub-optimal, less computationally intensive algorithm, to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely controlled helicopter under attack.
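The linear-feedback structure of the optimal strategy above rests on standard linear-quadratic machinery. As a hedged toy illustration (a hypothetical 2-state plant standing in for the attacker's model, not the paper's detection-constrained formulation), the LQ feedback gain can be computed by iterating the discrete-time Riccati recursion:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    # Iterate the discrete-time Riccati recursion to (approximate) convergence,
    # then return the optimal linear feedback gain K (u = -K x).
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy unstable single-input system (illustrative values only)
A = np.array([[1.1, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = dlqr_gain(A, B, Q, R)
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho < 1.0)  # the LQ feedback stabilizes the closed loop
```

In the paper's setting the feedback acts on the attacker's state estimate and the gain is shaped by the detection-bias constraint; this sketch only shows the unconstrained LQ core.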
Detecting sulphate aerosol geoengineering with different methods
Lo, Y. T. Eunice; Charlton-Perez, Andrew J.; Lott, Fraser C.; ...
2016-12-15
Sulphate aerosol injection has been widely discussed as a possible way to engineer future climate. Monitoring it would require detecting its effects amidst internal variability and in the presence of other external forcings. Here, we investigate how the use of different detection methods and filtering techniques affects the detectability of sulphate aerosol geoengineering in annual-mean global-mean near-surface air temperature. This is done by assuming a future scenario that injects 5 Tg yr⁻¹ of sulphur dioxide into the stratosphere and cross-comparing simulations from 5 climate models. 64% of the studied comparisons would require 25 years or more for detection when no filter and the multi-variate method that has been extensively used for attributing climate change are used, while 66% of the same comparisons would require fewer than 10 years for detection using a trend-based filter. This highlights the high sensitivity of sulphate aerosol geoengineering detectability to the choice of filter. With the same trend-based filter but a non-stationary method, 80% of the comparisons would require fewer than 10 years for detection. This does not imply sulphate aerosol geoengineering should be deployed, but suggests that both detection methods could be used for monitoring geoengineering in global, annual mean temperature should it be needed.
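The dependence of detection time on signal strength and method can be illustrated with a toy trend-based detector applied to synthetic annual-mean temperature series. The signal sizes and variability below are hypothetical and are not taken from the study:

```python
import numpy as np

def years_to_detect(signal_per_year, noise_sd, seed, n_sigma=2.0, max_years=40):
    # First record length at which the OLS trend exceeds n_sigma standard
    # errors: a minimal trend-based detection filter.
    rng = np.random.default_rng(seed)
    series = signal_per_year * np.arange(max_years) + rng.normal(0, noise_sd, max_years)
    for n in range(5, max_years + 1):
        t = np.arange(n)
        slope, intercept = np.polyfit(t, series[:n], 1)
        resid = series[:n] - (slope * t + intercept)
        se = np.sqrt(resid.var(ddof=2) / np.sum((t - t.mean()) ** 2))
        if abs(slope) > n_sigma * se:
            return n
    return max_years

# Average detection time over many noise realizations: a strong (hypothetical)
# -0.10 K/yr cooling signal is detected sooner than a weak -0.02 K/yr one.
fast = np.mean([years_to_detect(-0.10, 0.15, s) for s in range(30)])
slow = np.mean([years_to_detect(-0.02, 0.15, s) for s in range(30)])
print(round(fast, 1), round(slow, 1))
```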
Trajectory optimization for the national aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1993-01-01
During the past six months the research objectives outlined in the last semi-annual report were accomplished. Specifically, these are: three-dimensional (3-D) fuel-optimal ascent trajectory of the aerospace plane and the effects of thrust vectoring control (TVC) on the fuel consumption and trajectory shaping were investigated; the maximum abort landing area (footprint) was studied; preliminary assessment of simultaneous design of the ascent trajectory and the vehicle configuration for the aerospace plane was also conducted. The work accomplished in the reporting period is summarized.
Optimum dry-cooling sub-systems for a solar air conditioner
NASA Technical Reports Server (NTRS)
Chen, J. L. S.; Namkoong, D.
1978-01-01
Dry-cooling sub-systems for residential solar-powered Rankine compression air conditioners were economically optimized and compared with the cost of a wet cooling tower. Results, in terms of the yearly incremental busbar cost due to the use of dry-cooling, are presented for Philadelphia and Miami. With input data for local weather, energy rates, capital costs, and condenser surface designs and performance, the computerized optimization program yields design specifications for the sub-system with the lowest annual incremental cost.
Application of Hyperspectral Imaging to Detect Sclerotinia sclerotiorum on Oilseed Rape Stems
Kong, Wenwen; Zhang, Chu; Huang, Weihao
2018-01-01
Hyperspectral imaging covering the spectral range of 384–1034 nm combined with chemometric methods was used to detect Sclerotinia sclerotiorum (SS) on oilseed rape stems by two sample sets (60 healthy and 60 infected stems for each set). Second derivative spectra and PCA loadings were used to select the optimal wavelengths. Discriminant models were built and compared to detect SS on oilseed rape stems, including partial least squares-discriminant analysis, radial basis function neural network, support vector machine and extreme learning machine. The discriminant models using full spectra and optimal wavelengths showed good performance with classification accuracies of over 80% for the calibration and prediction set. Comparing all developed models, the optimal classification accuracies of the calibration and prediction set were over 90%. The similarity of selected optimal wavelengths also indicated the feasibility of using hyperspectral imaging to detect SS on oilseed rape stems. The results indicated that hyperspectral imaging could be used as a fast, non-destructive and reliable technique to detect plant diseases on stems. PMID:29300315
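Selecting optimal wavelengths from PCA loadings, as the abstract above describes, can be sketched on synthetic spectra. The informative bands (indices 10 and 35) and the spectral shapes are hypothetical; the point is only that the leading-component loadings highlight the discriminative wavelengths:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 50
wavelengths = np.linspace(384, 1034, n_bands)

# Synthetic spectra: infected samples differ from healthy ones mainly in two
# informative bands (a hypothetical choice for illustration).
healthy = rng.normal(0.5, 0.02, (n_samples // 2, n_bands))
infected = rng.normal(0.5, 0.02, (n_samples // 2, n_bands))
infected[:, 10] += 0.15
infected[:, 35] -= 0.15
X = np.vstack([healthy, infected])

# PCA via SVD of the mean-centred data; loadings of the leading component
# indicate which wavelengths carry the most variance.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = np.abs(Vt[0])
optimal_bands = np.argsort(loadings)[-2:]
print(sorted(float(wavelengths[b]) for b in optimal_bands))
```

A classifier trained on just these bands would then be compared against one using the full spectrum, mirroring the full-spectra vs. optimal-wavelength comparison in the study.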
NASA Astrophysics Data System (ADS)
Wang, Hongyan
2017-04-01
This paper addresses the waveform optimization problem for improving the detection performance of multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) radar-based space-time adaptive processing (STAP) in complex environments. By maximizing the output signal-to-interference-plus-noise ratio (SINR), the waveform optimization problem for improving the detection performance of STAP, subject to a constant modulus constraint, is derived. To tackle the resulting nonlinear and complicated optimization problem, a diagonal loading-based method is proposed to reformulate it as a semidefinite programming problem, which can then be solved very efficiently. The optimized waveform is thereby obtained to maximize the output SINR of MIMO-OFDM so that the detection performance of STAP can be improved. The simulation results show that the proposed method can considerably improve the output SINR and detection performance compared with uncorrelated waveforms and the existing MIMO-based STAP method.
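For context, the classical SINR-maximizing receive filter (w ∝ R⁻¹s) underlying such designs can be sketched numerically. This is a textbook illustration with synthetic steering vectors and an assumed interference covariance, not the paper's constant-modulus waveform SDP:

```python
import numpy as np

n = 8
s = np.exp(1j * np.pi * np.sin(0.3) * np.arange(n))   # target steering vector

# Interference-plus-noise covariance: white noise plus one strong interferer
j = np.exp(1j * np.pi * np.sin(-0.5) * np.arange(n))
R = np.eye(n) + 100.0 * np.outer(j, j.conj())

def sinr(w, s, R):
    # Output SINR of a linear filter w for target s against covariance R
    return float(np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R @ w))

w_opt = np.linalg.solve(R, s)   # max-SINR (MVDR-type) filter
w_mf = s.copy()                 # matched filter ignores the interference
print(sinr(w_opt, s, R) >= sinr(w_mf, s, R))
```

The SINR-optimal filter nulls the interferer; the paper's contribution is the harder problem of shaping the transmitted waveform itself under a constant modulus constraint.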
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anghileri, Daniela; Voisin, Nathalie; Castelletti, Andrea F.
In this study, we develop a forecast-based adaptive control framework for Oroville reservoir, California, to assess the value of seasonal and inter-annual forecasts for reservoir operation. We use an Ensemble Streamflow Prediction (ESP) approach to generate retrospective, one-year-long streamflow forecasts based on the Variable Infiltration Capacity hydrology model. The optimal sequence of daily release decisions from the reservoir is then determined by Model Predictive Control, a flexible and adaptive optimization scheme. We assess the forecast value by comparing system performance based on the ESP forecasts with that based on climatology and a perfect forecast. In addition, we evaluate system performance based on a synthetic forecast, which is designed to isolate the contribution of seasonal and inter-annual forecast skill to the overall value of the ESP forecasts. Using the same ESP forecasts, we generalize our results by evaluating forecast value as a function of forecast skill, reservoir features, and demand. Our results show that perfect forecasts are valuable when the water demand is high and the reservoir is sufficiently large to allow for annual carry-over. Conversely, ESP forecast value is highest when the reservoir can shift water on a seasonal basis. On average, for the system evaluated here, the overall ESP value is 35% less than the perfect forecast value. The inter-annual component of the ESP forecast contributes 20-60% of the total forecast value. Improvements in the seasonal component of the ESP forecast would increase the overall ESP forecast value between 15 and 20%.
Placental alpha-microglobulin-1 and combined traditional diagnostic test: a cost-benefit analysis.
Echebiri, Nelson C; McDoom, M Maya; Pullen, Jessica A; Aalto, Meaghan M; Patel, Natasha N; Doyle, Nora M
2015-01-01
We sought to evaluate if the placental alpha-microglobulin (PAMG)-1 test vs the combined traditional diagnostic test (CTDT) of pooling, nitrazine, and ferning would be a cost-beneficial screening strategy in the setting of potential preterm premature rupture of membranes. A decision analysis model was used to estimate the economic impact of PAMG-1 test vs the CTDT on preterm delivery costs from a societal perspective. Our primary outcome was the annual net cost-benefit per person tested. Baseline probabilities and costs assumptions were derived from published literature. We conducted sensitivity analyses using both deterministic and probabilistic models. Cost estimates reflect 2013 US dollars. Annual net benefit from PAMG-1 was $20,014 per person tested, while CTDT had a net benefit of $15,757 per person tested. If the probability of rupture is <38%, PAMG-1 will be cost-beneficial with an annual net benefit of $16,000-37,000 per person tested, while CTDT will have an annual net benefit of $16,000-19,500 per person tested. If the probability of rupture is >38%, CTDT is more cost-beneficial. Monte Carlo simulations of 1 million trials selected PAMG-1 as the optimal strategy with a frequency of 89%, while CTDT was only selected as the optimal strategy with a frequency of 11%. Sensitivity analyses were robust. Our cost-benefit analysis provides the economic evidence for the adoption of PAMG-1 in diagnosing preterm premature rupture of membranes in uncertain presentations and when CTDT is equivocal at 34 to <37 weeks' gestation. Copyright © 2015 Elsevier Inc. All rights reserved.
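A decision-analysis crossover like the 38% threshold above falls out of expressing each strategy's expected net benefit as a function of the rupture probability and finding where the curves intersect. The linear profiles below are illustrative reconstructions chosen to be consistent with the reported benefit ranges; they are not the paper's actual decision-tree inputs:

```python
import numpy as np

# Hypothetical per-person expected net benefits (USD) as linear functions of
# the prior probability p of membrane rupture. Coefficients are illustrative.
def nb_pamg1(p):
    return 37_000 - 55_000 * p   # high benefit at low p, declining steeply

def nb_ctdt(p):
    return 19_500 - 9_000 * p    # flatter profile

p = np.linspace(0, 1, 10001)
crossover = float(p[np.argmin(np.abs(nb_pamg1(p) - nb_ctdt(p)))])
print(round(crossover, 2))  # below this probability, PAMG-1 is preferred
```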
NASA Astrophysics Data System (ADS)
Zhou, Yanlai; Guo, Shenglian; Hong, Xingjun; Chang, Fi-John
2017-10-01
China's inter-basin water transfer projects have gained increasing attention in recent years. This study proposes an intelligent water allocation methodology for establishing optimal inter-basin water allocation schemes and assessing the impacts of water transfer projects on water-demanding sectors in the Hanjiang River Basin of China. We first analyze water demands for water allocation purposes, and then search for optimal water allocation strategies that maximize the water supply to water-demanding sectors and mitigate the negative impacts, using the Standard Genetic Algorithm (SGA) and Adaptive Genetic Algorithm (AGA), respectively. Lastly, the performance indexes of the water supply system are evaluated under different scenarios of inter-basin water transfer projects. The results indicate that the AGA with adaptive crossover and mutation operators could increase the average annual water transfer from the Hanjiang River by 0.79 billion m3 (8.8%), the average annual water transfer from the Changjiang River by 0.18 billion m3 (6.5%), and the average annual hydropower generation by 0.49 billion kW h (5.4%), as well as reduce the average annual unmet water demand by 0.40 billion m3 (9.7%), compared with those of the SGA. We demonstrate that the proposed intelligent water allocation schemes can significantly mitigate the negative impacts of inter-basin water transfer projects on the reliability, vulnerability and resilience of water supply to the demanding sectors in water-supplying basins. This study has a direct bearing on more intelligent and effectual water allocation management under various scenarios of inter-basin water transfer projects.
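The adaptive crossover/mutation idea can be sketched with a small real-coded GA in which poorly performing individuals are mutated more aggressively (a common Srinivas-Patnaik-style heuristic). The objective below is a toy surrogate with a known optimum, not the actual water-allocation model:

```python
import numpy as np

def adaptive_ga(fitness, n_genes, pop_size=40, gens=150, seed=3):
    # Real-coded GA on [0, 1]^n with fitness-adaptive mutation rates.
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, n_genes))
    for _ in range(gens):
        f = np.array([fitness(x) for x in pop])
        elite = pop[np.argmax(f)].copy()
        f_max, f_avg = f.max(), f.mean()
        # Binary tournament selection
        i1, i2 = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(f[i1] >= f[i2], i1, i2)]
        # Arithmetic crossover between consecutive parents
        a = rng.random((pop_size, 1))
        children = a * parents + (1 - a) * np.roll(parents, 1, axis=0)
        # Adaptive mutation: below-average children mutate more
        fc = np.array([fitness(x) for x in children])
        pm = np.where(fc >= f_avg,
                      0.02 * np.clip(f_max - fc, 0, None) / (f_max - f_avg + 1e-12),
                      0.15)
        mask = rng.random(pop.shape) < pm[:, None]
        children[mask] = rng.random(int(mask.sum()))
        children[0] = elite  # elitism preserves the best solution found
        pop = children
    f = np.array([fitness(x) for x in pop])
    return pop[np.argmax(f)], float(f.max())

# Toy objective with known optimum at x = 0.7 in every gene
best, best_f = adaptive_ga(lambda x: -np.sum((x - 0.7) ** 2), n_genes=5)
print(round(best_f, 3))
```

The adaptive schedule lowers disruption of good individuals while keeping exploration pressure on poor ones, which is the qualitative advantage the abstract attributes to the AGA over the SGA.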
Optimal dredge fleet scheduling within environmental work windows.
DOT National Transportation Integrated Search
2016-09-15
The U.S. Army Corps of Engineers (USACE) annually spends more than 100 million dollars on dredging hundreds of navigation projects on more than 12,000 miles of inland and intra-coastal waterways. Building on previous work with USACE, this project exp...
NASA Astrophysics Data System (ADS)
Liu, Dedi; Guo, Shenglian; Shao, Quanxi; Liu, Pan; Xiong, Lihua; Wang, Le; Hong, Xingjun; Xu, Yao; Wang, Zhaoli
2018-01-01
Human activities and climate change have altered the spatial and temporal distribution of water availability which is a principal prerequisite for allocation of different water resources. In order to quantify the impacts of climate change and human activities on water availability and optimal allocation of water resources, hydrological models and optimal water resource allocation models should be integrated. Given that increasing human water demand and varying water availability conditions necessitate adaptation measures, we propose a framework to assess the effects of these measures on optimal allocation of water resources. The proposed model and framework were applied to a case study of the middle and lower reaches of the Hanjiang River Basin in China. Two representative concentration pathway (RCP) scenarios (RCP2.6 and RCP4.5) were employed to project future climate, and the Variable Infiltration Capacity (VIC) hydrological model was used to simulate the variability of flows under historical (1956-2011) and future (2012-2099) conditions. The water availability determined by simulating flow with the VIC hydrological model was used to establish the optimal water resources allocation model. The allocation results were derived under an extremely dry year (with an annual average water flow frequency of 95%), a very dry year (with an annual average water flow frequency of 90%), a dry year (with an annual average water flow frequency of 75%), and a normal year (with an annual average water flow frequency of 50%) during historical and future periods. The results show that the total available water resources in the study area and the inflow of the Danjiangkou Reservoir will increase in the future. However, the uneven distribution of water availability will cause water shortage problems, especially in the boundary areas. 
The effects of adaptation measures, including water saving, and dynamic control of flood limiting water levels (FLWLs) for reservoir operation, were assessed and implemented to alleviate water shortages. The negative impacts from the South-to-North Water Transfer Project (Middle Route) in the mid-lower reaches of the Hanjiang River Basin can be avoided through the dynamic control of FLWLs in Danjiangkou Reservoir, under the historical and future RCP2.6 and RCP4.5 scenarios. However, the effects of adaptation measures are limited due to their own constraints, such as the characteristics of the reservoirs influencing the FLWLs. The utilization of storm water appears necessary to meet future water demand. Overall, the results indicate that the framework for assessing the effects of adaptation measures on water resources allocation might aid water resources management, not only in the study area but also in other places where water availability conditions vary due to climate change and human activities.
Brudecki, K; Kowalska, A; Zagrodzki, P; Szczodry, A; Mroz, T; Janowski, P; Mietelski, J W
2017-03-01
This paper presents results of ¹³¹I thyroid activity measurements in 30 members of the nuclear medicine personnel of the Department of Endocrinology and Nuclear Medicine, Holy Cross Cancer Centre in Kielce, Poland. A whole-body spectrometer equipped with two semiconductor gamma radiation detectors served as the basic research instrument. In ten out of 30 examined staff members, the determined ¹³¹I activity was found to be above the detection limit (DL = 5 Bq of ¹³¹I in the thyroid). The measured activities ranged from (5 ± 2) Bq to (217 ± 56) Bq. The highest activities in thyroids were detected for technical and cleaning personnel, whereas the lowest values were recorded for medical doctors. Having measured the activities, an attempt was made to estimate the corresponding annual effective doses, which were found to range from 0.02 to 0.8 mSv. The highest annual equivalent doses were found for the thyroid, ranging from 0.4 to 15.4 mSv, detected for a cleaner and a technician, respectively. The maximum estimated effective dose corresponds to 32% of the annual background dose in Poland, and to circa 4% of the annual limit for the effective dose due to occupational exposure of 20 mSv per year, in compliance with the value recommended by the International Commission on Radiological Protection.
Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology
NASA Technical Reports Server (NTRS)
Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus
2013-01-01
Changing trends in ecosystem productivity can be quantified using satellite observations of Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annual aggregated time series or based on a seasonal-trend model show better performances than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods on long-term NDVI time series. Particularly, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be improved against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.
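The recommendation above to estimate trends on annually aggregated series can be sketched on synthetic NDVI data (seasonal cycle, linear greening trend and inter-annual variability; all magnitudes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
years, per_year = 30, 12
t = np.arange(years * per_year)
# Synthetic NDVI: seasonal cycle + greening trend + inter-annual variability
ndvi = (0.5 + 0.3 * np.sin(2 * np.pi * t / per_year)
        + 0.002 * (t / per_year)                      # +0.002 NDVI per year
        + np.repeat(rng.normal(0, 0.02, years), per_year))

# Trend on annual aggregated means: averaging over whole years removes the
# seasonal cycle exactly, leaving only trend plus inter-annual noise.
annual = ndvi.reshape(years, per_year).mean(axis=1)
slope_annual = np.polyfit(np.arange(years), annual, 1)[0]
print(round(float(slope_annual), 4))
```

The recovered slope is close to the injected 0.002 NDVI/yr; with larger inter-annual noise the uncertainty of any method grows, which is the performance decline the abstract reports.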
Optimization and evaluation of a method to detect adenoviruses in river water
This dataset includes the recoveries of spiked adenovirus through various stages of experimental optimization procedures. This dataset is associated with the following publication: McMinn, B., A. Korajkic, and A. Grimm. Optimization and evaluation of a method to detect adenoviruses in river water. JOURNAL OF VIROLOGICAL METHODS. Elsevier Science Ltd, New York, NY, USA, 231(1): 8-13, (2016).
NASA Astrophysics Data System (ADS)
Kahl, Annelen; Nguyen, Viet-Anh; Bartlett, Stuart; Sossan, Fabrizio; Lehning, Michael
2016-04-01
For a successful distribution strategy of PV installations, it does not suffice to choose the locations with highest annual total irradiance. Attention needs to be given to spatial correlation patterns of insolation to avoid large system-wide variations, which can cause extended deficits in supply or might even damage the electrical network. One alternative goal instead is to seek configurations that provide the smoothest energy production, with the most reliable and predictable supply. Our work investigates several scenarios, each pursuing a different strategy for a future renewable Switzerland without nuclear power. Based on an estimate for necessary installed capacity for solar power [Bartlett, 2015] we first use heuristics to pre-select realistic placements for PV installations. Then we apply optimization methods to find a subset of locations that provides the best possible combined electricity production. For the first part of the selection process, we use a DEM to exclude high elevation zones which would be difficult to access and which are prone to natural hazards. Then we use land surface cover information to find all zones with potential roof area, deemed suitable for installation of solar panels. The optimization employs Principal Component Analysis of satellite derived irradiance data (Surface Incoming Shortwave Radiation (SIS), based on Meteosat Second Generation sensors) to incorporate a spatial aspect into the selection process that does not simply maximize annual total production but rather provides the most robust supply, by combining regions with anti-correlated cloud cover patterns. Depending on the initial assumptions and constraints, the resulting distribution schemes for PV installations vary with respect to required surface area, annual total and lowest short-term production, and illustrate how important it is to clearly define priorities and policies for a future renewable Switzerland.
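The anti-correlation idea above can be sketched directly: given synthetic daily production anomalies from two opposing weather regimes, the smoothest two-site combination pairs sites across regimes rather than picking two sites from the same regime (regimes and noise levels below are hypothetical):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
days, n_sites = 365, 6
base = rng.normal(0, 1, days)
# Sites 0-2 follow one cloud-cover regime, sites 3-5 the opposite one
regimes = np.vstack([base] * 3 + [-base] * 3)
sites = regimes + rng.normal(0, 0.3, (n_sites, days))

def combined_std(idx):
    # Variability of the pooled production of the selected sites
    return sites[list(idx)].mean(axis=0).std()

# Exhaustive search over all two-site portfolios
best_pair = min(combinations(range(n_sites), 2), key=combined_std)
print(best_pair, round(float(combined_std(best_pair)), 2))
```

The optimal pair mixes the two regimes, so the regime signals cancel and only the local noise remains; the study's PCA-based selection generalizes this to many sites and real satellite irradiance fields.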
Baldi, Germán; Nosetto, Marcelo D; Aragón, Roxana; Aversa, Fernando; Paruelo, José M; Jobbágy, Esteban G
2008-09-03
In the last decades, South American ecosystems underwent important functional modifications due to climate alterations and direct human intervention on land use and land cover. Among remotely sensed data sets, NOAA-AVHRR "Normalized Difference Vegetation Index" (NDVI) represents one of the most powerful tools to evaluate these changes thanks to their extended temporal coverage. In this paper we explored the possibilities and limitations of three commonly used NOAA-AVHRR NDVI series (PAL, GIMMS and FASIR) to detect ecosystem functional changes in the South American continent. We performed pixel-based linear regressions for four NDVI variables (average annual, maximum annual, minimum annual and intra-annual coefficient of variation) for the 1982-1999 period and (1) analyzed the convergences and divergences of significant multi-annual trends identified across all series, (2) explored the degree of aggregation of the trends using the O-ring statistic, and (3) evaluated observed trends using independent information on ecosystem functional changes in five focal regions. Several differences arose in terms of the patterns of change (the sign, localization and total number of pixels with changes). FASIR presented the highest proportion of changing pixels (32.7%) and GIMMS the lowest (16.2%). PAL and FASIR data sets showed the highest agreement, with a convergence of detected trends on 71.2% of the pixels. Even though positive and negative changes showed substantial spatial aggregation, important differences in the scale of aggregation emerged among the series, with GIMMS showing the smaller scale (≤11 pixels). The independent evaluations suggest higher accuracy in the detection of ecosystem changes among PAL and FASIR series than with GIMMS, as they detected trends that match expected shifts. In fact, this last series eliminated most of the long term patterns over the continent. 
For example, in the "Eastern Paraguay" and "Uruguay River margins" focal regions, the extensive changes due to land use and land cover change expansion were detected by PAL and FASIR, but completely ignored by GIMMS. Although the technical explanation of the differences remains unclear and needs further exploration, we found that the evaluation of this type of remote sensing tools should not only be focused at the level of assumptions (i.e. physical or mathematical aspects of image processing), but also at the level of results (i.e. contrasting observed patterns with independent proofs of change). We finally present the online collaborative initiative "Land ecosystem change utility for South America", which facilitates this type of evaluations and helps to identify the most important functional changes of the continent.
Follow-up for women after treatment for cervical cancer: a systematic review.
Elit, Laurie; Fyles, Anthony W; Devries, Michaela C; Oliver, Thomas K; Fung-Kee-Fung, Michael
2009-09-01
To determine the optimal recommended program for the follow-up of patients who are disease free after completed primary therapy for cervical cancer. Systematic search of MEDLINE, EMBASE and the Cochrane Library databases (1980-November 2007). Seventeen retrospective trials were identified. Most studies reported similar intervals for follow-up, ranging from a low of 9 visits to a high of 28 visits over 5 years. Follow-up visits typically occurred once every 3-4 months for the first 2 years, every 6 months for the next 3 years and then annually until year 10. All 17 trials reported that a physical exam was performed at each visit. Vaginal vault cytology was analyzed in 13 trials. Other routine surveillance tests included chest x-ray, ultrasound, CT scans, MRI, intravenous pyelography and tumour markers. Median time to recurrence ranged from 7-36 months after primary treatment. Rates of recurrence ranged from 8-26%, with 14-57% of patients recurring in the pelvis and 15-61% recurring at distant or multiple sites. Of the 8-26% of patients who experienced disease recurrence, the vast majority, 89-99%, had recurred by year 5. Upon recurrence, median survival was 7-17 months. Asymptomatic recurrent disease was detected by physical exam in 29-71%, chest x-ray in 20-47%, CT in 0-34% and vaginal vault cytology in 0-17% of patients, respectively. There is modest, low-quality evidence to inform the most appropriate follow-up strategy for patients with cervical cancer who are clinically disease free after receiving primary treatment. Follow-up visits should include a complete physical examination, whereas frequent vaginal vault cytology does not add significantly to the detection of early disease recurrence. Patients should return to annual population-based screening after 5 years of recurrence-free follow-up.
NASA Astrophysics Data System (ADS)
Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.
2016-12-01
The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. 
We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
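The proportional allocation with a per-class floor and ceiling described above can be sketched as follows (the 20,000-pixel total and the 600/8,000 per-class bounds come from the abstract; the class names and proportions are hypothetical):

```python
def allocate_training_pixels(class_proportions, total=20000, min_px=600, max_px=8000):
    """Allocate training pixels proportionally to class occurrence,
    then clip each class to the [min_px, max_px] range."""
    return {c: int(min(max(total * p, min_px), max_px))
            for c, p in class_proportions.items()}

# Hypothetical class proportions for one Landsat-scene-sized area.
props = {"forest": 0.55, "grass": 0.25, "crop": 0.15, "water": 0.04, "urban": 0.01}
alloc = allocate_training_pixels(props)
# Rare classes are boosted up to the floor; dominant classes are capped.
```

Note that after clipping the realized total no longer sums exactly to 20,000; the abstract treats these bounds as a remedy for unbalanced training data rather than as a strict budget.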
2015-2016 Palila abundance estimates
Camp, Richard J.; Brinck, Kevin W.; Banko, Paul C.
2016-01-01
The palila (Loxioides bailleui) population was surveyed annually during 1998−2016 on Mauna Kea Volcano to determine abundance, population trend, and spatial distribution. In the latest surveys, the 2015 population was estimated at 852−1,406 birds (point estimate: 1,116) and the 2016 population was estimated at 1,494−2,385 (point estimate: 1,934). Similar numbers of palila were detected during the first and subsequent counts within each year during 2012−2016; the proportion of the total annual detections in each count ranged from 46% to 56%; and there was no difference in the detection probability due to count sequence. Furthermore, conducting repeat counts improved the abundance estimates by reducing the width of the confidence intervals between 9% and 32% annually. This suggests that multiple counts do not affect bird or observer behavior and can be continued in the future to improve the precision of abundance estimates. Five palila were detected on supplemental survey stations in the Ka‘ohe restoration area, outside the core survey area but still within Palila Critical Habitat (one in 2015 and four in 2016), suggesting that palila are present in habitat that is recovering from cattle grazing on the southwest slope. The average rate of decline during 1998−2016 was 150 birds per year. Over the 18-year monitoring period, the estimated rate of change equated to a 58% decline in the population.
Survival of female Lesser Scaup: Effects of body size, age, and reproductive effort
Rotella, J.J.; Clark, R.G.; Afton, A.D.
2003-01-01
In birds, larger females generally have greater breeding propensity, reproductive investment, and success than do smaller females. However, optimal female body size also depends on how natural selection acts during other parts of the life cycle. Larger female Lesser Scaup (Aythya affinis) produce larger eggs than do smaller females, and ducklings from larger eggs survive better than those hatching from smaller eggs. Accordingly, we examined patterns of apparent annual survival for female scaup and tested whether natural selection on female body size primarily was stabilizing, a frequent assumption in studies of sexually dimorphic species in which males are the larger sex, or was directional, counter-acting reproductive advantages of large size. We estimated survival using mark-recapture methods for individually marked females from two study sites in Canada (Erickson, Manitoba; St. Denis, Saskatchewan). Structurally larger (adults) and heavier (ducklings) females had lower survival than did smaller individuals in Manitoba; no relationship was detected in adults from Saskatchewan. Survival of adult females declined with indices of increasing reproductive effort at both sites; consequently, the cost of reproduction could explain age-related patterns of breeding propensity in scaup. Furthermore, if larger females are more likely to breed than are smaller females, then cost of reproduction also may help explain why survival was lower for larger females. Overall, we found that advantages of large body size of female scaup during breeding or as young ducklings apparently were counteracted by natural selection favoring lightweight juveniles and structurally smaller adult females through higher annual survival.
Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi
2015-12-01
A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more pronounced for polar compounds than for non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.
Model-Based Design of Tree WSNs for Decentralized Detection †
Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam
2015-01-01
The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments have been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches. PMID:26307989
Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario
2014-01-01
In this paper we propose mathematical models to plan a Next Generation Sequencing (NGS) experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimization of the experiment cost. Then, two different strategies to replicate patients in pools are proposed, which have the advantage of decreasing overall costs. Finally, a multi-objective optimization formulation is proposed, in which the trade-off between the probability of detecting a mutation and overall costs is taken into account. The proposed solutions are designed to provide two advantages: (i) the solution guarantees that mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show that replicating pools can decrease overall experimental cost, making pooling an interesting option.
NASA Astrophysics Data System (ADS)
Smettem, Keith; Waring, Richard; Callow, Nik; Wilson, Melissa; Mu, Qiaozhen
2013-04-01
There is increasing concern that widespread forest decline could occur in regions of the world where droughts are predicted to increase in frequency and severity as a result of climate change. Ecological optimality proposes that the long-term average canopy size of undisturbed perennial vegetation is tightly coupled to climate. The average annual leaf area index (LAI) is an indicator of canopy cover, and the difference between the annual maximum and minimum LAI is an indicator of annual leaf turnover. In this study we analysed satellite-derived estimates of monthly LAI across forested coastal catchments of South-west Western Australia over a 12 year period (2000-2011) that included the driest year on record for the last 60 years. We observed that over the 12 year study period, the spatial pattern of average annual satellite-derived LAI values was linearly related to mean annual rainfall. However, inter-annual changes to LAI in response to changes in annual rainfall were far less than expected from the long-term LAI-rainfall trend. This buffered response was investigated using a physiological growth model and attributed to availability of deep soil moisture and/or groundwater storage. The maintenance of high LAIs may be linked to a long-term decline in areal average underground water storage and diminished summer flows, with a trend towards more ephemeral flow regimes.
NASA Astrophysics Data System (ADS)
Min, Qing-xu; Zhu, Jun-zhen; Feng, Fu-zhou; Xu, Chao; Sun, Ji-wei
2017-06-01
In this paper, lock-in vibrothermography (LVT) is used for defect detection. Specifically, for a metal plate with an artificial fatigue crack, the temperature rise of the defective area is used to analyze the influence of different test conditions, i.e., engagement force, excitation intensity, and modulation frequency. Multivariate nonlinear and logistic regression models are employed to estimate the POD (probability of detection) and POA (probability of alarm) of the fatigue crack, respectively. The resulting optimal selection of test conditions is presented. The study aims to provide an optimized method for selecting test conditions in a vibrothermography system with enhanced detection capability.
ERIC Educational Resources Information Center
Office of Inspector General (ED), Washington, DC.
The Office of Inspector General (OIG), mandated to provide audit, investigation, fraud detection and prevention, and some security services to the U.S. Department of Education, presents its third semi-annual report in this document. OIG audit activities are recounted in the first section, which details audit accomplishments and highlights audits…
Estimating actual evapotranspiration for forested sites: modifications to the Thornthwaite Model
Randall K. Kolka; Ann T. Wolf
1998-01-01
A previously coded version of the Thornthwaite water balance model was used to estimate annual actual evapotranspiration (AET) for 29 forested sites between 1900 and 1993 in the Upper Great Lakes area. Approximately 8 percent of the data sets calculated AET in error. Errors were detected in months when estimated AET was greater than potential evapotranspiration. Annual...
IN VITRO KILLING OF PERKINSUS MARINUS BY HEMOCYTES OF OYSTERS CRASSOSTREA VIRGINICA
Presented at the 92nd Annual Meeting of the National Shellfisheries Association, 19-23 March 2000, Seattle, WA.
A colorimetric microbicidal assay was adapted, optimized and used in experiments to characterize the capacity of eastern oyster (Crassostrea virginica) hemocytes...
NASA Technical Reports Server (NTRS)
Melton, Robert G. (Editor); Wood, Lincoln J. (Editor); Thompson, Roger C. (Editor); Kerridge, Stuart J. (Editor)
1993-01-01
Papers from the third annual Spaceflight Mechanics Meeting are presented. The topics covered include the following: attitude dynamics and control; large flexible structures; intercept and rendezvous; rendezvous and orbit transfer; and trajectory optimization.
Optimizing Probability of Detection Point Estimate Demonstration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible.
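A minimal sketch of the binomial point-estimate arithmetic behind such demonstrations (the 29-flaw, zero-miss "90/95" convention is standard POD practice; the function names are ours):

```python
def demonstrated_pod(n, alpha=0.05):
    """Lower 100*(1-alpha)% confidence bound on POD after n detections in
    n trials (zero misses): solve pod**n = alpha for pod."""
    return alpha ** (1.0 / n)

def prob_pass_demo(true_pod, n):
    """Probability of passing an n-of-n (no-miss) demonstration,
    given the true per-flaw POD."""
    return true_pod ** n

pod_29 = demonstrated_pod(29)   # ~0.902: 29 of 29 hits demonstrates 90% POD at 95% confidence
ppd = prob_pass_demo(0.95, 29)  # chance that a system with true POD 0.95 passes
```

The trade-off the abstract optimizes is visible here: shrinking the flaw size lowers the true POD, which in turn drives down the probability of passing the demonstration.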
Maximizing the Biochemical Resolving Power of Fluorescence Microscopy
Esposito, Alessandro; Popleteeva, Marina; Venkitaraman, Ashok R.
2013-01-01
Most recent advances in fluorescence microscopy have focused on achieving spatial resolutions below the diffraction limit. However, the inherent capability of fluorescence microscopy to non-invasively resolve different biochemical or physical environments in biological samples has not yet been formally described, because an adequate and general theoretical framework is lacking. Here, we develop a mathematical characterization of the biochemical resolution in fluorescence detection with Fisher information analysis. To improve the precision and the resolution of quantitative imaging methods, we demonstrate strategies for the optimization of fluorescence lifetime, fluorescence anisotropy and hyperspectral detection, as well as different multi-dimensional techniques. We describe optimized imaging protocols, provide optimization algorithms and describe precision and resolving power in biochemical imaging thanks to the analysis of the general properties of Fisher information in fluorescence detection. These strategies enable the optimal use of the information content available within the limited photon-budget typically available in fluorescence microscopy. This theoretical foundation leads to a generalized strategy for the optimization of multi-dimensional optical detection, and demonstrates how the parallel detection of all properties of fluorescence can maximize the biochemical resolving power of fluorescence microscopy, an approach we term Hyper Dimensional Imaging Microscopy (HDIM). Our work provides a theoretical framework for the description of the biochemical resolution in fluorescence microscopy, irrespective of spatial resolution, and for the development of a new class of microscopes that exploit multi-parametric detection systems. PMID:24204821
Optimization of entanglement witnesses
NASA Astrophysics Data System (ADS)
Lewenstein, M.; Kraus, B.; Cirac, J. I.; Horodecki, P.
2000-11-01
An entanglement witness (EW) is an operator that allows the detection of entangled states. We give necessary and sufficient conditions for such operators to be optimal, i.e., to detect entangled states in an optimal way. We show how to optimize general EWs, and then we particularize our results to the nondecomposable ones; the latter are those that can detect positive partial transpose entangled states (PPTES's). We also present a method to systematically construct and optimize this last class of operators based on the existence of "edge" PPTES's, i.e., states that violate the range separability criterion [Phys. Lett. A 232, 333 (1997)] in an extreme manner. This method also permits a systematic construction of nondecomposable positive maps (PM's). Our results lead to a sufficient condition for entanglement in terms of nondecomposable EW's and PM's. Finally, we illustrate our results by constructing optimal EWs acting on H=C^2⊗C^4. The corresponding PM's constitute examples of PM's with minimal "qubit" domains, or, equivalently, minimal Hermitian conjugate codomains.
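As a concrete illustration of the witness idea (this is the standard textbook witness for a two-qubit Bell state, not one of the optimized nondecomposable constructions of the paper):

```python
import numpy as np

# Witness for |phi+> = (|00> + |11>)/sqrt(2):  W = I/2 - |phi+><phi+|.
# Tr(W rho) >= 0 for every separable rho; a negative value flags entanglement.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
W = np.eye(4) / 2 - np.outer(phi, phi)

rho_bell = np.outer(phi, phi)    # entangled state: Tr(W rho) is about -0.5
rho_mixed = np.eye(4) / 4        # separable state: Tr(W rho) is about +0.25
```

The separable maximally mixed state yields a nonnegative expectation value, while the Bell state is detected; optimization in the paper's sense means shifting W so that it detects as many entangled states as possible.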
2017-01-01
The Annual Crop Planning (ACP) problem is a recently introduced problem in the literature. This study further expounds on the problem by presenting a new mathematical formulation based on market economic factors. To determine solutions, a new local search metaheuristic algorithm, called the enhanced Best Performance Algorithm (eBPA), is investigated. The eBPA's results are compared against two well-known local search metaheuristic algorithms, Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems. PMID:28792495
Overview of field gamma spectrometries based on Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
Design of optical-electronic devices and systems involves selecting technical solutions that are optimal according to certain criteria under given initial requirements and conditions. The defining characteristic of an optical-electronic system (OES) for any purpose, and its most important capability, is threshold detection; the required functional quality of the device or system is achieved on the basis of this property. The design criteria and optimization methods must therefore be subordinated to the goal of better detectability, which generally reduces to the problem of optimal selection of expected (predetermined) signals under predetermined observation conditions. Thus, the main purpose of optimizing the system for detectability is the choice of circuits and components that provide the most effective selection of a target.
Santana, Victor M; Alday, Josu G; Lee, HyoHyeMi; Allen, Katherine A; Marrs, Rob H
2016-01-01
A present challenge in fire ecology is to optimize management techniques so that ecological services are maximized and C emissions minimized. Here, we modeled the effects of different prescribed-burning rotation intervals and wildfires on carbon emissions (present and future) in British moorlands. Biomass-accumulation curves from four Calluna-dominated ecosystems along a north-south gradient in Great Britain were calculated and used within a matrix model based on Markov chains to calculate above-ground biomass loads and annual C emissions under different prescribed-burning rotation intervals. Additionally, we assessed the interaction of these parameters with decreasing wildfire return intervals. We observed that litter accumulation patterns varied between sites. Northern sites (colder and wetter) accumulated lower amounts of litter over time than southern sites (hotter and drier). The accumulation patterns of the living vegetation dominated by Calluna were determined by site-specific conditions. The optimal prescribed-burning rotation interval for minimizing annual carbon emissions also differed between sites: the optimal rotation interval for northern sites was between 30 and 50 years, whereas for southern sites a hump-backed relationship was found, with the optimal interval either between 8 and 10 years or between 30 and 50 years. Increasing wildfire frequency interacted with prescribed-burning rotation intervals by both increasing C emissions and modifying the optimum prescribed-burning interval for minimum C emissions. This highlights the importance of studying site-specific biomass accumulation patterns with respect to environmental conditions to identify suitable fire-rotation intervals that minimize C emissions.
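The annualized-emission calculation behind the optimal rotation interval can be sketched with a generic saturating fuel-accumulation curve (all parameter values are illustrative placeholders, not the fitted site curves of the study):

```python
import math

def annual_emission(T, b_max=20.0, k=0.08, f=0.6):
    """Average annual C loss when fuel accumulating as
    B(t) = b_max * (1 - exp(-k*t)) is burned every T years,
    releasing a fraction f of the accumulated biomass."""
    return f * b_max * (1 - math.exp(-k * T)) / T

# Scan candidate rotation intervals (5-50 years) for the minimum.
best_T = min(range(5, 51), key=annual_emission)
```

With a single saturating curve the annualized loss falls monotonically with T, so the scan picks the longest interval, consistent with the 30-50 year optima reported for northern sites; reproducing the hump-backed southern pattern requires site-specific accumulation curves and the wildfire interaction described in the abstract.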
The complete proof on the optimal ordering policy under cash discount and trade credit
NASA Astrophysics Data System (ADS)
Chung, Kun-Jen
2010-04-01
Huang ((2005), 'Buyer's Optimal Ordering Policy and Payment Policy under Supplier Credit', International Journal of Systems Science, 36, 801-807) investigates the buyer's optimal ordering policy and payment policy under supplier credit. His inventory model is correct and interesting. However, he uses an algebraic method to locate the optimal solution of the annual total relevant cost TRC(T) and ignores the role of the functional behaviour of TRC(T) in locating that optimum. As argued in this article, Huang needs to explore the functional behaviour of TRC(T) to justify his solution. From the standpoint of logic, the proof of Theorem 1 in Huang therefore has shortcomings that make its validity questionable. The main purpose of this article is to remove and correct those shortcomings and to present complete proofs for Huang's results.
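Chung's point, that one must examine the functional behaviour of TRC(T) rather than rely on algebra alone, can be illustrated on a simple EOQ-style annual cost (a generic stand-in, not Huang's trade-credit model):

```python
# Annual total relevant cost with ordering cost A per cycle and
# holding cost h per unit per year at demand rate D.
A, h, D = 100.0, 2.0, 500.0

def TRC(T):
    return A / T + h * D * T / 2

# Verify convexity numerically, then locate the minimizer on a grid.
assert TRC(0.3) + TRC(0.6) > 2 * TRC(0.45)    # midpoint convexity check
T_star = min((0.01 * k for k in range(1, 2000)), key=TRC)
closed_form = (2 * A / (h * D)) ** 0.5        # analytic minimizer, about 0.447
```

Only because TRC is convex here does the algebraic stationary point coincide with the global minimum; Chung's criticism is precisely that such coincidence must be established from the functional behaviour, not assumed.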
Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah
2014-01-01
This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms, being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem. PMID:24883369
A novel method for overlapping community detection using Multi-objective optimization
NASA Astrophysics Data System (ADS)
Ebrahimi, Morteza; Shahmoradi, Mohammad Reza; Heshmati, Zainabolhoda; Salehi, Mostafa
2018-09-01
The problem of community detection, one of the most important applications of network science, can be addressed effectively by multi-objective optimization. In this paper, we present a novel, efficient method based on this approach, and introduce the idea of using all Pareto fronts to detect overlapping communities. The proposed method has two main advantages over other multi-objective optimization based approaches: scalability, and the ability to find overlapping communities. Unlike most previous work, the proposed method finds overlapping communities effectively, by extracting appropriate communities from all the Pareto-optimal solutions instead of choosing a single optimal solution. Empirical experiments on different features of separated and overlapping communities, on both synthetic and real networks, show that the proposed method performs better in comparison with other methods.
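The key step, extracting communities from all Pareto-optimal solutions rather than one chosen solution, presupposes a non-dominated filter like the following sketch (objective values and labels are hypothetical; real objectives would be partition-quality measures such as intra-community density versus inter-community sparsity):

```python
def pareto_front(solutions):
    """Return the non-dominated subset for two objectives to be maximized.
    Each solution is a tuple (obj1, obj2, label)."""
    return [s for s in solutions
            if not any(o[0] >= s[0] and o[1] >= s[1] and o[:2] != s[:2]
                       for o in solutions)]

sols = [(0.9, 0.2, "A"), (0.5, 0.5, "B"), (0.2, 0.9, "C"), (0.4, 0.4, "D")]
front = pareto_front(sols)   # D is dominated by B; A, B, C survive
```

Each surviving solution represents a different trade-off between the two objectives; the method described above then mines community structure from all of them rather than discarding all but one.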
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
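The Markov-model integration over snapshots amounts to a recursive Bayesian (forward-algorithm) filter; a toy sketch over a few range cells (the transition structure and likelihood numbers are invented for illustration, not derived from a propagation model):

```python
import numpy as np

n = 5                                   # grid of candidate source cells
P = np.zeros((n, n))                    # Markov motion: stay or move one cell
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            P[i, j] = 1.0
P /= P.sum(axis=1, keepdims=True)

# Per-snapshot likelihood of the array data under each position hypothesis
# (in practice these would come from the matched-field beamformer output).
likelihoods = np.array([
    [0.10, 0.60, 0.20, 0.05, 0.05],     # snapshot 1: energy near cell 1
    [0.10, 0.30, 0.50, 0.05, 0.05],     # snapshot 2: drifting toward cell 2
    [0.05, 0.10, 0.60, 0.20, 0.05],     # snapshot 3
])

belief = np.full(n, 1.0 / n)            # uniform prior over position
for lk in likelihoods:
    belief = lk * (P.T @ belief)        # predict (Markov step), then update
    belief /= belief.sum()

track_estimate = int(np.argmax(belief)) # most probable current cell
```

Because the prediction step spreads probability only to adjacent cells, the filter maintains track through snapshots without assuming uniform motion, which is the benefit the abstract attributes to the Markov source model.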
Zheng, Qianwang; Mikš-Krajnik, Marta; Yang, Yishan; Xu, Wang; Yuk, Hyun-Gyun
2014-09-01
Conventional culture detection methods are time consuming and labor-intensive. For this reason, an alternative rapid method combining real-time PCR and immunomagnetic separation (IMS) was investigated in this study to detect both healthy and heat-injured Salmonella Typhimurium on raw duck wings. Firstly, the IMS method was optimized by determining the capture efficiency of Dynabeads® on Salmonella cells on raw duck wings with different bead incubation (10, 30 and 60 min) and magnetic separation (3, 10 and 30 min) times. Secondly, three Taqman primer sets, Sal, invA and ttr, were evaluated to optimize the real-time PCR protocol by comparing five parameters: inclusivity, exclusivity, PCR efficiency, detection probability and limit of detection (LOD). Thirdly, the optimized real-time PCR, in combination with IMS (PCR-IMS) assay, was compared with a standard ISO and a real-time PCR (PCR) method by analyzing artificially inoculated raw duck wings with healthy and heat-injured Salmonella cells at 10^1 and 10^0 CFU/25 g. Finally, the optimized PCR-IMS assay was validated for Salmonella detection in naturally contaminated raw duck wing samples. Under optimal IMS conditions (30 min bead incubation and 3 min magnetic separation times), approximately 85 and 64% of S. Typhimurium cells were captured by Dynabeads® from pure culture and inoculated raw duck wings, respectively. Although Sal and ttr primers exhibited 100% inclusivity and exclusivity for 16 Salmonella spp. and 36 non-Salmonella strains, the Sal primer showed lower LOD (10^3 CFU/ml) and higher PCR efficiency (94.1%) than the invA and ttr primers. Moreover, for Sal and invA primers, 100% detection probability on raw duck wings suspension was observed at 10^3 and 10^4 CFU/ml with and without IMS, respectively. Thus, the Sal primer was chosen for further experiments. 
The optimized PCR-IMS method was significantly (P=0.0011) better at detecting healthy Salmonella cells after 7-h enrichment than the traditional PCR method. However, there was no significant difference between the two methods with a longer enrichment time (14 h). The diagnostic accuracy of PCR-IMS was shown to be 98.3% through the validation study. These results indicate that the optimized PCR-IMS method could provide a sensitive, specific and rapid detection method for Salmonella on raw duck wings, enabling 10-h detection. However, a longer enrichment time could be needed for resuscitation and reliable detection of heat-injured cells.
Optic disc detection using ant colony optimization
NASA Astrophysics Data System (ADS)
Dias, Marcy A.; Monteiro, Fernando C.
2012-09-01
Retinal fundus images are used in the treatment and diagnosis of several eye diseases, such as diabetic retinopathy and glaucoma. This paper proposes a new method to detect the optic disc (OD) automatically, because knowledge of the OD location is essential to the automatic analysis of retinal images. Ant Colony Optimization (ACO) is an optimization algorithm inspired by the foraging behaviour of some ant species that has been applied in image processing for edge detection. Recently, ACO was used in fundus images to detect edges, and therefore to segment the OD and other anatomical retinal structures. We present an algorithm for the detection of the OD in the retina which takes advantage of the Gabor wavelet transform, entropy and the ACO algorithm. Forty retinal images from the DRIVE database were used to evaluate the performance of our method.
Economic trade-offs between genetic improvement and longevity in dairy cattle.
De Vries, A
2017-05-01
Genetic improvement in sires used for artificial insemination (AI) is increasing faster compared with a decade ago. The genetic merit of replacement heifers is also increasing faster and the genetic lag with older cows in the herd increases. This may trigger greater cow culling to capture this genetic improvement. On the other hand, lower culling rates are often viewed favorably because the costs and environmental effects of maintaining herd size are generally lower. Thus, there is an economic trade-off between genetic improvement and longevity in dairy cattle. The objective of this study was to investigate the principles, literature, and magnitude of these trade-offs. Data from the Council on Dairy Cattle Breeding show that the estimated breeding value of the trait productive life has increased for 50 yr but the actual time cows spend in the herd has not increased. The average annual herd cull rate remains at approximately 36% and cow longevity is approximately 59 mo. The annual increase in average estimated breeding value of the economic index lifetime net merit of Holstein sires is accelerating from $40/yr when the sire entered AI around 2002 to $171/yr for sires that entered AI around 2012. The expectation is therefore that heifers born in 2015 are approximately $50 more profitable per lactation than heifers born in 2014. Asset replacement theory shows that assets should be replaced sooner when the challenging asset is technically improved. Few studies have investigated the direct effects of genetic improvement on optimal cull rates. A 35-yr-old study found that the economically optimal cull rates were in the range of 25 to 27%, compared with the lowest possible involuntary cull rate of 20%. Only a small effect was observed of using the best surviving dams to generate the replacement heifer calves. Genetic improvement from sires had little effect on the optimal cull rate. 
Another study that optimized culling decisions for individual cows also showed that the effect of changes in the rate of genetic improvement in milk revenue minus feed cost on herd longevity was relatively small. Reduced involuntary cull rates improved profitability, but also increased optimal voluntary culling. Finally, an economically optimal culling model with prices from 2015 confirmed that optimal annual cull rates were insensitive to heifer prices and therefore insensitive to genetic improvement in heifers. In conclusion, genetic improvement is important but does not warrant short cow longevity. Economic cow longevity continues to depend more on cow depreciation than on accelerated genetic improvement in heifers. This is confirmed by old and new studies. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
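The asset replacement logic cited above can be made concrete with a toy calculation: keep the asset for the age that maximizes average profit per year, net of the one-time replacement cost; a technically improved challenger raises that average and so shortens the optimal keep. All figures below are invented for illustration, not data from the study.

```python
# Hypothetical replacement-age calculation (all numbers invented).
# Keep the asset while doing so raises the average annual profit of
# the whole keep-then-replace cycle.

def optimal_replacement_age(annual_profit, replacement_cost):
    """Return (age, avg) where age maximizes cumulative profit per year
    of ownership, net of the replacement cost paid up front."""
    best_age, best_avg = None, float("-inf")
    cumulative = -replacement_cost
    for age, profit in enumerate(annual_profit, start=1):
        cumulative += profit
        avg = cumulative / age
        if avg > best_avg:
            best_age, best_avg = age, avg
    return best_age, best_avg

# Profit per year declines with age; a replacement costs 1200 up front.
profits = [900, 1000, 950, 850, 700, 500]
age, avg = optimal_replacement_age(profits, 1200)
```

Under these invented numbers the optimum is to replace after year 5; a cheaper or more profitable replacement shifts the optimum earlier, which is the trade-off the abstract describes.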
Network-Based Real-time Integrated Fire Detection and Alarm (FDA) System with Building Automation
NASA Astrophysics Data System (ADS)
Anwar, F.; Boby, R. I.; Rashid, M. M.; Alam, M. M.; Shaikh, Z.
2017-11-01
Fire alarm systems have become an increasingly important lifesaving technology in many applications to detect, monitor and control fire hazards. A large sum of money is spent annually to install and maintain fire alarm systems in buildings to protect property and lives from the unexpected spread of fire. Several methods have already been developed and are continually being improved to reduce cost as well as increase quality. An integrated Fire Detection and Alarm (FDA) system with building automation was studied to reduce cost and improve reliability by preventing false alarms. This work proposes an improved framework for an FDA system that ensures a robust intelligent network of FDA control panels in real time. A shortest-path algorithm was chosen for a series of buildings connected by a fiber-optic network. The framework shares information and communicates with each fire alarm panel connected in a peer-to-peer configuration and declares the network state, using network address declaration, from any building connected to the network. The fiber-optic connection was proposed to reduce signal noise, thereby increasing area coverage, enabling real-time communication and improving long-term safety. Based on this proposed method, an experimental setup was designed and a prototype system was developed to validate the performance in practice. A distributed network system was also proposed to connect with an optional remote monitoring terminal panel to validate the proposed network performance and ensure fire survivability where information is sequentially transmitted. The proposed FDA system differs from traditional fire detection and alarm systems in terms of topology, as it manages a group of buildings in an optimal and efficient manner.
Introduction
NASA Technical Reports Server (NTRS)
Hilsenrath, E.; Heath, D. F.; Schlesinger, B. M.
1978-01-01
The first two years of Backscattered Ultraviolet (BUV) ozone data from the Nimbus-4 spacecraft were reprocessed. The seasonal variations of total ozone for the period April 1970 to April 1972 are described using daily zonal means to 10 deg latitude zones and a time-latitude cross section. In addition, the BUV data are compared with analyzed Dobson data and with IRIS data also obtained from the Nimbus-4 spacecraft. A harmonic analysis was performed on the daily zonal means. Amplitudes, days of peaks, and percentage of variance were computed for annual and semi-annual waves and for higher harmonics of an annual period for the two years. Asymmetries are found in the annual waves in the two hemispheres, with a subtle interannual difference which may be due to changes in the general circulation. A significant semi-annual component is detected in the tropics for the first year, which appears to result from influences of the annual waves in the two hemispheres.
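The harmonic analysis described above can be sketched as a Fourier fit: with daily data over a full year, projecting the series onto annual and semi-annual sinusoids gives each wave's amplitude and day of peak directly. The series below is synthetic, not BUV data.

```python
import math

# Fit one harmonic of the given period to a daily series and report its
# amplitude and day of peak (discrete Fourier projection).
def harmonic(series, period):
    n = len(series)
    a = sum(v * math.cos(2 * math.pi * t / period) for t, v in enumerate(series)) * 2 / n
    b = sum(v * math.sin(2 * math.pi * t / period) for t, v in enumerate(series)) * 2 / n
    amplitude = math.hypot(a, b)
    day_of_peak = (math.atan2(b, a) * period / (2 * math.pi)) % period
    return amplitude, day_of_peak

# Synthetic ozone-like series: mean 300 DU, annual wave of amplitude 25
# peaking on day 90, semi-annual wave of amplitude 8 peaking on day 30.
series = [300
          + 25 * math.cos(2 * math.pi * (t - 90) / 365.0)
          + 8 * math.cos(2 * math.pi * (t - 30) / 182.5)
          for t in range(365)]

amp1, peak1 = harmonic(series, 365.0)   # annual wave
amp2, peak2 = harmonic(series, 182.5)   # semi-annual wave
```

Because the annual and semi-annual frequencies are orthogonal over a full year of daily samples, each projection recovers its wave's amplitude and day of peak exactly, which is what makes amplitudes, days of peaks, and explained variance straightforward to tabulate per zone.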
Structural damage identification using an enhanced thermal exchange optimization algorithm
NASA Astrophysics Data System (ADS)
Kaveh, A.; Dadras, A.
2018-03-01
The recently developed thermal exchange optimization (TEO) algorithm is enhanced and applied to a damage detection problem. An offline parameter tuning approach is utilized to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.
Time and frequency constrained sonar signal design for optimal detection of elastic objects.
Hamschin, Brandon; Loughlin, Patrick J
2013-04-01
In this paper, the task of model-based transmit signal design for optimizing detection is considered. Building on past work that designs the spectral magnitude for optimizing detection, two methods for synthesizing minimum duration signals with this spectral magnitude are developed. The methods are applied to the design of signals that are optimal for detecting elastic objects in the presence of additive noise and self-noise. Elastic objects are modeled as linear time-invariant systems with known impulse responses, while additive noise (e.g., ocean noise or receiver noise) and acoustic self-noise (e.g., reverberation or clutter) are modeled as stationary Gaussian random processes with known power spectral densities. The first approach finds the waveform that preserves the optimal spectral magnitude while achieving the minimum temporal duration. The second approach yields a finite-length time-domain sequence by maximizing temporal energy concentration, subject to the constraint that the spectral magnitude is close (in a least-squares sense) to the optimal spectral magnitude. The two approaches are then connected analytically, showing the former is a limiting case of the latter. Simulation examples that illustrate the theory are accompanied by discussions that address practical applicability and how one might satisfy the need for target and environmental models in the real-world.
A space-time look at two-phase estimation for improved annual inventory estimates
Jay Breidt; Jean Opsomer; Xiyue Liao; Gretchen Moisen
2015-01-01
Over the past several years, three sets of new temporal remote sensing data have become available, improving FIA's ability to detect, characterize and forecast land cover changes. First, historic Landsat data have been processed for the conterminous US to provide disturbance history, agents of change, and fitted spectral trajectories annually over the last 30+ years at...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behboodi, Sahand; Chassin, David P.; Djilali, Ned
This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus satisfying transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, both the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by $12.30B and $10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by $10.70B, while increasing the annual surplus by $9.32B in comparison to the standalone case.
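The inter-area transfer idea above can be illustrated with a deliberately tiny sketch: two areas with linear marginal-cost curves and fixed demands, where we brute-force the export level that minimizes global production cost (equivalently, maximizes surplus) subject to a transmission limit. The curves, demands and limit are invented; the paper's actual model is far richer.

```python
# Toy two-area dispatch: area 1 exports t to area 2 (imports if t < 0),
# subject to an intertie limit; pick t minimizing total production cost.

def production_cost(q, a, b):
    """Integral of the marginal cost a + b*q from 0 to q."""
    return a * q + 0.5 * b * q * q

def best_transfer(demand1, demand2, curve1, curve2, limit, step=0.1):
    best_t, best_cost = 0.0, float("inf")
    t = -limit
    while t <= limit + 1e-9:
        cost = (production_cost(demand1 + t, *curve1)
                + production_cost(demand2 - t, *curve2))
        if cost < best_cost:
            best_t, best_cost = t, cost
        t += step
    return best_t, best_cost

# Cheap area 1 (MC = 10 + 0.5q), expensive area 2 (MC = 30 + 0.5q),
# 40 MW demand in each area, 20 MW intertie limit.
t, cost = best_transfer(40, 40, (10, 0.5), (30, 0.5), 20)
```

At the optimum the marginal costs equalize (here exactly at the 20 MW intertie limit), which is the mechanism behind the abstract's observation that optimal transfers make prices uniform except where constraints bind.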
NASA Astrophysics Data System (ADS)
Salami, Adebayo Wahab; Sule, Bolaji Fatai; Adunkpe, Tope Lacroix; Ayanshola, Ayanniyi Mufutau; Bilewu, Solomon Olakunle
2017-03-01
Optimization models have been developed to maximize annual energy generation from the Doma dam, subject to the constraint of releases for irrigation, ecological purposes, the water supply, the maximum yield from the reservoir and reservoir storage. The model was solved with LINGO software for various mean annual inflow exceedence probabilities. Two scenarios of hydropower retrofitting were considered. Scenario 1, with the reservoir inflows at 50%, 75%, and 90% probabilities of exceedence, gives the total annual hydropower as 0.531 MW, 0.450 MW and 0.291 MW, respectively. The corresponding values for scenario 2 were 0.615 MW, 0.507 MW, and 0.346 MW respectively. The study also considered increasing the reservoir's live storage to 32.63 Mm3 by taking part of the flood storage so that the maximum draft increases to 7 Mm3. With this upper limit of storage and draft, with reservoir inflows of 50%, 75% and 90% probabilities of exceedence, the hydropower generated increased to 0.609 MW, 0.540 MW, and 0.347 MW respectively for the scenario 1 arrangement, while those of scenario 2 increased to 0.699 MW, 0.579 MW and 0.406 MW respectively. The results indicate that the Doma Dam is suitable for the production of hydroelectric power and that its generation potential is between 0.61 MW and 0.70 MW.
Gu, Yingxin; Wylie, Bruce K.
2015-01-01
Cultivating annual row crops in high topographic relief waterway buffers has negative environmental effects and can be environmentally unsustainable. Growing perennial grasses such as switchgrass (Panicum virgatum L.) for biomass (e.g., cellulosic biofuel feedstocks) instead of annual row crops in these high relief waterway buffers can improve local environmental conditions (e.g., reduce soil erosion and improve water quality through lower use of fertilizers and pesticides) and ecosystem services (e.g., minimize drought and flood impacts on production; improve wildlife habitat, plant vigor, and nitrogen retention due to post-senescence harvest for cellulosic biofuels; and serve as carbon sinks). The main objectives of this study are to: (1) identify cropland areas with high topographic relief (high runoff potentials) and high switchgrass productivity potential in eastern Nebraska that may be suitable for growing switchgrass, and (2) estimate the total switchgrass production gain from the potential biofuel areas. Results indicate that about 140,000 hectares of waterway buffers in eastern Nebraska are suitable for switchgrass development and the total annual estimated switchgrass biomass production for these suitable areas is approximately 1.2 million metric tons. The resulting map delineates high topographic relief croplands and provides useful information to land managers and biofuel plant investors to make optimal land use decisions regarding biofuel crop development and ecosystem service optimization in eastern Nebraska.
Pfaller, Joseph B; Bjorndal, Karen A; Chaloupka, Milani; Williams, Kristina L; Frick, Michael G; Bolten, Alan B
2013-01-01
Assessments of population trends based on time-series counts of individuals are complicated by imperfect detection, which can lead to serious misinterpretations of data. Population trends of threatened marine turtles worldwide are usually based on counts of nests or nesting females. We analyze 39 years of nest-count, female-count, and capture-mark-recapture (CMR) data for nesting loggerhead turtles (Caretta caretta) on Wassaw Island, Georgia, USA. Annual counts of nests and females, not corrected for imperfect detection, yield significant, positive trends in abundance. However, multistate open robust design modeling of CMR data that accounts for changes in imperfect detection reveals that the annual abundance of nesting females has remained essentially constant over the 39-year period. The dichotomy could result from improvements in surveys or increased within-season nest-site fidelity in females, either of which would increase detection probability. For the first time in a marine turtle population, we compare results of population trend analyses that do and do not account for imperfect detection and demonstrate the potential for erroneous conclusions. Past assessments of marine turtle population trends based exclusively on count data should be interpreted with caution and re-evaluated when possible. These concerns apply equally to population assessments of all species with imperfect detection.
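The imperfect-detection point above is easy to demonstrate numerically: raw counts can trend upward even when true abundance is flat, simply because detection probability improves over time. The abundances and detection probabilities below are hypothetical, not the Wassaw Island estimates.

```python
# Constant true abundance, improving surveys: the raw counts show a
# spurious upward trend; dividing each count by its detection
# probability recovers the flat series (all numbers hypothetical).

true_females = [200] * 5                        # constant true abundance
detection_p = [0.50, 0.60, 0.70, 0.80, 0.90]    # surveys improve yearly

raw_counts = [round(n * p) for n, p in zip(true_females, detection_p)]
corrected = [c / p for c, p in zip(raw_counts, detection_p)]
```

The raw series rises from 100 to 180 while the corrected series is flat at 200, which is exactly the dichotomy between the naive count trend and the CMR-based result reported in the abstract; the hard part in practice is that detection probability must itself be estimated, e.g. from capture-mark-recapture data.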
An experimental sample of the field gamma-spectrometer based on solid state Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
Design of optical-electronic devices and systems involves selecting technical solutions that are optimal, under the given initial requirements and conditions, according to certain criteria. The defining characteristic of an optical-electronic system (OES) for any purpose is its detection threshold; the required functional quality of the device or system is achieved on the basis of this property. The optimization criteria and methods must therefore be subordinated to the goal of best detectability, which generally reduces to the problem of optimal selection of expected (predetermined) signals under predetermined observation conditions. Thus the main purpose of optimizing the system with respect to its detectability is the choice of circuits and components that provide the most effective selection of a target.
Effects of loss on the phase sensitivity with parity detection in an SU(1,1) interferometer
NASA Astrophysics Data System (ADS)
Li, Dong; Yuan, Chun-Hua; Yao, Yao; Jiang, Wei; Li, Mo; Zhang, Weiping
2018-05-01
We theoretically study the effects of loss on the phase sensitivity of an SU(1,1) interferometer with parity detection for various input states. We show that although the sensitivity of phase estimation decreases in the presence of loss, it can still beat the shot-noise limit when the loss is small. To examine the performance of parity detection, a comparison is performed among homodyne detection, intensity detection, and parity detection. Compared with homodyne detection and intensity detection, parity detection has a slightly better optimal phase sensitivity in the absence of loss, but a worse optimal phase sensitivity under a significant amount of loss for a one-coherent-state or coherent $\otimes$ squeezed-state input.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rood, Arthur S.; Sondrup, A. Jeffrey
This report presents an evaluation of a hypothetical INL Site monitoring network and the existing INL air monitoring network using frequency of detection methods. The hypothetical network was designed to address the requirement in 40 CFR Part 61, Subpart H (2006) that “emissions of radionuclides to ambient air from U.S. DOE facilities shall not exceed those amounts that would cause any member of the public to receive in any year an effective dose equivalent exceeding 10 mrem/year.” To meet the requirement for monitoring only, “radionuclide releases that would result in an effective dose of 10% of the standard shall be readily detectable and distinguishable from background.” Thus, the hypothetical network consists of air samplers placed at residence locations that surround INL and at other locations where onsite livestock grazing takes place. Two exposure scenarios were used in this evaluation: a resident scenario and a shepherd/rancher scenario. The resident was assumed to be continuously present at their residence while the shepherd/rancher was assumed to be present 24 hours at a fixed location on the grazing allotment. Important radionuclides were identified from annual INL radionuclide National Emission Standards for Hazardous Pollutants reports. Important radionuclides were defined as those that potentially contribute 1% or greater to the annual total dose at the radionuclide National Emission Standards for Hazardous Pollutants maximally exposed individual location and include H-3, Am-241, Pu-238, Pu-239, Cs-137, Sr-90, and I-131. For this evaluation, the network performance objective was set at achieving a frequency of detection greater than or equal to 95%. Results indicated that the hypothetical network for the resident scenario met all performance objectives for H-3 and I-131 and most performance objectives for Cs-137 and Sr-90. However, all actinides failed to meet the performance objectives for most sources.
The shepherd/rancher scenario showed that air samplers placed around the facilities every 22.5 degrees were very effective in detecting releases, but this arrangement is not practical or cost effective. However, it was shown that a few air samplers placed in the prevailing wind direction around each facility could achieve the performance objective of a frequency of detection greater than or equal to 95% for the shepherd/rancher scenario. The results also indicate some of the current sampler locations have little or no impact on the network frequency of detection and could be removed from the network with no appreciable deterioration of performance. Results show that with some slight modifications to the existing network (i.e., additional samplers added north and south of the Materials and Fuels Complex and ineffective samplers removed), the network would achieve performance objectives for all sources for both the resident and shepherd/rancher scenarios.
Investigating the detection of multi-homed devices independent of operating systems
2017-09-01
Timestamp data were used to estimate clock skews using linear regression and linear optimization methods. Analysis revealed that detection depends on the consistency of the estimated clock skew. Through vertical testing, it was also shown that clock skew consistency depends on the installed...
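The regression step mentioned above can be sketched simply: regress observed timestamp offsets against measurement time, and the slope is the clock skew. This is a plain least-squares sketch on synthetic, noise-free data; published skew-fingerprinting work often uses linear programming to fit the lower envelope of the offsets instead.

```python
# Least-squares estimate of clock skew from (time, offset) samples.
# The slope of offset vs. time is the skew, e.g. in seconds per second.

def clock_skew(times, offsets):
    n = len(times)
    mt = sum(times) / n
    mo = sum(offsets) / n
    num = sum((t - mt) * (o - mo) for t, o in zip(times, offsets))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Synthetic device whose clock drifts 50 microseconds per second (50 ppm).
times = [float(t) for t in range(100)]
offsets = [50e-6 * t for t in times]
skew = clock_skew(times, offsets)
```

Two interfaces of the same multi-homed device should yield mutually consistent skew estimates, while distinct devices generally drift at different rates; the abstract's point is that detection hinges on how consistent these estimates are.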
Analysis of trends in selected streamflow statistics for the Concho River Basin, Texas, 1916-2009
Barbie, Dana L.; Wehmeyer, Loren L.; May, Jayne E.
2012-01-01
Six U.S. Geological Survey streamflow-gaging stations were selected for analysis. Streamflow-gaging station 08128000 South Concho River at Christoval has downward trends for annual maximum daily discharge and annual instantaneous peak discharge for the combined period 1931-95, 2002-9. Streamflow-gaging station 08128400 Middle Concho River above Tankersley has downward trends for annual maximum daily discharge and annual instantaneous peak discharge for the combined period 1962-95, 2002-9. Streamflow-gaging station 08128500 Middle Concho River near Tankersley has no significant trends in the streamflow statistics considered for the period 1931-60. Streamflow-gaging station 08134000 North Concho River near Carlsbad has downward trends for annual mean daily discharge, annual 7-day minimum daily discharge, annual maximum daily discharge, and annual instantaneous peak discharge for the period 1925-2009. Streamflow-gaging stations 08136000 Concho River at San Angelo and 08136500 Concho River at Paint Rock have downward trends for 1916-2009 for all streamflow statistics calculated, but streamflow-gaging station 08136000 Concho River at San Angelo has an upward trend for annual maximum daily discharge during 1964-2009. The downward trends detected during 1916-2009 for the Concho River at San Angelo are not unexpected because of three reservoirs impounding and profoundly regulating streamflow.
Identifying trends in sediment discharge from alterations in upstream land use
Parker, R.S.; Osterkamp, W.R.
1995-01-01
Environmental monitoring is a primary reason for collecting sediment data. One emphasis of this monitoring is the identification of trends in suspended sediment discharge. A stochastic equation was used to generate time series of annual suspended sediment discharges using statistics from gaging stations with drainage areas between 1606 and 1,805,230 km2. Annual sediment discharge was increased linearly to yield a given increase at the end of a fixed period, and trend statistics were computed for each simulation series using Kendall's tau (at the 0.05 significance level). A parameter was calculated from two factors that control trend detection time: (a) the magnitude of change in sediment discharge, and (b) the natural variability of sediment discharge. In this analysis, the time to detect a trend at most stations is well over 100 years for a 20% increase in sediment discharge. Further research is needed to assess the sensitivity of detecting trends at sediment stations.
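The trend statistic used above can be sketched with the Mann-Kendall S statistic, which is Kendall's tau up to normalization: count concordant minus discordant pairs over all pairs of years. The series below is synthetic, with an invented linear increase superimposed on random noise, in the spirit of the simulation described.

```python
import random

# Mann-Kendall S statistic: sum over all pairs (i < j) of the sign of
# series[j] - series[i]. S > 0 suggests an upward trend; dividing by
# n*(n-1)/2 gives Kendall's tau.
def mann_kendall_s(series):
    s = 0
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

random.seed(1)
flat = [random.gauss(100.0, 10.0) for _ in range(50)]   # no trend, noisy
trending = [v + 1.5 * t for t, v in enumerate(flat)]    # +1.5 units/yr added

s_flat = mann_kendall_s(flat)
s_trend = mann_kendall_s(trending)
```

Because natural variability (the noise standard deviation here) competes with the imposed trend, a small percentage increase can take many simulated years to reach significance, which is the paper's central point about detection time.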
Optimizing computer-aided colonic polyp detection for CT colonography by evolving the Pareto front
Li, Jiang; Huang, Adam; Yao, Jack; Liu, Jiamin; Van Uitert, Robert L.; Petrick, Nicholas; Summers, Ronald M.
2009-01-01
A multiobjective genetic algorithm is designed to optimize a computer-aided detection (CAD) system for identifying colonic polyps. Colonic polyps appear as elliptical protrusions on the inner surface of the colon. Curvature-based features for colonic polyp detection have proved to be successful in several CT colonography (CTC) CAD systems. Our CTC CAD program uses a sequential classifier to form initial polyp detections on the colon surface. The classifier utilizes a set of thresholds on curvature-based features to cluster suspicious colon surface regions into polyp candidates. The thresholds were previously chosen experimentally by using feature histograms. The chosen thresholds were effective for detecting polyps sized 10 mm or larger in diameter. However, many medium-sized polyps, 6–9 mm in diameter, were missed in the initial detection procedure. In this paper, the task of finding optimal thresholds was formulated as a multiobjective optimization problem, and a genetic algorithm was utilized to solve it by evolving the Pareto front of the Pareto optimal set. The new CTC CAD system was tested on 792 patients. The sensitivities of the optimized system improved significantly, from 61.68% to 74.71% with an increase of 13.03% (95% CI [6.57%, 19.5%], p=7.78×10−5) for the size category of 6–9 mm polyps, from 65.02% to 77.4% with an increase of 12.38% (95% CI [6.23%, 18.53%], p=7.95×10−5) for polyps 6 mm or larger, and from 82.2% to 90.58% with an increase of 8.38% (95% CI [0.75%, 16%], p=0.03) for polyps 8 mm or larger at comparable false positive rates. The sensitivities of the optimized system are nearly equivalent to those of expert radiologists. PMID:19235388
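The Pareto-front idea driving the threshold search above can be sketched in a few lines: among candidate operating points, keep only those not dominated on both objectives at once (here, maximize sensitivity while minimizing false-positive rate). The candidate points below are invented; the paper's GA evolves such a front over the classifier's threshold space rather than filtering a fixed list.

```python
# Non-dominated filtering of (sensitivity, false-positive-rate) points:
# a point is dominated if some other point is at least as sensitive AND
# has no more false positives.

def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Invented candidate operating points (sensitivity, FPs per patient).
candidates = [(0.62, 3.0), (0.70, 4.0), (0.68, 5.0), (0.75, 4.5), (0.75, 6.0)]
front = pareto_front(candidates)
```

Points like (0.68, 5.0) drop out because (0.70, 4.0) beats them on both objectives; the surviving front is the set of defensible sensitivity/false-positive trade-offs from which an operating point is chosen.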
Factors affecting species distribution predictions: A simulation modeling experiment
Gordon C. Reese; Kenneth R. Wilson; Jennifer A. Hoeting; Curtis H. Flather
2005-01-01
Geospatial species sample data (e.g., records with location information from natural history museums or annual surveys) are rarely collected optimally, yet are increasingly used for decisions concerning our biological heritage. Using computer simulations, we examined factors that could affect the performance of autologistic regression (ALR) models that predict species...
A major challenge for society in the 21st century will be replacement, design and optimal management of urban infrastructure. It is estimated that the current world wide demand for infrastructure investment is approximately three trillion US dollars annually. Many developing coun...
USDA-ARS?s Scientific Manuscript database
Nitrogen fertilization of forage grasses is critical for optimizing biomass and utilization of manure soil nutrients. Field studies were conducted in 2007-09 to determine the effects of spring N fertilization on amelioration of high soil P when cool-season, annual ryegrass (Lolium multiflorum L.) is...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-08
..., scientifically-based guidance, training, program evaluation, and technical assistance. B. Research/Cooperative... officials. NACCHO values guide staff and leadership in work to achieve optimal health for all through an... annual appropriations and successful performance. III. Paper Application, Registration, and Submission...
Alejo-Alvarez, Luz; Guzmán-Fierro, Víctor; Fernández, Katherina; Roeckel, Marlene
2016-11-01
A full-scale process for the treatment of 80 tons per day of poultry manure was designed and optimized. A total ammonia nitrogen (TAN) balance was performed at steady state, considering the stoichiometry and the kinetic data from the anaerobic digestion (AD) and the anaerobic ammonia oxidation. The equipment, reactor design, investment costs, and operational costs were considered. The volume and cost objective functions optimized the process in terms of three variables: the water recycle ratio, the protein conversion during AD, and the TAN conversion in the process. The processes were compared with and without water recycle; savings of 70% and 43% in the annual fresh water consumption and the heating costs, respectively, were achieved. The optimal process complies with the Chilean environmental legislation limit of 0.05 g total nitrogen/L.
Sears, Lindsay E; Coberley, Carter R; Pope, James E
2016-07-01
The aim of this study was to examine the direct and mediated effects of a telephonic health coaching program on changes to healthy behaviors, life satisfaction, and optimism. This longitudinal correlational study of 4881 individuals investigated simple and mediated relationships between participation in a telephonic health risk coaching program and outcomes from three annual Well-being Assessments. Program participation was directly related to improvements in healthy behaviors, life satisfaction and optimism, and indirect effects of coaching on these variables concurrently and over a one-year time lag were also supported. Given previous research that improvements to life satisfaction, optimism, and health behaviors are valuable for individuals, employers, and communities, a clearer understanding of intervention approaches that may impact these outcomes simultaneously can drive greater program effectiveness and value on investment.
Optimization of the MINERVA Exoplanet Search Strategy via Simulations
NASA Astrophysics Data System (ADS)
Nava, Chantell; Johnson, Samson; McCrady, Nate; Minerva
2015-01-01
Detection of low-mass exoplanets requires high spectroscopic precision and high observational cadence. MINERVA is a dedicated observatory capable of sub meter-per-second radial velocity precision. As a dedicated observatory, MINERVA can observe with every-clear-night cadence that is essential for low-mass exoplanet detection. However, this cadence complicates the determination of an optimal observing strategy. We simulate MINERVA observations to optimize our observing strategy and maximize exoplanet detections. A dispatch scheduling algorithm provides observations of MINERVA targets every day over a three-year observing campaign. An exoplanet population with a distribution informed by Kepler statistics is assigned to the targets, and radial velocity curves induced by the planets are constructed. We apply a correlated noise model that realistically simulates stellar astrophysical noise sources. The simulated radial velocity data is fed to the MINERVA planet detection code and the expected exoplanet yield is calculated. The full simulation provides a tool to test different strategies for scheduling observations of our targets and optimizing the MINERVA exoplanet search strategy.
Weiss, Lee; Thé, Jesse; Winter, Jennifer; Gharabaghi, Bahram
2018-04-18
Excessive phosphorus loading to inland freshwater lakes around the globe has resulted in nuisance plant growth along the waterfronts, degraded habitat for cold water fisheries, and impaired beaches, marinas and waterfront property. The direct atmospheric deposition of phosphorus can be a significant contributing source to inland lakes. The atmospheric deposition monitoring program for Lake Simcoe, Ontario indicates roughly 20% of the annual total phosphorus load (2010-2014 period) is due to direct atmospheric deposition (both wet and dry deposition) on the lake. This novel study presents a first-time application of the Genetic Algorithm (GA) methodology to optimize the application of best management practices (BMPs) related to agriculture and mobile sources to achieve atmospheric phosphorus reduction targets and restore the ecological health of the lake. The methodology takes into account the spatial distribution of the emission sources in the airshed, the complex atmospheric long-range transport and deposition processes, the cost and efficiency of popular management practices, and social constraints related to the adoption of BMPs. The optimization scenarios suggest that optimal overall capital investments of approximately $2M, $4M, and $10M annually can achieve roughly 3, 4 and 5 tonnes reduction in atmospheric P load to the lake, respectively. The exponential trend indicates diminishing returns for investment beyond roughly $3M per year, and that focusing much of this investment in the upwind, nearshore area will significantly impact deposition to the lake. The optimization is based on a combination of the lowest-cost, most beneficial and socially acceptable management practices, supporting a science-informed strategy for promoting BMP implementation and adoption. The geospatial aspect of the optimization (i.e., proximity and location with respect to the lake) will help land managers to encourage the use of these targeted best practices in areas that will benefit most from the phosphorus reduction approach.
Al-Aqeeli, Yousif H; Lee, T S; Abd Aziz, S
2016-01-01
Achieving optimal hydropower generation from the operation of water reservoirs is a complex problem. The purpose of this study was to formulate and improve a genetic algorithm optimization model (GAOM) in order to maximize annual hydropower generation for a single reservoir. For this purpose, two simulation algorithms were drafted and applied independently in the GAOM over 20 scenarios (years) of operation of the Mosul reservoir, northern Iraq. The first algorithm was based on the traditional simulation of reservoir operation, whilst the second algorithm (Salg) enhanced the GAOM by changing the population values of the GA through a new simulation process of reservoir operation. The performances of these two algorithms were evaluated by comparing their optimal values of annual hydropower generation over the 20 operating scenarios. The GAOM achieved an increase in hydropower generation in 17 scenarios using these two algorithms, with the Salg being superior in all scenarios. All of this was done prior to adding evaporation (Ev) and precipitation (Pr) to the water balance equation. Next, the GAOM using the Salg was applied taking into consideration the volumes of these two parameters. In this case, the optimal values obtained from the GAOM were compared, first with their counterparts found using the same algorithm without taking Ev and Pr into consideration, and second with the observed values. The first comparison showed that the optimal values obtained in this case decreased in all scenarios, whilst maintaining good results compared with the observed values in the second comparison. The results proved the effectiveness of the Salg in increasing hydropower generation through the enhanced GAOM approach. In addition, the results indicated the importance of taking Ev and Pr into account in the modelling of reservoir operation.
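The water balance equation at the core of the simulation algorithms above can be sketched as a per-period storage update: storage changes by inflow minus release, adjusted for evaporation and precipitation, then clipped to the reservoir's storage bounds with any excess spilled. The bounds and flows below are illustrative (units: Mm3), not Mosul reservoir data.

```python
# One mass-balance step of a reservoir simulation (illustrative numbers).
# s_min is dead storage, s_max is full live storage; excess water spills.

def step(storage, inflow, release, evap, precip, s_min, s_max):
    s = storage + inflow - release - evap + precip
    spill = max(0.0, s - s_max)
    s = min(max(s, s_min), s_max)
    return s, spill

s, spill = 500.0, 0.0
for inflow in [120.0, 300.0, 80.0]:          # three periods of inflow
    s, sp = step(s, inflow, release=90.0, evap=15.0, precip=5.0,
                 s_min=200.0, s_max=700.0)
    spill += sp
```

A GA such as the one in the study would evolve the release sequence, with each candidate evaluated by running this kind of simulation and scoring the hydropower generated; omitting the evap and precip terms biases the simulated storage, which is why including them changed the optimal values.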
Riedel, Timothy E; Zimmer-Faust, Amity G; Thulsiraj, Vanessa; Madi, Tania; Hanley, Kaitlyn T; Ebentier, Darcy L; Byappanahalli, Muruleedhara; Layton, Blythe; Raith, Meredith; Boehm, Alexandria B; Griffith, John F; Holden, Patricia A; Shanks, Orin C; Weisberg, Stephen B; Jay, Jennifer A
2014-04-01
Some molecular methods for tracking fecal pollution in environmental waters have both PCR and quantitative PCR (qPCR) assays available for use. To assist managers in deciding whether to implement newer qPCR techniques in routine monitoring programs, we compared detection limits (LODs) and costs of PCR and qPCR assays with identical targets that are relevant to beach water quality assessment. For human-associated assays targeting the Bacteroidales HF183 genetic marker, qPCR LODs were 70 times lower, and there was no effect of target matrix (artificial freshwater, environmental creek water, and environmental marine water) on PCR or qPCR LODs. The PCR startup and annual costs were the lowest, while the PCR per-reaction cost was 62% lower than that of the TaqMan-based qPCR and 180% higher than that of the SYBR-based qPCR. For gull-associated assays, there was no significant difference between PCR and qPCR LODs, target matrix did not affect PCR or qPCR LODs, and PCR startup, annual, and per-reaction costs were lower. Upgrading to qPCR involves greater startup and annual costs, but this increase may be justified in the case of the human-associated assays with lower detection limits and reduced cost per sample. Copyright © 2014 Elsevier Ltd. All rights reserved.
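The startup-versus-per-reaction trade-off described above is a simple break-even calculation. The sketch below uses made-up dollar figures (not the study's costs) to show how annual sample volume decides when a higher fixed cost pays off.

```python
def annual_cost(fixed, per_reaction, n_reactions):
    """Total yearly cost of an assay: fixed (amortized startup + annual) plus variable."""
    return fixed + per_reaction * n_reactions

def break_even(s1, r1, s2, r2):
    """Reactions per year at which method 2 (higher fixed cost, lower per-reaction
    cost) becomes cheaper than method 1; None if it never does."""
    if r1 <= r2:
        return None
    return (s2 - s1) / (r1 - r2)

# Hypothetical figures: conventional PCR (method 1) vs a TaqMan qPCR (method 2).
n_star = break_even(s1=500.0, r1=12.0, s2=4000.0, r2=4.5)
```

Above the break-even volume `n_star`, the lower per-reaction cost of method 2 outweighs its startup premium.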
Rein, David B; Wittenborn, John S; Zhang, Xinzhi; Allaire, Benjamin A; Song, Michael S; Klein, Ronald; Saaddine, Jinan B
2011-01-01
Objective: To determine whether biennial eye evaluation or telemedicine screening are cost-effective alternatives to current recommendations for the estimated 10 million people aged 30–84 with diabetes but no or minimal diabetic retinopathy. Data Sources: United Kingdom Prospective Diabetes Study, National Health and Nutrition Examination Survey, American Academy of Ophthalmology Preferred Practice Patterns, Medicare Payment Schedule. Study Design: Cost-effectiveness Monte Carlo simulation. Data Collection/Extraction Methods: Literature review, analysis of existing surveys. Principal Findings: Biennial eye evaluation was the most cost-effective treatment option when the ability to detect other eye conditions was included in the model. Telemedicine was most cost-effective when other eye conditions were not considered or when telemedicine was assumed to detect refractive error. The current annual eye evaluation recommendation was costly compared with either treatment alternative. Self-referral was most cost-effective up to a willingness to pay (WTP) of U.S.$37,600, with either biennial or annual evaluation most cost-effective at higher WTP levels. Conclusions: Annual eye evaluations are costly and add little benefit compared with either plausible alternative. More research on the ability of telemedicine to detect other eye conditions is needed to determine whether it is more cost-effective than biennial eye evaluation. PMID:21492158
"Utilizing" signal detection theory.
Lynn, Spencer K; Barrett, Lisa Feldman
2014-09-01
What do inferring what a person is thinking or feeling, judging a defendant's guilt, and navigating a dimly lit room have in common? They involve perceptual uncertainty (e.g., a scowling face might indicate anger or concentration, for which different responses are appropriate) and behavioral risk (e.g., a cost to making the wrong response). Signal detection theory describes these types of decisions. In this tutorial, we show how incorporating the economic concept of utility allows signal detection theory to serve as a model of optimal decision making, going beyond its common use as an analytic method. This utility approach to signal detection theory clarifies otherwise enigmatic influences of perceptual uncertainty on measures of decision-making performance (accuracy and optimality) and on behavior (an inverse relationship between bias magnitude and sensitivity optimizes utility). A "utilized" signal detection theory offers the possibility of expanding the phenomena that can be understood within a decision-making framework. © The Author(s) 2014.
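The paper's central move, adding utility to signal detection theory, can be illustrated with the standard equal-variance Gaussian model. The payoffs below are arbitrary illustrations; the point is that the utility-maximizing criterion shifts to a liberal placement when misses become costly.

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_utility(c, d=1.0, p_signal=0.5,
                     u_hit=1.0, u_miss=-1.0, u_fa=-1.0, u_cr=0.5):
    """Expected utility of responding 'signal' when x > c, for noise N(0,1)
    and signal N(d,1), with illustrative payoffs for each outcome."""
    p_hit = 1.0 - Phi(c - d)
    p_fa = 1.0 - Phi(c)
    return (p_signal * (p_hit * u_hit + (1.0 - p_hit) * u_miss)
            + (1.0 - p_signal) * (p_fa * u_fa + (1.0 - p_fa) * u_cr))

def best_criterion(**kw):
    """Grid search for the utility-maximizing criterion placement."""
    grid = [i / 100.0 for i in range(-300, 301)]
    return max(grid, key=lambda c: expected_utility(c, **kw))

c_neutral = best_criterion()                 # symmetric miss/false-alarm costs
c_miss_costly = best_criterion(u_miss=-4.0)  # misses four times as costly
```

The grid maximum agrees with the closed form c* = d/2 + ln(beta)/d, where beta is the odds-weighted utility ratio, so the costly-miss criterion lands well below the neutral one.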
Electrocardiographic signals and swarm-based support vector machine for hypoglycemia detection.
Nuryani, Nuryani; Ling, Steve S H; Nguyen, H T
2012-04-01
Cardiac arrhythmia relating to hypoglycemia is suggested as a cause of death in diabetic patients. This article introduces electrocardiographic (ECG) parameters for artificially induced hypoglycemia detection. In addition, a hybrid technique of swarm-based support vector machine (SVM) is introduced for hypoglycemia detection using the ECG parameters as inputs. In this technique, a particle swarm optimization (PSO) is proposed to optimize the SVM to detect hypoglycemia. In an experiment using medical data of patients with Type 1 diabetes, the introduced ECG parameters show significant contributions to the performance of the hypoglycemia detection and the proposed detection technique performs well in terms of sensitivity and specificity.
Optimization of a chemical identification algorithm
NASA Astrophysics Data System (ADS)
Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren
2010-04-01
A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
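Two of the figures of merit compared in the paper are easy to state concretely; the Matthews Correlation Coefficient, for instance, condenses a 2-category confusion matrix into one value that trades detection probability off against false alarms:

```python
import math

def matthews_cc(tp, fp, fn, tn):
    """Matthews Correlation Coefficient: +1 perfect, 0 chance-level, -1 inverted."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

Because MCC uses all four confusion-matrix cells, it remains informative when target-present and target-absent measurements are imbalanced, which is one reason to prefer it as an optimization objective.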
Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil
Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W
2016-01-01
Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.
Fluorescence detection of dental calculus
NASA Astrophysics Data System (ADS)
Gonchukov, S.; Biryukova, T.; Sukhinina, A.; Vdovin, Yu
2010-11-01
This work is devoted to the optimization of fluorescence dental calculus diagnostics in optical spectrum. The optimal wavelengths for fluorescence excitation and registration are determined. Two spectral ranges 620 - 645 nm and 340 - 370 nm are the most convenient for supra- and subgingival calculus determination. The simple implementation of differential method free from the necessity of spectrometer using was investigated. Calculus detection reliability in the case of simple implementation is higher than in the case of spectra analysis at optimal wavelengths. The use of modulated excitation light and narrowband detection of informative signal allows us to decrease essentially its diagnostic intensity even in comparison with intensity of the low level laser dental therapy.
Optimal Sensor Location Design for Reliable Fault Detection in Presence of False Alarms
Yang, Fan; Xiao, Deyun; Shah, Sirish L.
2009-01-01
To improve fault detection reliability, sensor location should be designed according to an optimization criterion with constraints imposed by issues of detectability and identifiability. Reliability requires the minimization of undetectability and of the false alarm probability due to random factors in sensor readings, which is related not only to the sensor readings themselves but also to fault propagation. This paper introduces reliability criteria based on the missed/false alarm probability of each sensor and on the system topology or connectivity derived from the directed graph. The algorithm for the optimization problem is presented as a heuristic procedure. Finally, the proposed method is illustrated on a boiler system. PMID:22291524
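As a toy illustration of the topology-aware part of such a criterion, the sketch below places sensors on a hypothetical fault-propagation digraph: every fault must reach an instrumented node, and among covering candidates the lowest false-alarm sensors are preferred. The graph and probabilities are invented for the example, not taken from the paper's boiler case.

```python
# Hypothetical fault-propagation digraph: edges point from a fault/variable to
# the nodes it affects downstream.
EDGES = {"f1": ["x1"], "f2": ["x2"], "x1": ["x3"], "x2": ["x3"], "x3": ["x4"]}
FAULTS = ["f1", "f2"]
CANDIDATES = {"x1": 0.05, "x2": 0.08, "x3": 0.02, "x4": 0.10}  # false-alarm prob.

def reachable(start):
    """Depth-first search: all nodes a fault can propagate to."""
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        for m in EDGES.get(n, []):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def greedy_placement():
    """Greedily add the lowest false-alarm sensor until every fault is covered."""
    uncovered = {f: reachable(f) for f in FAULTS}
    chosen = []
    for s in sorted(CANDIDATES, key=CANDIDATES.get):
        if not uncovered:
            break
        hit = [f for f, r in uncovered.items() if s in r]
        if hit:
            chosen.append(s)
            for f in hit:
                del uncovered[f]
    return chosen

placement = greedy_placement()
```

In this tiny graph both faults propagate through the shared node x3, so the single low-false-alarm sensor there covers everything; the paper's heuristic additionally weighs missed-alarm probabilities along each propagation path.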
NASA Astrophysics Data System (ADS)
Zakynthinaki, M. S.; Barakat, R. O.; Cordente Martínez, C. A.; Sampedro Molinuevo, J.
2011-03-01
The stochastic optimization method ALOPEX IV has been successfully applied to the problem of detecting possible changes in the maternal heart rate kinetics during pregnancy. To this end, maternal heart rate data were recorded before, during and after gestation, during sessions of exercise of constant mild intensity; ALOPEX IV stochastic optimization was used to calculate the parameter values that optimally fit a dynamical systems model to the experimental data. The results not only demonstrate the effectiveness of ALOPEX IV stochastic optimization, but also have important implications in the area of exercise physiology, as they reveal important changes in maternal cardiovascular dynamics as a result of pregnancy.
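The ALOPEX IV variant used in the paper is not detailed in this abstract, but the classic ALOPEX update conveys the flavor: each parameter is nudged according to the correlation between its own last change and the last change in the fitting error, plus exploratory noise. The least-squares line fit below is only a stand-in for the paper's dynamical-systems model.

```python
import random

def model_error(params, data):
    """Stand-in cost: least-squares error of a line fit y = a*x + b."""
    a, b = params
    return sum((y - (a * x + b)) ** 2 for x, y in data)

def alopex_fit(data, iters=5000, gamma=0.02, noise=0.01, seed=5):
    rng = random.Random(seed)
    prev_p = [0.0, 0.0]
    prev_e = model_error(prev_p, data)
    p = [v + rng.uniform(-noise, noise) for v in prev_p]   # first move is random
    for _ in range(iters):
        e = model_error(p, data)
        dp = [p[i] - prev_p[i] for i in range(2)]
        de = e - prev_e
        prev_p, prev_e = p[:], e
        # Reverse a parameter's direction when its last change correlated with
        # an error increase; otherwise keep going. Noise sustains exploration.
        p = [p[i] - gamma * (1.0 if dp[i] * de > 0 else -1.0)
             + rng.uniform(-noise, noise) for i in range(2)]
    return p

data = [(float(x), 2.0 * x + 1.0) for x in range(10)]
a, b = alopex_fit(data)
```

The update needs only the scalar cost, no gradients, which is what makes ALOPEX-style methods attractive for fitting black-box physiological models.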
2009-04-01
active military personnel and veterans, are affected by three major blinding diseases of the retina and optic nerve: diabetic retinopathy, age-related...disease is detected early. New advanced detection methods are available, but are only interpretable by very experienced specialists. The goal of this...consist of several steps [1-3]: feature detection; transform model estimation; optimization function design; and optimization strategies. We do not
Noncoherent detection of periodic signals
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1974-01-01
The optimal Bayes detector for a general periodic waveform having uniform delay and additive white Gaussian noise is examined. It is shown that the detector is much more complex than that for the well known cases of pure sine waves (i.e. classical noncoherent detection) and narrowband signals. An interpretation of the optimal processing is presented, and several implementations are discussed. The results have application to the noncoherent detection of optical square waves.
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
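For the known-signal/unknown-amplitude example mentioned above, one reasonable decision strategy (a sketch, not necessarily the paper's optimal rule) marginalizes the likelihood ratio over an amplitude prior for the detection decision, then reports the posterior-mean amplitude when a signal is declared. The discrete prior and unit-variance noise model are illustrative.

```python
import math

# Toy joint detection-estimation: scalar observation y = a + noise, noise ~ N(0,1),
# with amplitude a drawn from a discrete prior when a signal is present.
AMPLITUDES = [0.5, 1.0, 2.0]
PRIOR = [0.5, 0.3, 0.2]

def norm_pdf(y, mu):
    return math.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2.0 * math.pi)

def decide(y, threshold=1.0):
    """Marginal likelihood-ratio detector with posterior-mean amplitude estimate."""
    num = sum(p * norm_pdf(y, a) for p, a in zip(PRIOR, AMPLITUDES))
    lr = num / norm_pdf(y, 0.0)
    if lr <= threshold:
        return (False, None)               # signal-absent decision, no estimate
    post = [p * norm_pdf(y, a) for p, a in zip(PRIOR, AMPLITUDES)]
    z = sum(post)
    a_hat = sum(w * a for w, a in zip(post, AMPLITUDES)) / z
    return (True, a_hat)
```

Sweeping `threshold` traces out an EROC-style curve: expected estimation utility on true positives against the false-positive penalty.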
A Self-Aware Machine Platform in Manufacturing Shop Floor Utilizing MTConnect Data
2014-10-02
condition monitoring, and equipment time to failure prediction in manufacturing...Component Level Health Monitoring and Prediction: One of the characteristics of a self-aware machine is to be able to detect its components...the annual conference of the prognostics and health management society. Filzmoser, P., Garrett, R. G., & Reimann, C. (2005). Multivariate outlier
Targeting Ovarian Cancer with Porphysome Nanotechnology
2016-10-01
ORGANIZATION: University Health Network, Toronto, ON, Canada M5G 2C4. REPORT DATE: October 2016. TYPE OF REPORT: Annual. PREPARED FOR: U.S. Army...non-targeted Porphysomes for the detection of orthotopic ovarian lesions. Methods: Two ovarian tumour xenograft models are established with human SK
On Monday, September 18, 2017, the second annual Chasing Cancer Summit was held at the Washington Post headquarters in downtown Washington, D.C. The live event brought together a group of experts, including CCR’s Douglas Lowy, M.D., and Nirali Shah, M.D., for discussions on the latest developments in cancer detection and treatment.
Optimized enrichment for the detection of Escherichia coli O26 in French raw milk cheeses.
Savoye, F; Rozand, C; Bouvier, M; Gleizal, A; Thevenot, D
2011-06-01
Our main objective was to optimize the enrichment of Escherichia coli O26 in raw milk cheeses for their subsequent detection with a new automated immunological method. Ten enrichment broths were tested for the detection of E. coli O26. Two categories of experimentally inoculated raw milk cheeses, semi-hard uncooked cheese and 'Camembert' type cheese, were initially used to investigate the relative efficacy of the different enrichments. The enrichments that were considered optimal for the growth of E. coli O26 in these cheeses were then challenged with other types of raw milk cheeses. Buffered peptone water supplemented with cefixime-tellurite and acriflavin was shown to optimize the growth of E. coli O26 artificially inoculated into the cheeses tested. Despite the low inoculum level (1-10 CFU per 25 g) in the cheeses, E. coli O26 counts reached at least 5 × 10(4) CFU ml(-1) after 24 h of incubation at 41.5 °C in this medium. All the experimentally inoculated cheeses were found positive by the immunological method in the selected enrichment broth. Optimized E. coli O26 enrichment and rapid detection constitute the first steps of a complete procedure that could be used routinely to detect E. coli O26 in raw milk cheeses. © 2011 The Authors. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.
IEA agreement on the production and utilization of hydrogen: 2000 annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elam, Carolyn C.
2001-12-01
The 2000 annual report of the IEA Hydrogen Agreement contains an overview of the agreement, including its guiding principles, latest strategic plan, and a report from the Chairman, Mr. Neil P. Rossmeissl, U.S. Department of Energy. Overviews of the National Hydrogen Programs of nine member countries are given: Canada, Japan, Lithuania, the Netherlands, Norway, Spain, Sweden, Switzerland, and the United States. Task updates are provided on the following annexes: Annex 12 - Metal Hydrides and Carbon for Hydrogen Storage, Annex 13 - Design and Optimization of Integrated Systems, Annex 14 - Photoelectrolytic Production of Hydrogen, and Annex 15 - Photobiological Production of Hydrogen.
Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo, Wangda; McNeil, Andrew; Wetter, Michael
2011-09-06
We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
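The GPU-accelerated step is, at heart, a dense matrix product: stacking the hourly sky vectors into a matrix turns a year of matrix-vector products into one matrix-matrix multiply. A minimal pure-Python reference for that operation (with invented coefficients and a subsampled year so it runs quickly) looks like this:

```python
import random

# 146-patch sky vector, as in the paper's smaller case; the 8760-hour year is
# subsampled to 120 steps and all coefficients are random placeholders.
SENSORS, PATCHES, HOURS = 4, 146, 120
rng = random.Random(0)
DC = [[rng.random() for _ in range(PATCHES)] for _ in range(SENSORS)]   # daylight coefficients
SKY = [[rng.random() for _ in range(HOURS)] for _ in range(PATCHES)]    # sky vector per time step

def annual_illuminance(dc, sky):
    """result[s][h] = sum_p dc[s][p] * sky[p][h]: one matrix-matrix multiply."""
    steps = len(sky[0])
    out = [[0.0] * steps for _ in dc]
    for s, row in enumerate(dc):
        for p, coeff in enumerate(row):
            sky_p = sky[p]
            for h in range(steps):
                out[s][h] += coeff * sky_p[h]
    return out

ILLUM = annual_illuminance(DC, SKY)
```

The triple loop is exactly the work that OpenCL distributes across GPU threads; because each output element is independent, the product parallelizes trivially.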
Productivity costs in patients with refractory chronic rhinosinusitis.
Rudmik, Luke; Smith, Timothy L; Schlosser, Rodney J; Hwang, Peter H; Mace, Jess C; Soler, Zachary M
2014-09-01
Disease-specific reductions in patient productivity can lead to substantial economic losses to society. The purpose of this study was to: 1) define the annual productivity cost for a patient with refractory chronic rhinosinusitis (CRS) and 2) evaluate the relationship between degree of productivity cost and CRS-specific characteristics. Prospective, multi-institutional, observational cohort study. The human capital approach was used to define productivity costs. Annual absenteeism, presenteeism, and lost leisure time were quantified to define annual lost productive time (LPT). LPT was monetized using the annual daily wage rates obtained from the 2012 U.S. National Census and the 2013 U.S. Department of Labor statistics. A total of 55 patients with refractory CRS were enrolled. The mean work days lost related to absenteeism and presenteeism were 24.6 and 38.8 days per year, respectively. A total of 21.2 household days were lost per year related to daily sinus care requirements. The overall annual productivity cost was $10,077.07 per patient with refractory CRS. Productivity costs increased with worsening disease-specific QoL (r = 0.440; p = 0.001). Results from this study have demonstrated that the annual productivity cost associated with refractory CRS is $10,077.07 per patient. This substantial cost to society provides a strong incentive to optimize current treatment protocols and continue evaluating novel clinical interventions to reduce this cost. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
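A back-of-envelope version of the human capital calculation: monetize absenteeism at a daily wage, presenteeism at a discounted wage, and lost household days at a leisure rate. The rates and the 50% presenteeism discount below are assumptions for illustration, not the study's census-derived figures, so the total differs from the reported $10,077.07.

```python
DAILY_WAGE = 120.0      # assumed average daily wage (USD), not the study's value
LEISURE_RATE = 80.0     # assumed value of a lost household day (USD)

def annual_productivity_cost(absent_days, present_days, household_days,
                             presenteeism_discount=0.5):
    """Monetize lost productive time (LPT): absenteeism at full wage,
    presenteeism at a discounted wage, household days at a leisure rate."""
    return (absent_days * DAILY_WAGE
            + present_days * DAILY_WAGE * presenteeism_discount
            + household_days * LEISURE_RATE)

# The study's mean LPT components: 24.6 absent, 38.8 presentee, 21.2 household days.
cost = annual_productivity_cost(24.6, 38.8, 21.2)
```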
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
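The adaptive step gradient method can be sketched generically: estimate a numerical gradient of the composite figure of merit over (alpha, beta, gamma), take a step, and halve the step size whenever the move fails to improve. The quadratic merit function below is only a stand-in for the correlation-peak metrics, with an arbitrary optimum at (0.3, 0.5, 0.2).

```python
def merit(p):
    """Placeholder composite figure of merit; peak at (0.3, 0.5, 0.2)."""
    a, b, g = p
    return -((a - 0.3) ** 2 + (b - 0.5) ** 2 + (g - 0.2) ** 2)

def adaptive_gradient_ascent(p, step=0.2, eps=1e-4, iters=200):
    for _ in range(iters):
        grad = []
        for i in range(3):                      # forward-difference gradient
            q = list(p)
            q[i] += eps
            grad.append((merit(q) - merit(p)) / eps)
        cand = [min(1.0, max(0.0, x + step * g)) for x, g in zip(p, grad)]
        if merit(cand) > merit(p):
            p = cand                            # accept the improving step
        else:
            step *= 0.5                         # adapt: halve the step on failure
    return p

best = adaptive_gradient_ascent([0.9, 0.1, 0.9])
```

In the real system each `merit` evaluation means re-running the correlator over the test suite, so keeping the iteration count low is what motivates the adaptive step over a fixed schedule.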
NASA Astrophysics Data System (ADS)
Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.
2017-05-01
Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution can not easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators and experimentation was also performed with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem specific parameter sets.
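Two of the PSO additions listed above, the exponential velocity decay and the extra attractive-force term toward the global best, drop straight into the standard velocity update. In the sketch below a simple sphere function stands in for the MSER detection-accuracy fitness; all coefficients are illustrative.

```python
import random

def fitness(p):
    """Stand-in fitness to maximize: drives the sphere function toward zero."""
    return -sum(x * x for x in p)

def pso(dim=3, n_particles=30, iters=500, w0=0.9, decay=0.99,
        c1=1.5, c2=1.5, c_attr=0.05, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pos, key=fitness)[:]
    for it in range(iters):
        w = w0 * decay ** it                       # exponential velocity decay
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d])
                             + c_attr * (gbest[d] - pos[i][d]))  # attractive force
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pos[i]) > fitness(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
```

The decay term damps late-stage oscillation much like the paper describes, while the deterministic attraction keeps stragglers from stalling far from the best-known MSER parameter set.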
The annual cycles of phytoplankton biomass
Winder, M.; Cloern, J.E.
2010-01-01
Terrestrial plants are powerful climate sentinels because their annual cycles of growth, reproduction and senescence are finely tuned to the annual climate cycle having a period of one year. Consistency in the seasonal phasing of terrestrial plant activity provides a relatively low-noise background from which phenological shifts can be detected and attributed to climate change. Here, we ask whether phytoplankton biomass also fluctuates over a consistent annual cycle in lake, estuarine-coastal and ocean ecosystems and whether there is a characteristic phenology of phytoplankton as a consistent phase and amplitude of variability. We compiled 125 time series of phytoplankton biomass (chlorophyll a concentration) from temperate and subtropical zones and used wavelet analysis to extract their dominant periods of variability and the recurrence strength at those periods. Fewer than half (48%) of the series had a dominant 12-month period of variability, commonly expressed as the canonical spring-bloom pattern. About 20 per cent had a dominant six-month period of variability, commonly expressed as the spring and autumn or winter and summer blooms of temperate lakes and oceans. These annual patterns varied in recurrence strength across sites, and did not persist over the full series duration at some sites. About a third of the series had no component of variability at either the six- or 12-month period, reflecting a series of irregular pulses of biomass. These findings show that there is high variability of annual phytoplankton cycles across ecosystems, and that climate-driven annual cycles can be obscured by other drivers of population variability, including human disturbance, aperiodic weather events and strong trophic coupling between phytoplankton and their consumers.
Regulation of phytoplankton biomass by multiple processes operating at multiple time scales adds complexity to the challenge of detecting climate-driven trends in aquatic ecosystems where the noise to signal ratio is high. © 2010 The Royal Society.
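Wavelet analysis is the paper's tool because it localizes periods in time; a plain Fourier periodogram illustrates the simpler, global question of whether a 12-month period dominates a series. The synthetic monthly chlorophyll record below is invented for the demonstration.

```python
import math
import random

# Synthetic monthly chlorophyll series: a spring-bloom annual cycle plus noise.
rng = random.Random(42)
N_YEARS = 10
series = [5.0 + 4.0 * math.sin(2.0 * math.pi * t / 12.0) + rng.gauss(0, 0.5)
          for t in range(N_YEARS * 12)]

def dominant_period(x):
    """Return the period (in samples) whose periodogram power is largest."""
    n = len(x)
    mean = sum(x) / n
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):                      # discrete Fourier frequencies
        re = sum((x[t] - mean) * math.cos(2.0 * math.pi * k * t / n) for t in range(n))
        im = sum((x[t] - mean) * math.sin(2.0 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_p:
            best_k, best_p = k, power
    return n / best_k

period = dominant_period(series)
```

A six-month bloom pattern would instead put the peak power at period 6; series with no consistent cycle, like a third of those in the study, show no single dominant peak.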
NASA Astrophysics Data System (ADS)
Tailanián, Matías; Castiglioni, Enrique; Musé, Pablo; Fernández Flores, Germán.; Lema, Gabriel; Mastrángelo, Pedro; Almansa, Mónica; Fernández Liñares, Ignacio; Fernández Liñares, Germán.
2015-10-01
Soybean producers suffer from caterpillar damage in many areas of the world. Estimated average economic losses are 500 million USD annually in Brazil, Argentina, Paraguay and Uruguay. Designing efficient pest control management using selective and targeted pesticide applications is extremely important from both economic and environmental perspectives. With that in mind, we conducted a research program during the 2013-2014 and 2014-2015 planting seasons in a 4,000 ha soybean farm, seeking to achieve early pest detection. Pest presence is currently evaluated using manual, labor-intensive counting methods based on sampling strategies which are time consuming and imprecise. The experiment was conducted as follows. Using manual counting methods as ground truth, a spectrometer capturing reflectance from 400 to 1100 nm was used to measure the reflectance of soy plants. A first conclusion, resulting from measuring the spectral response at the leaf level, was that stress is a property of the whole plant, since different leaves with different levels of damage yielded the same spectral response. Then, to assess the applicability of supervised classification of plants as healthy, biotic-stressed or abiotic-stressed, a pipeline of feature extraction and selection from leaf spectral signatures, combined with a support vector machine (SVM) classifier, was designed. Optimization of the SVM parameters using grid search with cross-validation, along with classification evaluation by ten-fold cross-validation, showed a correct classification rate of 95%, consistent across both seasons. Controlled experiments using cages with different numbers of caterpillars, including caterpillar-free plants, were also conducted to evaluate consistency in the trends of the spectral response as well as of the extracted features.
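The grid-search-with-cross-validation step generalizes beyond SVMs. To keep the sketch dependency-free, the candidate hyperparameter below is the quantile of healthy-plant reflectance used to set a decision threshold, with synthetic one-band reflectances standing in for the 400-1100 nm spectra; an SVM grid search over (C, gamma) would follow the same loop.

```python
import random

# Synthetic single-feature data: healthy plants reflect more than stressed ones.
rng = random.Random(0)
data = ([(rng.gauss(0.7, 0.1), 0) for _ in range(60)]      # label 0 = healthy
        + [(rng.gauss(0.4, 0.1), 1) for _ in range(60)])   # label 1 = stressed
rng.shuffle(data)

def train_threshold(train, q):
    """'Training': set the threshold at the q-quantile of healthy reflectance."""
    xs = sorted(x for x, y in train if y == 0)
    return xs[int(q * (len(xs) - 1))]

def accuracy(samples, threshold):
    return sum((x < threshold) == bool(y) for x, y in samples) / len(samples)

def grid_search_cv(data, grid, k=5):
    """Score each candidate q by k-fold cross-validation; return the best q."""
    folds = [data[i::k] for i in range(k)]
    def cv_score(q):
        total = 0.0
        for i, held_out in enumerate(folds):
            train = [d for j, f in enumerate(folds) if j != i for d in f]
            total += accuracy(held_out, train_threshold(train, q))
        return total / k
    return max(grid, key=cv_score)

best_q = grid_search_cv(data, [i / 20.0 for i in range(1, 20)])
```

Because each candidate is scored only on folds it was not fitted on, the selected hyperparameter is less prone to the optimistic bias of tuning on the full training set.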
Van Wynsberge, Simon; Gilbert, Antoine; Guillemot, Nicolas; Heintz, Tom; Tremblay-Boyer, Laura
2017-07-01
Extensive biological field surveys are costly and time consuming. To optimize sampling and ensure regular monitoring over the long term, identifying informative indicators of anthropogenic disturbances is a priority. In this study, we generated 1800 candidate indicators by combining metrics measured from coral, fish, and macro-invertebrate assemblages surveyed from 2006 to 2012 in the vicinity of an ongoing mining project in the Voh-Koné-Pouembout lagoon, New Caledonia. We performed a power analysis to identify a subset of indicators that would best discriminate temporal changes due to a simulated chronic anthropogenic impact. Only 4% of tested indicators were likely to detect a 10% annual decrease in values with sufficient power (>0.80). Corals generally provided higher statistical power than macro-invertebrates and fishes because of lower natural variability and higher occurrence. For the same reasons, higher taxonomic ranks provided higher power than lower taxonomic ranks. Nevertheless, a number of families of common sedentary or sessile macro-invertebrates and fishes also performed well in detecting changes: Echinometridae, Isognomidae, Muricidae, Tridacninae, Arcidae, and Turbinidae for macro-invertebrates and Pomacentridae, Labridae, and Chaetodontidae for fishes. Interestingly, these families did not provide high power in all geomorphological strata, suggesting that the ability of indicators to detect anthropogenic impacts is closely linked to reef geomorphology. This study provides a first operational step toward identifying statistically relevant indicators of anthropogenic disturbances in New Caledonia's coral reefs, which can be useful in similar tropical reef ecosystems where little information is available regarding the responses of ecological indicators to anthropogenic disturbances.
Santana, Victor M.; Alday, Josu G.; Lee, HyoHyeMi; Allen, Katherine A.; Marrs, Rob H.
2016-01-01
A present challenge in fire ecology is to optimize management techniques so that ecological services are maximized and C emissions minimized. Here, we modeled the effects of different prescribed-burning rotation intervals and of wildfires on carbon emissions (present and future) in British moorlands. Biomass-accumulation curves from four Calluna-dominated ecosystems along a north-south gradient in Great Britain were calculated and used within a matrix model based on Markov chains to calculate above-ground biomass loads and annual C emissions under different prescribed-burning rotation intervals. Additionally, we assessed the interaction of these parameters with decreasing wildfire return intervals. We observed that litter accumulation patterns varied between sites. Northern sites (colder and wetter) accumulated lower amounts of litter over time than southern sites (hotter and drier). The accumulation patterns of the living vegetation dominated by Calluna were determined by site-specific conditions. The optimal prescribed-burning rotation interval for minimizing annual carbon emissions also differed between sites: the optimal rotation interval for northern sites was between 30 and 50 years, whereas for southern sites a hump-backed relationship was found, with the optimal interval either between 8 and 10 years or between 30 and 50 years. Increasing wildfire frequency interacted with prescribed-burning rotation intervals by both increasing C emissions and modifying the optimum prescribed-burning interval for minimum C emissions. This highlights the importance of studying site-specific biomass accumulation patterns with respect to environmental conditions for identifying suitable fire-rotation intervals that minimize C emissions. PMID:27880840
Dose estimation and dating of pottery from Turkey
NASA Astrophysics Data System (ADS)
Altay Atlıhan, M.; Şahiner, Eren; Soykal Alanyalı, Feriştah
2012-06-01
The luminescence method is a widely used technique for environmental dosimetry and for dating archaeological and geological materials. In this study, the equivalent dose (ED) and annual dose rate (AD) of an archaeological sample were measured. The age of the material was calculated as the equivalent dose divided by the annual dose rate. The archaeological sample was taken from Antalya, Turkey. Samples were prepared by the fine-grain technique, and the equivalent dose was found using the multiple-aliquot-additive-dose (MAAD) and single-aliquot regeneration (SAR) techniques. The short-shine normalization MAAD and long-shine normalization MAAD were also applied, and the results of the methods were compared with each other. The optimal preheat temperature was found to be 200 °C for 10 min. The annual dose rate was determined from the concentrations of the major radioactive isotopes, measured using a high-purity germanium detector and a low-level alpha counter. The age of the sample was found to be 510±40 years.
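The dating equation stated above (age = ED / AD) is a one-line computation; below is a minimal sketch with quadrature error propagation. The dose values are illustrative, chosen only to land near the reported 510 ± 40 years, not the paper's measurements:

```python
# Worked example of the luminescence dating equation: age = equivalent dose
# (ED, in Gy) divided by annual dose rate (AD, in Gy/yr).  For a quotient,
# relative uncertainties combine in quadrature.  Input values are illustrative.
def luminescence_age(ed_gy, ed_err, ad_gy_per_yr, ad_err):
    age = ed_gy / ad_gy_per_yr
    rel = ((ed_err / ed_gy) ** 2 + (ad_err / ad_gy_per_yr) ** 2) ** 0.5
    return age, age * rel

age, err = luminescence_age(2.04, 0.12, 0.004, 0.0002)
print(f"age = {age:.0f} ± {err:.0f} years")  # age = 510 ± 39 years
```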
Kemper, Judith A; Donahue, Donald A; Harris, Judith S
2003-08-01
A smaller active duty force and an increased operational tempo have made the Reserve components (RC) essential to the accomplishment of the mission of the U.S. Army. One critical factor in meeting this mission is maintaining the optimal health of each soldier. Baseline health data about the RC are currently not being collected, even though increasing numbers of reserve soldiers are being activated. The Annual Health Certification and Survey is being developed as a way to meet the RCs' statutory requirement for annual certification of health while at the same time generating and tracking baseline data on each reservist in a longitudinal health file, the Health Assessment Longitudinal File. This article discusses the Annual Health Certification Questionnaire/Health Assessment Longitudinal File, which will greatly enhance the Army's ability to accurately certify the health status of the RC and to track health in relation to training, mission activities, and deployment.
Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
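A scalar sketch of the idea described above: a Kalman filter tracks the process, and the Gaussian predictive distribution d steps ahead gives the probability of exceeding a fixed critical threshold. The AR(1) model and all coefficients below are illustrative assumptions, not the paper's system:

```python
# Scalar Kalman filter plus a d-step-ahead threshold-crossing probability.
# State model (illustrative): x[t+1] = a*x[t] + w, observation y = x + v.
import math, random

random.seed(3)
a, q, r = 0.95, 0.05, 0.2  # transition coeff., process noise var., obs. noise var.

def kalman_step(xhat, p, y):
    # Predict one step, then update with the observation y.
    xpred, ppred = a * xhat, a * a * p + q
    k = ppred / (ppred + r)
    return xpred + k * (y - xpred), (1 - k) * ppred

def crossing_prob(xhat, p, threshold, d):
    """P(state exceeds threshold at step d), from the Gaussian predictive."""
    mean, var = xhat, p
    for _ in range(d):
        mean, var = a * mean, a * a * var + q  # variance grows with horizon
    z = (threshold - mean) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2))   # Gaussian upper-tail probability

# Run the filter on a synthetic measurement stream, then query the alarm.
xhat, p, x = 0.0, 1.0, 0.0
for _ in range(100):
    x = a * x + random.gauss(0, math.sqrt(q))
    xhat, p = kalman_step(xhat, p, x + random.gauss(0, math.sqrt(r)))
prob = crossing_prob(xhat, p, 2.0, 5)
print(f"P(exceed 2.0 at a 5-step horizon) = {prob:.3f}")
```

An alarm would fire when this probability exceeds a design level chosen to trade false alarms against detection probability; note this sketch evaluates the marginal exceedance at step d, not the full level-crossing event over the whole window.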
Peterson, S W; Robertson, D; Polf, J
2011-01-01
In this work, we investigate the use of a three-stage Compton camera to measure secondary prompt gamma rays emitted from patients treated with proton beam radiotherapy. The purpose of this study was (1) to develop an optimal three-stage Compton camera specifically designed to measure prompt gamma rays emitted from tissue and (2) to determine the feasibility of using this optimized Compton camera design to measure and image prompt gamma rays emitted during proton beam irradiation. The three-stage Compton camera was modeled in Geant4 as three high-purity germanium detector stages arranged in parallel-plane geometry. Initially, an isotropic gamma source ranging from 0 to 15 MeV was used to determine lateral width and thickness of the detector stages that provided the optimal detection efficiency. Then, the gamma source was replaced by a proton beam irradiating a tissue phantom to calculate the overall efficiency of the optimized camera for detecting emitted prompt gammas. The overall calculated efficiencies varied from ~10^-6 to 10^-3 prompt gammas detected per proton incident on the tissue phantom for several variations of the optimal camera design studied. Based on the overall efficiency results, we believe it feasible that a three-stage Compton camera could detect a sufficient number of prompt gammas to allow measurement and imaging of prompt gamma emission during proton radiotherapy. PMID:21048295
Detecting glaucomatous change in visual fields: Analysis with an optimization framework.
Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher
2015-12-01
Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.
Heliostat cost optimization study
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus
2016-05-01
This paper presents a methodology for a heliostat cost optimization study. First, different variants of small, medium-sized and large heliostats are designed. Then, the respective costs, tracking and optical quality are determined. For the calculation of optical quality, a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. Eventually the levelised electricity costs (LCOE) for a reference power tower plant are calculated. Before each annual simulation run the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called `Stellio'.
A Methodology for the Optimization of Disaggregated Space System Conceptual Designs
2015-06-18
orbit disaggregated space systems. Savings of $82 million are identified for an optimized fire detection system. Savings of $5.7 billion are...
USDA-ARS?s Scientific Manuscript database
A multi-spectral fluorescence imaging technique was used to detect defective cherry tomatoes. The fluorescence excitation and emission matrix was measured for defect, sound-surface, and stem areas to determine the optimal fluorescence excitation and emission wavelengths for discrimination. Two-...
Detecting insect pollinator declines on regional and global scales
LeBuhn, Gretchen; Droege, Sam; Connor, Edward F.; Gemmill-Herren, Barbara; Potts, Simon G.; Minckley, Robert L.; Griswold, Terry; Jean, Robert; Kula, Emanuel; Roubik, David W.; Cane, Jim; Wright, Karen W.; Frankie, Gordon; Parker, Frank
2013-01-01
Recently there has been considerable concern about declines in bee communities in agricultural and natural habitats. The value of pollination to agriculture, provided primarily by bees, is >$200 billion/year worldwide, and in natural ecosystems it is thought to be even greater. However, no monitoring program exists to accurately detect declines in abundance of insect pollinators; thus, it is difficult to quantify the status of bee communities or estimate the extent of declines. We used data from 11 multiyear studies of bee communities to devise a program to monitor pollinators at regional, national, or international scales. In these studies, 7 different methods for sampling bees were used and bees were sampled on 3 different continents. We estimated that a monitoring program with 200-250 sampling locations each sampled twice over 5 years would provide sufficient power to detect small (2-5%) annual declines in the number of species and in total abundance and would cost U.S.$2,000,000. To detect declines as small as 1% annually over the same period would require >300 sampling locations. Given the role of pollinators in food security and ecosystem function, we recommend establishment of integrated regional and international monitoring programs to detect changes in pollinator communities.
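A Monte Carlo sketch of the power question above: what fraction of simulated surveys would detect a given annual decline? The paired-site model, the noise level (cv), and the normal-approximation one-sided test below are illustrative assumptions, not the study's actual power analysis:

```python
# Simulation-based statistical power for detecting an annual decline from two
# surveys per site, five years apart.  Model and noise level are illustrative.
import random, statistics

random.seed(1)

def power(n_sites, annual_decline, years=5, cv=0.5, sims=400):
    detected = 0
    for _ in range(sims):
        true_change = years * annual_decline          # e.g. 5 * 0.04 = 0.20
        # Paired change in log-abundance at each site over the interval.
        diffs = [random.gauss(-true_change, cv) for _ in range(n_sites)]
        mean = statistics.fmean(diffs)
        se = statistics.stdev(diffs) / n_sites ** 0.5
        if mean / se < -1.645:                        # one-sided test, alpha=0.05
            detected += 1
    return detected / sims

print(f"power, 200 sites, 4%/yr decline: {power(200, 0.04):.2f}")
print(f"power, 200 sites, 1%/yr decline: {power(200, 0.01):.2f}")
```

Under these toy assumptions the steeper decline is detected almost surely while the 1% decline often is not, mirroring the abstract's conclusion that smaller declines require more sampling locations.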
Foddai, A C G; Grant, I R
2017-05-01
To validate an optimized peptide-mediated magnetic separation (PMS)-phage assay for detection of viable Mycobacterium avium subsp. paratuberculosis (MAP) in milk. Inclusivity, specificity and 50% limit of detection (LOD50) of the optimized PMS-phage assay were assessed. Plaques were obtained for all 43 MAP strains tested. Of 12 other Mycobacterium sp. tested, only Mycobacterium bovis BCG produced small numbers of plaques. The LOD50 of the PMS-phage assay was 0.93 MAP cells per 50 ml milk, which was better than both PMS-qPCR and PMS-culture. When individual milks (n = 146) and bulk tank milk (BTM, n = 22) obtained from Johne's affected herds were tested by the PMS-phage assay, viable MAP were detected in 31 (21.2%) of 146 individual milks and 13 (59.1%) of 22 BTM, with MAP numbers ranging from 6 to 948 plaque-forming units per 50 ml milk. PMS-qPCR and PMS-MGIT culture proved to be less sensitive tests than the PMS-phage assay. The optimized PMS-phage assay is the most sensitive and specific method available for the detection of viable MAP in milk. Further work is needed to streamline the PMS-phage assay, because the assay's multistep format currently makes it unsuitable for adoption by the dairy industry as a screening test. The inclusivity (ability to detect all MAP strains), specificity (ability to detect only MAP) and detection sensitivity (ability to detect low numbers of MAP) of the optimized PMS-phage assay have been comprehensively demonstrated for the first time. © 2017 The Society for Applied Microbiology.
Director, Operational Test and Evaluation FY 2004 Annual Report
2004-01-01
detection, identification, and sampling capability for both fixed-site and mobile operations. The system must automatically detect and identify up to ten...staffing within the Services. SYSTEM DESCRIPTION AND MISSION The Services envision JCAD as a hand-held device that automatically detects, identifies, and
Ullah, Sami; Daud, Hanita; Dass, Sarat C; Khan, Habib Nawaz; Khalil, Alamgir
2017-11-06
The ability to detect potential space-time clusters in spatio-temporal data on disease occurrences is necessary for conducting surveillance and implementing disease prevention policies. Most existing techniques use geometrically shaped (circular, elliptical or square) scanning windows to discover disease clusters. In certain situations, where the disease occurrences tend to cluster in very irregularly shaped areas, these algorithms are not feasible in practice for the detection of space-time clusters. To address this problem, a new algorithm is proposed, which uses a co-clustering strategy to detect prospective and retrospective space-time disease clusters with no restriction on shape and size. The proposed method detects space-time disease clusters by tracking the changes in the space-time occurrence structure instead of performing an in-depth search over space. This method was used to detect potential clusters in the annual and monthly malaria data for Khyber Pakhtunkhwa Province, Pakistan from 2012 to 2016, visualising the results on a heat map. The results of the annual data analysis showed that the most likely hotspot emerged in three sub-regions in the years 2013-2014. The most likely hotspots in the monthly data appeared in the months of July to October of each year and showed a strong periodic trend.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez-Cardona, D; Li, K; Lubner, M G
Purpose: The introduction of the highly nonlinear MBIR algorithm to clinical CT systems has made CNR an invalid metric for kV optimization. The purpose of this work was to develop a task-based framework to unify kV and mAs optimization for both FBP- and MBIR-based CT systems. Methods: The kV-mAs optimization was formulated as a constrained minimization problem: to select kV and mAs to minimize dose under the constraint of maintaining the detection performance as clinically prescribed. To experimentally solve this optimization problem, exhaustive measurements of detectability index (d’) for a hepatic lesion detection task were performed at 15 different mA levels and 4 kV levels using an anthropomorphic phantom. The measured d’ values were used to generate an iso-detectability map; similarly, dose levels recorded at different kV-mAs combinations were used to generate an iso-dose map. The iso-detectability map was overlaid on top of the iso-dose map so that for a prescribed detectability level d’, the optimal kV-mA can be determined from the crossing between the d’ contour and the dose contour that corresponds to the minimum dose. Results: Taking d’=16 as an example: the kV-mAs combinations on the measured iso-d’ line of MBIR are 80–150 (3.8), 100–140 (6.6), 120–150 (11.3), and 140–160 (17.2), where the values in parentheses are measured dose values in mGy. As a result, the optimal kV was 80 and the optimal mA was 150. In comparison, the optimal kV and mA for FBP were 100 and 500, which corresponded to a dose level of 24 mGy. Results of in vivo animal experiments were consistent with the phantom results. Conclusion: A new method to optimize kV and mAs selection has been developed. This method is applicable to both linear and nonlinear CT systems such as those using MBIR. Additional dose savings can be achieved by combining MBIR with this method. This work was partially supported by an NIH grant R01CA169331 and GE Healthcare. K. Li, D. Gomez-Cardona, M. G. Lubner: Nothing to disclose. P. J. Pickhardt: Co-founder, VirtuoCTC, LLC; Stockholder, Cellectar Biosciences, Inc. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.
Soil-test N recommendations augmented with PEST optimized RZWQM simulations
USDA-ARS?s Scientific Manuscript database
Fertilizer application rates based on soil nitrate tests may help reduce N loss to the environment. The late spring nitrate test (LSNT) is one such test and improving our understanding of year-to-year N rate differences may increase its use. It is known that annual plant available soil N is associat...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro; Trujillo, Susie
During calendar year 2017, Sandia National Laboratories (SNL) made strides toward developing an open, portable design platform rich in high-performance computing (HPC)-enabled modeling, analysis and synthesis tools. The main focus was to lay the foundations of the core interfaces that will enable plug-and-play insertion of synthesis optimization technologies in the areas of modeling, analysis and synthesis.
Evaluating spatially explicit burn probabilities for strategic fire management planning
C. Miller; M.-A. Parisien; A. A. Ager; M. A. Finney
2008-01-01
Spatially explicit information on the probability of burning is necessary for virtually all strategic fire and fuels management planning activities, including conducting wildland fire risk assessments, optimizing fuel treatments, and prevention planning. Predictive models providing a reliable estimate of the annual likelihood of fire at each point on the landscape have...
Millimeter wave satellite concepts. Volume 2: Technical report
NASA Technical Reports Server (NTRS)
Hilsen, N. B.; Holland, L. D.; Wallace, R. W.; Kelly, D. L.; Thomas, R. R.; Vogler, F. H.
1979-01-01
Identification of technologies for millimeter-wave satellite communication systems, and assessment of the relative risks of these technologies, were accomplished through subsystem modeling and link optimization for both point-to-point and broadcast applications. The results, in terms of annual cost per channel to the user from a commercial viewpoint, are described.
A joint economic lot-sizing problem with fuzzy demand, defective items and environmental impacts
NASA Astrophysics Data System (ADS)
Jauhari, W. A.; Laksono, P. W.
2017-11-01
In this paper, a joint economic lot-sizing problem involving a vendor and a buyer was proposed. The buyer ordered products from the vendor to fulfill end-customer demand; the vendor produced a batch of products and delivered it to the buyer. The production process at the vendor was imperfect and produced a number of defective products. The production rate was assumed to be adjustable to control the output of the vendor's production. A continuous review policy was adopted by the buyer to manage his inventory level. In addition, the average annual demand was considered to be fuzzy rather than constant. The proposed model contributed to the current inventory literature by allowing the inclusion of fuzzy annual demand, imperfect production, emission cost, and an adjustable production rate. The proposed model also considered the carbon emission cost resulting from transportation activity. A mathematical model was developed for obtaining the optimal ordering quantity, safety factor and number of deliveries so that the joint total cost was minimized. Furthermore, an iterative procedure was suggested to determine the optimal solutions.
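A miniature of the kind of search the last sentence describes: enumerate the order quantity Q and the number of deliveries n to minimize a joint vendor-buyer cost. The cost function and all coefficients below are hypothetical stand-ins, far simpler than the paper's fuzzy-demand model with defects and emission costs:

```python
# Grid enumeration of a toy joint economic lot-sizing cost.  Coefficients are
# hypothetical: annual demand D, ordering/setup costs, and holding costs.
D, S_buyer, S_vendor, h_buyer, h_vendor = 1000, 50, 400, 4.0, 2.0

def joint_cost(q, n):
    # Buyer ordering + vendor setup + buyer holding + vendor holding costs
    # for a policy where the vendor produces n*q and ships n lots of size q.
    return (D / q) * S_buyer + (D / (n * q)) * S_vendor \
        + (q / 2) * h_buyer + (n * q / 2) * h_vendor

best = min(((q, n) for q in range(10, 501, 10) for n in range(1, 11)),
           key=lambda qn: joint_cost(*qn))
print(f"Q*={best[0]}, n*={best[1]}, cost={joint_cost(*best):.1f}")
```

The paper's iterative procedure would instead alternate closed-form updates of Q, the safety factor, and n, but brute-force enumeration makes the structure of the objective easy to see on a toy instance.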
Global Surface Temperature Change and Uncertainties Since 1861
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)
2002-01-01
The objective of this talk is to analyze the warming trend, and its uncertainties, in the global and hemispheric surface temperatures. Using a statistical optimal averaging scheme, the land surface air temperature and sea surface temperature observational data are used to compute the spatial-average annual-mean surface air temperature. The optimal averaging method is derived from minimizing the mean square error between the true and estimated averages and uses the empirical orthogonal functions. The method can accurately estimate the errors of the spatial average due to observational gaps and random measurement errors. In addition, three independent uncertainty factors are quantified: urbanization, changes in the in situ observational practices, and sea surface temperature data corrections. Based on these uncertainties, the best linear fit to annual global surface temperature gives an increase of 0.61 ± 0.16 C between 1861 and 2000. This lecture will also touch on the impact of global change on nature and the environment, as well as the latest assessment methods for the attribution of global change.
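The final trend estimate described above reduces to an ordinary least-squares fit of annual means against year. A minimal sketch on a synthetic series (0.61 C of linear warming plus noise, standing in for the observational record, with no uncertainty terms):

```python
# OLS linear trend over a synthetic 1861-2000 annual temperature series.
import random

random.seed(2)
years = list(range(1861, 2001))
n = len(years)
trend_per_year = 0.61 / (2000 - 1861)
temps = [trend_per_year * (y - 1861) + random.gauss(0, 0.1) for y in years]

# OLS slope: cov(year, temp) / var(year).
mx = sum(years) / n
my = sum(temps) / n
slope = sum((x - mx) * (t - my) for x, t in zip(years, temps)) / \
        sum((x - mx) ** 2 for x in years)
total_change = slope * (2000 - 1861)
print(f"fitted change 1861-2000: {total_change:.2f} C")
```

The talk's ± 0.16 C comes from the optimal-averaging error analysis plus the three systematic factors, not from the regression residuals alone.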
Optimal Sizing of Energy Storage for Community Microgrids Considering Building Thermal Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Li, Zhi; Starke, Michael R.
This paper proposes an optimization model for the optimal sizing of energy storage in community microgrids considering the building thermal dynamics and customer comfort preference. The proposed model minimizes the annualized cost of the community microgrid, including energy storage investment, purchased energy cost, demand charge, energy storage degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation. The decision variables are the power and energy capacity of the invested energy storage. In particular, we assume the heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently by the microgrid central controller while maintaining the indoor temperature in the comfort range set by customers. For this purpose, the detailed thermal dynamic characteristics of buildings have been integrated into the optimization model. Numerical simulation shows significant cost reduction by the proposed model. The impacts of various costs on the optimal solution are investigated by sensitivity analysis.
Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.
Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon
2009-01-01
Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space. Through numerical approaches, the range was 134% in 2-dimensional space and 143% in 3-dimensional space.
Optimization of Collision Detection in Surgical Simulations
NASA Astrophysics Data System (ADS)
Custură-Crăciun, Dan; Cochior, Daniel; Neagu, Corneliu
2014-11-01
Just like flight and spaceship simulators already represent a standard, we expect that soon enough surgical simulators will become a standard in medical applications. A simulation's quality is strongly related to the image quality as well as to the degree of realism of the simulation. Increased quality requires increased resolution and increased representation speed but, more importantly, a larger number of mathematical equations. To make this possible we need not only more efficient computers, but especially more optimization of the calculation process. A simulator executes one of its most complex sets of calculations each time it detects a contact between virtual objects; optimization of collision detection is therefore critical to the working speed of a simulator and hence to its quality.
Interface design for CMOS-integrated Electrochemical Impedance Spectroscopy (EIS) biosensors.
Manickam, Arun; Johnson, Christopher Andrew; Kavusi, Sam; Hassibi, Arjang
2012-10-29
Electrochemical Impedance Spectroscopy (EIS) is a powerful electrochemical technique to detect biomolecules. EIS has the potential of carrying out label-free and real-time detection, and in addition, can be easily implemented using electronic integrated circuits (ICs) that are built through standard semiconductor fabrication processes. This paper focuses on the various design and optimization aspects of EIS ICs, particularly the bio-to-semiconductor interface design. We discuss, in detail, considerations such as the choice of the electrode surface in view of IC manufacturing, surface linkers, and development of optimal bio-molecular detection protocols. We also report experimental results, using both macro- and micro-electrodes to demonstrate the design trade-offs and ultimately validate our optimization procedures.
NASA Technical Reports Server (NTRS)
Christian, Hugh J.; Blakeslee, Richard J.; Boccippio, Dennis J.; Boeck, William L.; Buechler, Dennis E.; Driscoll, Kevin T.; Goodman, Steven J.; Hall, John M.; Koshak, William J.; Mach, Douglas M.;
2002-01-01
The Optical Transient Detector (OTD) is a space-based instrument specifically designed to detect and locate lightning discharges as it orbits the Earth. This instrument is a scientific payload on the MicroLab-1 satellite that was launched into a low-earth, 70 deg. inclination orbit in April 1995. Given the orbital trajectory of the satellite, most regions of the earth are observed by the OTD instrument more than 400 times during a one-year period, and the average duration of each observation is 2 minutes. The OTD instrument optically detects lightning flashes that occur within its 1300 × 1300 km field-of-view during both day and night conditions. A statistical examination of OTD lightning data reveals that nearly 1.4 billion flashes occur annually over the entire earth. This annual flash count translates to an average of 44 +/- 5 lightning flashes (intracloud and cloud-to-ground combined) occurring around the globe every second, which is well below the traditional estimate of 100 flashes per second that was derived in 1925 from world thunder-day records. The range of uncertainty for the OTD global totals represents primarily the uncertainty (and variability) in the flash detection efficiency of the instrument. The OTD measurements have been used to construct lightning climatology maps that demonstrate the geographical and seasonal distribution of lightning activity for the globe. An analysis of this annual lightning distribution confirms that lightning occurs mainly over land areas, with an average land:ocean ratio of 10:1. A dominant Northern Hemisphere summer peak occurs in the annual cycle, and evidence is found for a tropically-driven semiannual cycle.
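The per-second figure quoted above follows directly from the annual total:

```python
# Convert the OTD annual flash count to a global per-second rate.
annual_flashes = 1.4e9
seconds_per_year = 365.25 * 24 * 3600
rate = annual_flashes / seconds_per_year
print(f"{rate:.0f} flashes per second")  # 44 flashes per second
```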
Optimization of Water Resources and Agricultural Activities for Economic Benefit in Colorado
NASA Astrophysics Data System (ADS)
LIM, J.; Lall, U.
2017-12-01
The limited water resources available for irrigation are a key constraint on Colorado's important agricultural sector. As climate change and groundwater depletion reshape these resources, it is essential to understand the economic potential of water resources under different agricultural production practices. This study uses a linear programming optimization at the county spatial scale and the annual temporal scale to study the optimal allocation of water withdrawals and crop choices. The model, AWASH, reflects streamflow constraints between different extraction points, six field crops, and a distinct irrigation decision for maize and wheat. The optimized decision variables, under different environmental, social, economic, and physical constraints, provide long-term solutions for ground and surface water distribution and for land use decisions so that the state can generate the maximum net revenue. Colorado, one of the largest agricultural producers, is tested as a case study, and the sensitivity to water price and to climate variability is explored.
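A toy version of the allocation problem described above, solved without an LP library: with a single linear water constraint plus per-crop acreage caps (a continuous-knapsack structure), filling crops greedily by revenue per unit of water is optimal. Crop names and all coefficients below are hypothetical, nothing like AWASH's multi-constraint formulation:

```python
# Greedy optimal allocation for a one-constraint linear program: choose
# irrigated acreage per crop to maximize net revenue within a water budget.
crops = {  # name: (net revenue $/acre, water use acre-ft/acre, max acres)
    "maize": (700, 2.0, 400),
    "wheat": (350, 1.2, 600),
}
water_budget = 1000.0  # acre-ft

plan, revenue = {}, 0.0
# Fill crops in decreasing order of revenue per acre-ft of water; with only
# one shared linear constraint this greedy order is provably optimal.
for name, (rev, wat, amax) in sorted(
        crops.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    acres = min(amax, water_budget / wat)
    water_budget -= acres * wat
    plan[name] = round(acres, 1)
    revenue += acres * rev
print(plan, f"revenue=${revenue:,.0f}")
```

With multiple constraints (streamflow between extraction points, land, labor) the greedy order breaks down and a general LP solver is needed, which is what the study's formulation requires.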
Temporal Causal Diagrams for Diagnosing Failures in Cyber Physical Systems
2014-10-02
Table 3. Transition Information for Distance Relay's behavioral model. Rows 1-7 deal with the anomaly detection ... ANNUAL CONFERENCE OF THE PROGNOSTICS AND HEALTH MANAGEMENT SOCIETY 2014 ... fall into the Zone settings of ... event systems has
2008-06-01
Pathology, Society of Surgical Oncologists Annual Meeting, Los Angeles, CA, March 2003. 15. Optical Imaging for Minimally Invasive Medical Diagnosis...talk, CINT Annual Workshop, Los Alamos National Laboratory/Sandia National Laboratory. 70. “Plasmonic Nanoparticles: Molecular Orbitals writ large...Surgical Research, Fort Sam Houston, San Antonio, TX 08/28/07 113. USC Grand Rounds, USC Norris Cancer Center, Los Angeles, CA 09/10/07-09/11/07 114
Li, Baoguang; Liu, Huanli; Wang, Weimin
2017-11-09
Shiga toxin-producing Escherichia coli (STEC), including E. coli O157:H7, are responsible for numerous foodborne outbreaks annually worldwide. E. coli O157:H7, as well as pathogenic non-O157:H7 STECs, can cause life-threatening complications, such as bloody diarrhea (hemorrhagic colitis) and hemolytic-uremic syndrome (HUS). Previously, we developed a real-time PCR assay to detect E. coli O157:H7 in foods by targeting Z3276, a unique putative fimbrial protein marker. To extend the detection spectrum of the assay, we report a multiplex real-time PCR assay that specifically detects E. coli O157:H7 and screens for non-O157 STEC by targeting Z3276 and the Shiga toxin genes (stx1 and stx2). An internal amplification control (IAC) was also incorporated into the assay to monitor amplification efficiency. The multiplex real-time PCR assay was developed on the Life Technologies ABI 7500 platform with standard chemistry. The optimal amplification mixture contains 12.5 μl of 2× Universal Master Mix (Life Technologies), 200 nM forward and reverse primers, appropriate concentrations of the four probes [Z3276 (80 nM), stx1 (80 nM), stx2 (20 nM), and IAC (40 nM)], 2 μl of template DNA, and water to a total volume of 25 μl. The amplification conditions were set as follows: activation of TaqMan at 95 °C for 10 min, then 40 cycles of denaturation at 95 °C for 10 s and annealing/extension at 60 °C for 60 s. The limit of detection (LOD) of the multiplex assay was determined to be 200 fg of bacterial DNA, equivalent to 40 CFU per reaction, which is similar to the LODs obtained in single-target PCRs. Inclusivity and exclusivity determinations were performed with 196 bacterial strains. All E. coli O157:H7 strains (n = 135) were detected as positive, and all STEC strains (n = 33) were positive for stx1, stx2, or both (Table 1). 
No cross reactivity was detected with Salmonella enterica, Shigella strains, or any other pathogenic strains tested. A multiplex real-time PCR assay that can rapidly and simultaneously detect E. coli O157:H7 and screen for non-O157 STEC strains has been developed and assessed for efficacy. The inclusivity and exclusivity tests demonstrated high sensitivity and specificity of the multiplex real-time PCR assay. In addition, this multiplex assay was shown to be effective for the detection of E. coli O157:H7 from two common food matrices, beef and spinach, and may be applied for detection of E. coli O157:H7 and screening for non-O157 STEC strains from other food matrices as well.
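The 25 μl reaction described above is a simple dilution calculation (C1·V1 = C2·V2). The sketch below assembles it programmatically; the primer and probe stock concentrations (10 μM and 2.5 μM) are assumptions for illustration, since the abstract reports only final concentrations.

```python
# Sketch of the 25 ul multiplex reaction assembly described above.
# Stock concentrations are assumed (10 uM primers, 2.5 uM probes);
# the abstract specifies only the final concentrations.

TOTAL_UL = 25.0

def component_volume(final_nM, stock_nM):
    # C1*V1 = C2*V2  ->  V1 = C2 * V2 / C1
    return final_nM * TOTAL_UL / stock_nM

mix = {"master_mix": 12.5, "template_DNA": 2.0}
mix["primer_fwd"] = component_volume(200, 10_000)   # 200 nM final
mix["primer_rev"] = component_volume(200, 10_000)
for probe, final_nM in [("Z3276", 80), ("stx1", 80), ("stx2", 20), ("IAC", 40)]:
    mix[probe] = component_volume(final_nM, 2_500)
mix["water"] = round(TOTAL_UL - sum(mix.values()), 2)   # fill to 25 ul
print(mix)
```

The water volume is whatever remains after the fixed components, so the totals always close to 25 μl.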
48 CFR 3.502-2 - Subcontractor kickbacks.
Code of Federal Regulations, 2010 CFR
2010-10-01
Acute respiratory infections among returning Hajj pilgrims-Jordan, 2014.
Al-Abdallat, Mohammad Mousa; Rha, Brian; Alqasrawi, Sultan; Payne, Daniel C; Iblan, Ibrahim; Binder, Alison M; Haddadin, Aktham; Nsour, Mohannad Al; Alsanouri, Tarek; Mofleh, Jawad; Whitaker, Brett; Lindstrom, Stephen L; Tong, Suxiang; Ali, Sami Sheikh; Dahl, Rebecca Moritz; Berman, LaShondra; Zhang, Jing; Erdman, Dean D; Gerber, Susan I
2017-04-01
The emergence of Middle East Respiratory Syndrome coronavirus (MERS-CoV) has prompted enhanced surveillance for respiratory infections among pilgrims returning from the Hajj, one of the largest annual mass gatherings in the world. Our objective was to describe the epidemiology and etiologies of respiratory illnesses among pilgrims returning to Jordan after the 2014 Hajj. Surveillance for respiratory illness among returning pilgrims was conducted at sentinel health care facilities using epidemiologic surveys and molecular diagnostic testing of upper respiratory specimens for multiple respiratory pathogens, including MERS-CoV. Among the 125 subjects, 58% tested positive for at least one virus; 47% tested positive for rhino/enterovirus. No cases of MERS-CoV were detected. The majority of pilgrims returning to Jordan from the 2014 Hajj with respiratory illness had a viral etiology, but none were due to MERS-CoV. A greater understanding of the epidemiology of acute respiratory infections among travelers returning to other countries after the Hajj should help optimize surveillance systems and inform public health response practices. Published by Elsevier B.V.
Sharma, Gaurav K; Mahajan, Sonalika; Matura, Rakesh; Subramaniam, Saravanan; Mohapatra, Jajati K; Pattnaik, Bramhadev
2014-11-01
Differentiation of Foot-and-Mouth Disease (FMD) infected from vaccinated animals is essential for effective implementation of a vaccination-based control programme. Detection of antibodies against the 3ABC non-structural protein of FMD virus by immunodiagnostic assays provides a reliable indication of FMD infection. Sero-monitoring of FMD in a large country like India, where thousands of serum samples are screened annually, is a major task. Monoclonal or polyclonal antibodies are currently widely used in these immunodiagnostic assays; considering the large livestock population of the country, an economical and replenishable alternative to these antibodies was required. In this study, a specific single-chain variable fragment (scFv) antibody against the 3B region of the 3ABC poly-protein was developed. A high level of scFv expression in an Escherichia coli system was obtained by careful optimization in four different strains. Two enzyme immunoassay formats (sandwich and competitive ELISAs) were optimized using the scFv, with the objective of differentiating FMD-infected animals within the vaccinated population. The assays were statistically validated by testing 2150 serum samples. Diagnostic sensitivities/specificities of the sandwich and competitive ELISAs were determined by the ROC method as 92.2%/95.5% and 89.5%/93.5%, respectively. This study demonstrated that scFv is a suitable alternative for large-scale immunodiagnosis of FMD. Copyright © 2014 The International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.
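Diagnostic sensitivity and specificity of the kind reported here reduce to simple ratios over a 2×2 validation table. The counts below are hypothetical, chosen only so the ratios match the sandwich-ELISA figures; the study's actual counts come from its 2150-sample ROC validation.

```python
# Sensitivity/specificity from a 2x2 validation table. The counts are
# hypothetical (chosen to reproduce the reported 92.2%/95.5%); the
# study derived its values from 2150 field sera via ROC analysis.

def se_sp(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true-positive rate among infected
    specificity = tn / (tn + fp)   # true-negative rate among uninfected
    return round(sensitivity, 3), round(specificity, 3)

se, sp = se_sp(tp=461, fn=39, tn=1528, fp=72)  # hypothetical counts
print(se, sp)
```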
Optimizing liquid effluent monitoring at a large nuclear complex.
Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M
2003-12-01
Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US dollars 223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
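One hedged way to reproduce a "one in a million" exceedance estimate is to assume effluent concentrations are lognormally distributed and evaluate the upper tail at the permit limit; the distributional choice and all numbers below are illustrative assumptions, not the paper's exact statistical model.

```python
# Exceedance-probability sketch assuming lognormal effluent
# concentrations (an illustrative assumption, not the paper's model).
import math

def prob_exceed(limit, mean_log, sd_log):
    # P(X > limit) for lognormal X: 1 - Phi((ln(limit) - mu) / sigma)
    z = (math.log(limit) - mean_log) / sd_log
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical baseline: geometric mean 1.0 units, log-sd 0.5, limit 12
p = prob_exceed(limit=12.0, mean_log=0.0, sd_log=0.5)
print(f"{p:.2e}")
```

With these hypothetical parameters the tail probability lands in the ~10^-7 range, the same order as the "one in a million" result quoted above.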
Johnson, Matthew J.; Holmes, Jennifer A.; Calvo, Christopher; Samuels, Ivan; Krantz, Stefani; Sogge, Mark K.
2007-01-01
Executive Summary: This 2006 annual report details the first season of a 2-year study documenting western yellow-billed cuckoo (Coccyzus americanus occidentalis) distribution, abundance, and habitat use throughout the Lower Colorado River Multi-Species Conservation Plan boundary area. We conducted cuckoo surveys at 55 sites within 17 areas between 11 June and 13 September. The 243 visits across all sites yielded 180 yellow-billed cuckoo detections. Cuckoos were detected at 27 of the 55 sites, primarily at the Bill Williams River National Wildlife Refuge, AZ, sites (n = 117 detections) and the Grand Canyon National Park-Lake Mead National Recreation Area, AZ, delta sites (n = 29 detections). Cuckoos were also detected at the Gila River-Colorado River Confluence, AZ (n = 9), the Overton Wildlife Management Area, NV (n = 7), and Limitrophe Division North, AZ (n = 6); however, at these sites the numbers were much lower, and very few of these birds were considered to be paired or breeding. The greatest number of detections (n = 79) occurred during the second survey period (3-23 July). In 2006, we confirmed five breeding events, including one nesting observation and sightings of four juveniles; all confirmed breeding was at the Bill Williams River NWR and Grand Canyon NP-Lake Mead NRA delta sites. The breeding status of most of our detections was unknown; however, we observed 17 adult cuckoos carrying nest material or food, and 40 detections occurred while counter-calling was taking place in the same area during repeated surveys. We used playback recordings to survey for western yellow-billed cuckoos. Compared to simple point counts or surveys, this method increases the number of detections of this secretive, elusive species. It has long been suspected that cuckoos have a fairly low response rate, and that the standard survey method of using playback recordings may fail to detect all birds present in an area. 
In 2006, we found that the majority (72%) of cuckoo detections were solicited through playback at all study sites. The number of solicited detections peaked during the first half of July and then declined as the breeding season progressed, while the number of unsolicited detections (cuckoos heard calling before playback was initiated) remained fairly constant. The majority (64%) of cuckoo detections, solicited or unsolicited, were aural; 27 percent were both heard and seen, and nine percent were visual detections only. Cuckoos in areas with the largest populations had the highest rate of vocalizations before playback or after the first broadcast. In contrast, more than half the responses at sites with fewer cuckoos (< 10 detections per site) first occurred after three or more playback recordings. This type of baseline information will be used to help refine the survey protocol for 2007, and to create hypotheses that can serve as the foundation for a full-scale evaluation and optimization of this survey technique. Our preliminary analysis of vegetation data from occupied and unoccupied sites in 2006 focused on general patterns in the distribution and abundance of woody species. The density and composition of woody riparian vegetation varied considerably among the study areas. Much of the variation in tree density was due to patterns of abundance of trees in the smallest size class (< 8 cm dbh). The dominant tree species at the cuckoo survey sites were cottonwood, willow, and tamarisk. Tamarisk was the most common tree, due to the abundance of small (< 8 cm dbh) individuals. When occupied and unoccupied sites were compared, occupied sites tended to have higher average canopy cover, attributable to higher average cover of the mid and low canopy. The dominant canopy at occupied sites most often consisted of cottonwood or willow trees. 
In addition, occupied sites in most areas had lower than average total tree density whereas unoccupied sites were denser than average. When densities of trees in different size classes were com
Detection of proximal caries using digital radiographic systems with different resolutions.
Nikneshan, Sima; Abbas, Fatemeh Mashhadi; Sabbagh, Sedigheh
2015-01-01
Dental radiography is an important tool for detection of caries, and digital radiography is the latest advancement in this regard. Spatial resolution is a characteristic of digital receptors used for describing image quality. This diagnostic accuracy study compared two digital radiographic systems at three different resolutions for detection of noncavitated proximal caries. Seventy premolar teeth were mounted in 14 gypsum blocks. Digora Optime and RVG Access were used for obtaining digital radiographs. Six observers evaluated the proximal surfaces in radiographs at each resolution to determine the depth of caries based on a 4-point scale. The teeth were then histologically sectioned, and the results of histologic analysis were considered the gold standard. Data were entered into SPSS version 18 software, and the Kruskal-Wallis test was used for analysis; P < 0.05 was considered statistically significant. No significant difference was found between the resolutions for detection of proximal caries (P > 0.05). The RVG Access system had the highest specificity (87.7%), and Digora Optime at high resolution had the lowest (84.2%). Furthermore, Digora Optime had higher sensitivity for detection of caries extending beyond the outer half of enamel. Judgments of oral radiologists on caries depth were more reliable than those of restorative dentistry specialists. The three resolutions of Digora Optime and RVG Access had similar accuracy in detection of noncavitated proximal caries.
Wang, Ruiping; Jiang, Yonggen; Guo, Xiaoqin; Wu, Yiling; Zhao, Genming
2017-01-01
Objective: The Chinese Center for Disease Control and Prevention developed the China Infectious Disease Automated-alert and Response System (CIDARS) in 2008. The CIDARS can detect outbreak signals in a timely manner but generates many false-positive signals, especially for diseases with seasonality. We assessed the influence of seasonality on infectious disease outbreak detection performance. Methods: Chickenpox surveillance data in Songjiang District, Shanghai were used. Optimized early alert thresholds for chickenpox were selected according to three algorithm evaluation indexes: sensitivity (Se), false alarm rate (FAR), and time to detection (TTD). The performance of the selected thresholds was assessed using data external to the study period. Results: The optimized early alert threshold for chickenpox during the epidemic season was the percentile P65, which demonstrated an Se of 93.33%, a FAR of 0%, and a TTD of 0 days. The optimized early alert threshold in the nonepidemic season was P50, demonstrating an Se of 100%, a FAR of 18.94%, and a TTD of 2.5 days. The performance evaluation demonstrated that using an optimized threshold adjusted for seasonality could reduce the FAR and shorten the TTD. Conclusions: Selection of optimized early alert thresholds based on local infectious disease seasonality could improve the performance of the CIDARS. PMID:28728470
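A percentile-based alert threshold of the sort CIDARS uses can be sketched directly: the threshold is a chosen percentile of historical counts, and a signal fires when a current count exceeds it. The weekly counts, the P65 choice, and the nearest-rank percentile rule below are illustrative simplifications of the system's actual algorithms.

```python
# Percentile-threshold alerting sketch (CIDARS-style). The historical
# counts and the nearest-rank percentile rule are illustrative only.

def percentile(values, p):
    # Simple nearest-rank percentile over sorted historical counts.
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

history = [3, 5, 8, 2, 7, 9, 4, 6, 10, 5]     # hypothetical weekly counts
threshold = percentile(history, 65)           # epidemic-season P65
alerts = [wk for wk in [4, 11, 6, 13] if wk > threshold]  # current weeks
print(threshold, alerts)
```

Raising the percentile (e.g., P50 to P65) raises the threshold, trading fewer false alarms against slower detection, which is exactly the Se/FAR/TTD trade-off evaluated above.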
Bagdonienė, Indrė; Baležentienė, Ligita
2013-01-01
Experimental data were used to model the optimal cowshed temperature environment on a laboratory test bench by a mass-flow method. The principal factor driving exponential growth of ammonia emission was increasing air and manure surface temperature. As manure temperature increased from 4°C to 30°C, ammonia emission grew fourfold, from 102 to 430 mg m−2h−1. Particular risk emerges when temperature exceeds 20°C: each 1°C increase raises the intensity of ammonia emission by 17 mg m−2h−1. The temperatures of the air and the manure surface, as well as those of its layers, are important when analysing emission processes from manure, since they affect the processes occurring on the manure surface, namely dehydration and crust formation. To reduce ammonia emission from a cowshed, it is important to optimize inner temperature control and to manage air circulation, especially at higher temperatures, preventing warm ambient air from blowing directly onto the manure. A decrease in mean annual temperature of 1°C would reduce annual ammonia emission by some 5.0%. The air temperature in the barns ranged between −15°C and 30°C. The highest mean annual temperature (14.6°C) and ammonia emission (218 mg m−2h−1) were observed in the semideep cowshed. PMID:24453912
Lean methodology in i.v. medication processes in a children's hospital.
L'Hommedieu, Timothy; Kappeler, Karl
2010-12-15
The impact of lean methodology in i.v. medication processes in a children's hospital was studied. Medication orders at a children's hospital were analyzed for 30 days to identify the specific times when most medications were changed or discontinued. Value-stream mapping was used to define the current state of preparation and identify non-value-added tasks in the i.v. medication preparation and dispensing processes. An optimization model was created using specific measurements to establish the optimal number of batches and their preparation times. Returned i.v. medications were collected for 7 days before and after implementation of the lean process to determine the impact of the process changes. Patient-days increased from 1,836 during the first collection period to 2,017 during the second, and the total number of i.v. doses dispensed increased from 8,054 to 9,907. Wasted i.v. doses decreased from 1,339 (16.6% of total doses dispensed) to 853 (8.6%). With the new process, Nationwide Children's Hospital was projected to realize a weekly savings of $8,197 ($426,244 annually), a 2.6% reduction in annual drug expenditure. The annual savings is a conservative estimate, given the 10% increase in patient-days after the lean collection period compared with baseline. The differences in wasted doses and their costs were significant (p < 0.05). Implementing lean concepts in the i.v. medication preparation process had a positive effect on efficiency and drug cost.
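The reported percentages and the annualized savings follow directly from the figures in the abstract:

```python
# Arithmetic behind the reported waste percentages and annual savings,
# using the counts stated in the abstract.
before_wasted, before_total = 1339, 8054
after_wasted, after_total = 853, 9907

pct_before = round(100 * before_wasted / before_total, 1)  # pre-lean waste %
pct_after = round(100 * after_wasted / after_total, 1)     # post-lean waste %

weekly_savings = 8197
annual_savings = weekly_savings * 52                        # 52 weeks/year
print(pct_before, pct_after, annual_savings)
```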
Frequencies of decision making and monitoring in adaptive resource management
Johnson, Fred A.
2017-01-01
Adaptive management involves learning-oriented decision making in the presence of uncertainty about the responses of a resource system to management. It is implemented through an iterative sequence of decision making, monitoring and assessment of system responses, and incorporation of what is learned into future decision making. Decision making at each point is informed by a value or objective function, for example, total harvest anticipated over some time frame. The value function expresses the value associated with decisions, and it is influenced by system status as updated through monitoring. Often, decision making follows shortly after a monitoring event. However, it is certainly possible for the cadence of decision making to differ from that of monitoring. In this paper we consider different combinations of annual and biennial decision making, along with annual and biennial monitoring. With biennial decision making, decisions are changed only every other year; with biennial monitoring, field data are collected only every other year. Different cadences of decision making combine with annual and biennial monitoring to define 4 scenarios. Under each scenario we describe optimal valuations for active and passive adaptive decision making. We highlight patterns in valuation among scenarios, depending on the occurrence of monitoring and decision making events. Differences between years are tied to the fact that every other year a new decision can be made no matter what the scenario, and state information is available to inform that decision. In the subsequent year, however, in 3 of the 4 scenarios either a decision is repeated or monitoring does not occur (or both). There are substantive differences in optimal values among the scenarios, as well as in the optimal policies producing those values. Especially noteworthy is the influence of monitoring cadence on valuation in some years. 
We highlight patterns in policy and valuation among the scenarios, and discuss management implications and extensions. PMID:28800591
Optimization of wind plant layouts using an adjoint approach
King, Ryan N.; Dykes, Katherine; Graf, Peter; ...
2017-03-10
Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
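The AEP objective referred to above is, at its core, a frequency-weighted sum of plant power over the wind-rose bins, scaled to a year. A minimal sketch with illustrative numbers and only 2×2 bins rather than the paper's 36 directions × 5 speed bins:

```python
# AEP as a frequency-weighted sum over wind-rose bins. Powers and bin
# frequencies are illustrative; a real evaluation would obtain each
# power entry from a flow-model solve for that inflow condition.
HOURS_PER_YEAR = 8760.0

def aep_mwh(power_mw, freq):
    # power_mw[d][s]: plant power for direction d, speed bin s (MW)
    # freq[d][s]: probability of that bin; probabilities sum to 1
    total = 0.0
    for p_row, f_row in zip(power_mw, freq):
        for p, f in zip(p_row, f_row):
            total += p * f
    return total * HOURS_PER_YEAR   # expected power (MW) * hours -> MWh

power = [[10.0, 40.0], [12.0, 50.0]]   # 2 directions x 2 speed bins
freq = [[0.3, 0.2], [0.4, 0.1]]
annual = aep_mwh(power, freq)
print(annual)
```

In the adjoint setting, each bin contributes one term to the objective, and the gradient of this weighted sum with respect to turbine positions is what the adjoint solves provide cheaply.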
Image processing occupancy sensor
Brackney, Larry J.
2016-09-27
A system and method of detecting occupants in a building automation system environment using image-based occupancy detection and position determination. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants and optimize energy usage, among other advantages.
Effects of roads on survival of San Clemente Island foxes
Snow, N.P.; Andelt, William F.; Stanley, T.R.; Resnik, J.R.; Munson, L.
2012-01-01
Roads generate a variety of influences on wildlife populations; however, little is known about the effects of roads on endemic wildlife on islands. Specifically, road-kills of island foxes (Urocyon littoralis) on San Clemente Island (SCI), Channel Islands, California, USA are a concern for resource managers. To determine the effects of roads on island foxes, we radiocollared foxes using a 3-tiered sampling design to represent the entire population in the study area, a sub-population near roads, and a sub-population away from roads on SCI. We examined annual survival rates using nest-survival models, causes of mortality, and movements for each sample. We found the population had high annual survival (0.90), although survival declined with use of road habitat, particularly for intermediate-aged foxes. Foxes living near roads suffered lower annual survival (0.76), resulting from high frequencies of road-kills (7 of 11 mortalities). Foxes living away from roads had the highest annual survival (0.97). Road-kill was the most prominent cause of mortality detected on SCI, and we estimated it killed 3-8% of the population in the study area annually. Based on movements, we were unable to detect any responses by foxes that minimized their risks from roads. The probability of road-kill increased with use of the road habitat, volume of traffic, and decreasing road sinuosity. We recommend that managers attempt to reduce road-kills by deterring or excluding foxes from entering roads, and by encouraging motorists to be vigilant for foxes. © 2011 The Wildlife Society.
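Annual survival estimates like those above are typically built from daily survival rates fitted by the nest-survival model. A minimal sketch of the conversion, with daily rates back-calculated from the reported annual figures purely for illustration:

```python
# Daily <-> annual survival conversion used with nest-survival-style
# models. The daily rates here are back-calculated from the reported
# annual survival (0.76 near roads, 0.97 away) for illustration only.

def annual_survival(daily_rate, days=365):
    # Surviving the year = surviving every day independently.
    return daily_rate ** days

def daily_from_annual(annual, days=365):
    return annual ** (1.0 / days)

near_daily = daily_from_annual(0.76)   # near-road foxes
away_daily = daily_from_annual(0.97)   # away-from-road foxes
print(round(near_daily, 5), round(away_daily, 5))
```

Both daily rates exceed 0.999, which is why even a small difference in daily survival compounds into the large annual gap (0.76 vs. 0.97) reported above.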
NASA Astrophysics Data System (ADS)
Ashjian, C. J.; Okkonen, S. R.; Campbell, R. G.; Alatalo, P.
2014-12-01
Late summer physical and biological conditions along a 37-km transect crossing Barrow Canyon have been described for the past ten years as part of an ongoing program, supported by multiple funding sources including the NSF AON, focusing on inter-annual variability and the formation of a bowhead whale feeding hotspot near Barrow. These repeated transects (at least two per year, separated in time by days to weeks) provide an opportunity to assess inter-annual and shorter-term (days to weeks) changes in hydrographic structure, ocean temperature, current velocity and transport, chlorophyll fluorescence, nutrients, and micro- and mesozooplankton community composition and abundance. Inter-annual variability in all properties was high and was associated with larger-scale meteorological forcing. Shorter-term variability could also be high but was strongly influenced by changes in local wind forcing. The sustained sampling at this location provides critical measures of inter-annual variability that should permit detection of longer-term trends associated with ongoing climate change.
NASA Astrophysics Data System (ADS)
Zhou, Sheng; Han, Yanling; Li, Bincheng
2018-02-01
Nitric oxide (NO) in exhaled breath has gained increasing interest in recent years, driven mainly by the clinical need to monitor inflammatory status in respiratory disorders such as asthma and other pulmonary conditions. Mid-infrared cavity ring-down spectroscopy (CRDS) using an external-cavity, widely tunable continuous-wave quantum cascade laser operating at 5.3 µm was employed for NO detection. The detection pressure was reduced in steps to improve the sensitivity, and the optimal pressure was determined to be 15 kPa based on fitting-residual analysis of the measured absorption spectra. A detection limit (1σ, one standard deviation) of 0.41 ppb was experimentally achieved for NO detection in human breath under the optimized conditions, with a total acquisition time of 60 s (2 s per data point). A diurnal measurement session was conducted for exhaled NO. The experimental results indicate that the mid-infrared CRDS technique has great potential for various applications in health diagnosis.
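A 1σ detection limit of this kind is simply the standard deviation of repeated baseline (zero-gas) concentration retrievals over the stated acquisition time (here, 2 s per point, so 60 s gives 30 points). The synthetic readings below stand in for real CRDS retrievals:

```python
# 1-sigma detection limit as the sample standard deviation of repeated
# baseline concentration readings. The readings are synthetic stand-ins
# for real CRDS retrievals, used only to illustrate the estimator.
import statistics

baseline_ppb = [0.3, -0.5, 0.4, -0.2, 0.6, -0.4, 0.1, -0.3, 0.5, -0.1]
lod = statistics.stdev(baseline_ppb)   # sample standard deviation
print(round(lod, 2))
```

Averaging more points (a longer acquisition) generally shrinks this scatter until drift dominates, which is why the detection limit is quoted together with the 60 s acquisition time.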
Joint Improvised Explosive Device Defeat Organization. Annual Report FY 2009
2009-01-01
incident, IEDs were 14 percent more effective in FY 2009 over FY 2008. Coupled with the significant corresponding increase in IED incidents, this...Testing and Evaluation (RDT&E), procurement, and sustainment of 69 systems. Detect Air Detect Air systems enable the warfighter to detect insurgent...oriented centers of excellence (COE): the Army COE at Fort Irwin; the Navy COE at Indian Head, Maryland; the Air Force COE at Lackland Air Force
NASA Astrophysics Data System (ADS)
Lu, M.; Lall, U.
2013-12-01
In order to mitigate the impacts of climate change, proactive management strategies to operate reservoirs and dams are needed. A multi-time-scale, climate-informed stochastic model is developed to optimize the operations of a multi-purpose single reservoir by simulating decadal, interannual, seasonal and sub-seasonal variability. We apply the model to a setting motivated by the largest multi-purpose dam in northern India, the Bhakhra reservoir on the Sutlej River, a tributary of the Indus. This leads to a focus on the timing and amplitude of the flows for the monsoon and snowmelt periods. The flow simulations are constrained by multiple sources of historical data and GCM future projections being developed through an NSF-funded project titled 'Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoon Asia'. The model presented is a multilevel, nonlinear programming model that aims to optimize the reservoir operating policy on a decadal horizon and the operation strategy on an annually updated basis. The model is hierarchical, in that two optimization models designated for different time scales are nested like matryoshka dolls. The two optimization models have similar mathematical formulations, with some modifications to meet the constraints within each time frame. The first level of the model provides an optimization solution for policy makers to determine contracted annual releases to different uses with a prescribed reliability; the second level is a within-the-period (e.g., yearly) operation optimization scheme that allocates the contracted annual releases on a subperiod (e.g., monthly) basis, with additional benefit for extra release and a penalty for failure. The model maximizes the net benefit of irrigation, hydropower generation and flood control in each of the periods. The model design thus facilitates the consistent application of weather and climate forecasts to improve the operation of reservoir systems.
The decadal flow simulations are re-initialized every year with updated climate projections to improve the reliability of the operation rules for the next year, within which the seasonal operation strategies are nested. The multi-level structure can be repeated for monthly operation with weekly subperiods to take advantage of evolving weather forecasts and seasonal climate forecasts. As a result of the hierarchical structure, updates and adjustments at sub-seasonal and even weather time scales can be achieved. Given an ensemble of these scenarios, the McISH reservoir simulation-optimization model is able to derive the desired reservoir storage levels, including minimum and maximum, as a function of calendar date, together with the associated release patterns. The multi-time-scale approach allows adaptive management of water supplies that acknowledges the changing risks, meeting the objectives over the decade in expected value while controlling near-term and planning-period risk through probabilistic reliability constraints. For the applications presented, the target season is the monsoon season from June to September. The model also includes a monthly flood-volume forecast model, based on a copula density fitted to the monthly flow and the flood volume. This is used to guide dynamic allocation of the flood control volume given the forecasts.
Penny, Christian; Grothendick, Beau; Zhang, Lin; Borror, Connie M.; Barbano, Duane; Cornelius, Angela J.; Gilpin, Brent J.; Fagerquist, Clifton K.; Zaragoza, William J.; Jay-Russell, Michele T.; Lastovica, Albert J.; Ragimbeau, Catherine; Cauchie, Henry-Michel; Sandrin, Todd R.
2016-01-01
MALDI-TOF MS has been utilized as a reliable and rapid tool for microbial fingerprinting at the genus and species levels. Recently, there has been keen interest in using MALDI-TOF MS beyond the genus and species levels to rapidly identify antibiotic-resistant strains of bacteria. The purpose of this study was to enhance strain-level resolution for Campylobacter jejuni through the optimization of spectrum processing parameters using a series of designed experiments. A collection of 172 strains of C. jejuni was assembled from Luxembourg, New Zealand, North America, and South Africa, consisting of four groups of antibiotic-resistant isolates: (1) 65 strains resistant to cefoperazone, (2) 26 resistant to cefoperazone and beta-lactams, (3) 5 strains resistant to cefoperazone, beta-lactams, and tetracycline, and (4) 76 strains resistant to cefoperazone, teicoplanin, amphotericin B, and cephalothin. Initially, a model set of 16 strains of C. jejuni (three biological replicates and three technical replicates per isolate, yielding a total of 144 spectra) was subjected to each designed experiment to enhance detection of antibiotic resistance. The most optimal parameters were applied to the larger collection of 172 isolates (two biological replicates and three technical replicates per isolate, yielding a total of 1,031 spectra). We observed an increase in antibiotic resistance detection whenever a curve-based similarity coefficient (Pearson or ranked Pearson) was applied rather than a peak-based one (Dice), and/or the optimized preprocessing parameters were applied. Increases in antimicrobial resistance detection were scored using the jackknife maximum similarity technique following cluster analysis. For the four groups of antibiotic-resistant isolates, the optimized preprocessing parameters increased detection, respective to the aforementioned groups, by: (1) 5%, (2) 9%, (3) 10%, and (4) 2%.
A second categorization was created from the collection, consisting of 31 strains resistant to beta-lactams and 141 strains sensitive to beta-lactams. Applying the optimal preprocessing parameters, beta-lactam resistance detection was increased by 34%. These results suggest that spectrum processing parameters, which are rarely optimized or adjusted, affect the performance of MALDI-TOF MS-based detection of antibiotic resistance and can be fine-tuned to enhance screening performance. PMID:27303397
BMI and BMI SDS in childhood: annual increments and conditional change.
Brannsether, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Júlíusson, Pétur Benedikt
2017-02-01
Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood, and to explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m²) and BMI SDS are summarised by percentiles. The effects of sex, age, height, weight, initial BMI and weight status on the BMI and BMI SDS increments were assessed with multiple linear regression. Conditional change in BMI SDS was based on the correlation between annual BMI measurements converted to SDS. Results: BMI increments depended significantly on sex, height, weight and initial BMI. Changes in BMI SDS depended significantly only on the initial BMI SDS. The distribution of conditional change in BMI SDS using a two-correlation model was close to normal (mean = 0.11, SD = 1.02, n = 1167), with 3.2% (2.3-4.4%) of the observations below -2 SD and 2.8% (2.0-4.0%) above +2 SD. Conclusion: Conditional change in BMI SDS can be used to detect unexpectedly large changes in BMI SDS. Although this method requires the use of a computer, it may be clinically useful for detecting aberrant weight development.
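The conditional-change idea — judging the second measurement against what the first predicts after regression to the mean — can be sketched as a standardised residual (the correlation value below is a placeholder, not the one fitted in the study):

```python
import math

def conditional_change_sds(sds1, sds2, r):
    """Conditional change: standardised residual of the second BMI SDS
    given the first, using the correlation r between annual measurements."""
    return (sds2 - r * sds1) / math.sqrt(1.0 - r * r)

# A child who stays at +1 SDS shows only a small conditional change ...
stable = conditional_change_sds(1.0, 1.0, r=0.9)
# ... while a jump from 0 to +2 SDS exceeds a +2 flag threshold.
jump = conditional_change_sds(0.0, 2.0, r=0.9)
```

This is why the method flags unexpected changes rather than absolute BMI SDS: a tracking child stays near zero conditional change even at a high SDS level.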
Ho, Karen S; Twede, Hope; Vanzo, Rena; Harward, Erin; Hensel, Charles H; Martin, Megan M; Page, Stephanie; Peiffer, Andreas; Mowery-Rushton, Patricia; Serrano, Moises; Wassman, E Robert
2016-01-01
Copy number variants (CNVs) as detected by chromosomal microarray analysis (CMA) significantly contribute to the etiology of neurodevelopmental disorders, such as developmental delay (DD), intellectual disability (ID), and autism spectrum disorder (ASD). This study summarizes the results of 3.5 years of CMA testing by a CLIA-certified clinical testing laboratory: 5,487 patients with neurodevelopmental conditions were clinically evaluated for rare copy number variants using a 2.8-million-probe custom CMA optimized for the detection of CNVs associated with neurodevelopmental disorders. We report an overall detection rate of 29.4% in our neurodevelopmental cohort, which rises to nearly 33% when cases with DD/ID and/or MCA only are considered. The detection rate for the ASD cohort is also significant, at 25%. Additionally, we find that the detection rate and pathogenic yield of CMA vary significantly depending on the primary indications for testing, the age of the individuals tested, and the specialty of the ordering doctor. We also report a significant difference between the detection rate of the ultrahigh-resolution optimized array and that of the array from which it originated. This increase in detection can contribute significantly to the efficient and effective medical management of neurodevelopmental conditions in the clinic.
Economopoulou, M A; Economopoulou, A A; Economopoulos, A P
2013-11-01
The paper describes a software system capable of formulating alternative optimal Municipal Solid Waste (MSW) management plans, each of which meets a set of constraints that may reflect selected objections and/or wishes of local communities. The objective function to be minimized in each plan is the sum of the annualized capital investment and the annual operating cost of all transportation, treatment and final disposal operations involved, taking into consideration possible income from the sale of products and any other financial incentives or disincentives that may exist. For each plan formulated, the system generates several reports that define the plan, analyze its cost elements and yield an indicative profile of selected types of installations, as well as data files that facilitate the geographic representation of the optimal solution in maps through the use of GIS. A number of these reports compare the technical and economic data from all scenarios considered at the study-area, municipality and installation levels, constituting in effect a sensitivity analysis. The generation of alternative plans offers local authorities the opportunity of choice, and the results of the sensitivity analysis allow them to choose wisely and with consensus. The paper also presents an application of this software system in the Region of Attica, the capital region of Greece, for the purpose of developing an optimal waste transportation system in line with its approved waste management plan.
The formulated plan was able to: (a) serve 113 Municipalities and Communities that generate nearly 2 million t/y of commingled MSW with distinctly different waste collection patterns, (b) take into consideration several existing waste transfer stations (WTS) and optimize their use within the overall plan, (c) select the most appropriate sites among the potentially suitable (new and in-use) ones, (d) generate the optimal profile of each WTS proposed, and (e) perform sensitivity analysis so as to define the impact of selected sets of constraints (limitations in the availability of sites and in the capacity of their installations) on the design and cost of the ensuing optimal waste transfer system. The results show that optimal planning offers significant economic savings to municipalities, while at the same time reducing the present levels of traffic, fuel consumption and air emissions in the congested Athens basin. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hishikawa, Yoshitaka; An, Shucai; Yamamoto-Fukuda, Tomomi; Shibata, Yasuaki; Koji, Takehiko
2009-01-01
In situ polymerase chain reaction (in situ PCR), which can detect a few copies of genes within a cell by amplifying the target gene, was developed to better understand the biological functions of tissues. In this study, we optimized the protocol conditions for the detection of X chromosome-linked phosphoglycerate kinase-1 (pgk-1) gene in paraffin-embedded sections of mouse reproductive organs. The effects of various concentrations of proteinase K (PK) and PCR cycle numbers were examined. To label the amplified DNA, we used digoxigenin-dUTP (Dig), Cy-3-dUTP (Cy-3), or FluorX-dCTP (FluorX). The optimal concentration of PK was 50 µg/ml for the ovary and 10 µg/ml for the testis. Ten PCR cycles were optimal for Dig and 25 cycles were optimal for FluorX and Cy-3 in the ovary and testis. The signal-to-noise ratio of FluorX and Cy-3 for ovarian tissue was better than that of Dig. Using the above conditions, we detected 1–4 and 1–2 spots of pgk-1 in the nuclei of granulosa and germ cells, respectively. Our results indicate that in situ PCR is useful for detecting a specific gene in paraffin-embedded sections under optimized conditions of both PCR cycle number and PK concentration. PMID:19492023
An aquatic macroinvertebrate monitoring program is suggested for 'early warning' detection of toxic discharges to streams in oil shale development areas. Changes in stream biota are used to signal need for increasing levels of chemical analyses to identify and quantify toxic poll...
Aerial Detection, Ground Evaluation, and Monitoring of the Southern Pine Beetle: State Perspectives
Ronald F. Billings
2011-01-01
The southern pine beetle (SPB), is recognized as the most serious insect pest of southern pine forests. Outbreaks occur almost every year somewhere within its wide range, requiring intensive suppression efforts to minimize resource losses to Federal, State, and private forests. Effective management involves annual monitoring of SPB populations and aerial detection and...
40 CFR 63.7917 - What are my inspection and monitoring requirements for transfer systems?
Code of Federal Regulations, 2010 CFR
2010-07-01
... annually inspect the unburied portion of pipeline and all joints for leaks and other defects. In the event that a defect is detected, you must repair the leak or defect according to the requirements of... days after detection and repair shall be completed as soon as possible but no later than 45 calendar...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Jiang, Huaiguang; Tan, Jin
This paper proposes an event-driven approach for reconfiguring distribution systems automatically. Specifically, an optimal synchrophasor sensor placement (OSSP) is used to reduce the number of synchrophasor sensors while keeping the whole system observable. Then, a wavelet-based event detection and location approach is used to detect and locate the event, which acts as a trigger for network reconfiguration. With the detected information, the system is then reconfigured using the hierarchical decentralized approach to seek the new optimal topology. In this manner, whenever an event happens, the distribution network can be reconfigured automatically based on real-time information that is observable and detectable.
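The wavelet trigger in such a scheme can be illustrated with a first-level Haar detail transform: an abrupt change in a measured waveform produces a large detail coefficient at the event location. This is a generic sketch of the idea, not the authors' detector:

```python
def haar_details(x):
    """First-level Haar detail coefficients of an even-length signal."""
    return [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]

def locate_event(x, threshold):
    """Return the sample index of the largest detail coefficient if it
    exceeds the threshold, otherwise None (no trigger)."""
    d = haar_details(x)
    peak = max(range(len(d)), key=lambda i: abs(d[i]))
    return 2 * peak if abs(d[peak]) > threshold else None
```

A flat measurement containing a transient at sample 12 is localised there, while a quiet signal produces no trigger, which is the property the reconfiguration logic relies on.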
Lee, Junghoon; Lee, Joosung; Song, Sangha; Lee, Hyunsook; Lee, Kyoungjoung; Yoon, Youngro
2008-01-01
Automatic detection of suspicious pain regions is very useful in medical digital infrared thermal imaging research. To detect those regions, we use the SOFES (Survival Of the Fittest kind of Evolution Strategy) algorithm, which is one of the multimodal function optimization methods. We apply this algorithm to well-known conditions such as the diabetic foot, degenerative arthritis and varicose veins. The SOFES algorithm can detect hot spots, and warm lines such as veins. Over a hundred trials, the algorithm converged very quickly.
Image gathering and processing - Information and fidelity
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.
1985-01-01
In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
The optimal community detection of software based on complex networks
NASA Astrophysics Data System (ADS)
Huang, Guoyan; Zhang, Peng; Zhang, Bing; Yin, Tengteng; Ren, Jiadong
2016-02-01
The community structure is important for software in terms of understanding design patterns and controlling the development and maintenance process. In order to detect the optimal community structure in a software network, a method called Optimal Partition Software Network (OPSN) is proposed based on the dependency relationships among software functions. First, by analyzing the information of multiple execution traces of one software system, we construct a Software Execution Dependency Network (SEDN). Second, based on the relationships among the function nodes in the network, we define Fault Accumulation (FA) to measure the importance of each function node and sort the nodes by the measured results. Third, we select the top K (K = 1, 2, …) nodes as the cores of the initial communities (each community has exactly one core node). By comparing the dependency relationships between each node and the K communities, we put the node into the existing community with which it has the closest relationship. Finally, we calculate the modularity for different initial K to obtain the optimal division. Experiments verify that OPSN efficiently detects the optimal community structure in various software systems.
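Modularity, the quantity compared across values of K in this method, can be computed directly from Newman's definition. A small self-contained version for unweighted, undirected graphs (a generic sketch, not the authors' code):

```python
def modularity(adj, communities):
    """Newman modularity Q of a partition of an undirected graph.
    adj: dict mapping node -> set of neighbours.
    communities: list of disjoint node sets covering all nodes."""
    two_m = sum(len(nbrs) for nbrs in adj.values())  # each edge counted twice
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    label = {v: c for c, group in enumerate(communities) for v in group}
    q = 0.0
    for i in adj:
        for j in adj:
            if label[i] == label[j]:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - deg[i] * deg[j] / two_m
    return q / two_m
```

For example, two triangles joined by a single edge score Q = 5/14 ≈ 0.357 when split into the two triangles, and Q = 0 when left as one community — exactly the behaviour a modularity-maximising division exploits.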
Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H
2003-01-01
Background Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases. Results Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present. Conclusion This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935
NASA Astrophysics Data System (ADS)
Lee, Daeho; Lee, Seohyung
2017-11-01
We propose an image stitching method that can remove ghost effects and realign the structure misalignments that occur in common image stitching methods. To reduce the artifacts caused by different parallaxes, an optimal seam pair is selected by comparing the cross correlations from multiple seams detected by variable cost weights. Along the optimal seam pair, a histogram of oriented gradients is calculated, and feature points for matching are detected. The homography is refined using the matching points, and the remaining misalignment is eliminated using the propagation of deformation vectors calculated from matching points. In multiband blending, the overlapping regions are determined from a distance between the matching points to remove overlapping artifacts. The experimental results show that the proposed method more robustly eliminates misalignments and overlapping artifacts than the existing method that uses single seam detection and gradient features.
Effect of processing on the detectability of peanut protein by ELISA.
Iqbal, Amjad; Ateeq, Nadia
2013-12-01
Chicken IgY was used for the detection and quantification of peanut proteins by indirect competitive ELISA. The method was optimized using a checkerboard approach to determine the optimal concentrations of coating antigen, primary antibody and secondary antibody. Peanut protein could be detected in foods down to levels of 10 ppm. The effects of physical (heat treatment at 80 °C and 100 °C) and chemical (acid, alkali and reducing sugar) treatments on the IgY binding of peanut proteins were investigated. The optimized assay was relatively sensitive for roasted peanut proteins. However, the binding ability of chicken IgYs to peanut proteins was found to be significantly altered by denaturation and hydrolysis of the proteins. It was also observed that the effect of Maillard chemistry on the detectability of peanut protein was less pronounced at high temperatures than at low temperatures. Copyright © 2013 Elsevier Ltd. All rights reserved.
ECG Based Heart Arrhythmia Detection Using Wavelet Coherence and Bat Algorithm
NASA Astrophysics Data System (ADS)
Kora, Padmavathi; Sri Rama Krishna, K.
2016-12-01
Atrial fibrillation (AF) is a type of heart abnormality in which electrical discharges in the atrium become rapid, resulting in an abnormal heart beat. The morphology of the ECG changes due to abnormalities in the heart. This paper consists of three major steps for the detection of heart disease: signal pre-processing, feature extraction and classification. Feature extraction is the key process in detecting heart abnormality. Most ECG detection systems depend on time-domain features for cardiac signal classification. In this paper we propose a wavelet coherence (WTC) technique for ECG signal analysis. The WTC calculates the similarity between two waveforms in the frequency domain. Parameters extracted from the WTC function are used as the features of the ECG signal. These features are optimized using the Bat algorithm. A Levenberg-Marquardt neural network classifier is used to classify the optimized features. The performance of the classifier is improved with the optimized features.
SABRE: WIMP modulation detection in the northern and southern hemisphere
NASA Astrophysics Data System (ADS)
Froborg, F.
2016-05-01
Measuring an annual modulation in a direct Dark Matter detection experiment is not only a proof of the existence of WIMPs but can also tell us more about their interaction with standard matter and maybe even their density and velocity in the halo. Such a modulation has been measured by the DAMA/LIBRA experiment in NaI(Tl) crystals. However, its interpretation as a WIMP signal is controversial due to contradicting results from other experiments. The SABRE experiment aims to shed light on this controversy by detecting the annual modulation in the same target material as DAMA with twin detectors at LNGS in Italy and at SUPL in Australia. The two locations, in the northern and southern hemispheres, make it possible to verify whether other seasonal effects or the site itself influence the measurement, thus reducing systematic effects. This paper gives an overview of the experimental design, the current status of the proof-of-principle phase mainly devoted to high-purity crystal growing, and an outlook on future plans.
Parsley, M.J.; Kofoot, P.
2006-01-01
River discharge and water temperatures that occurred during April through July 2004 provided conditions suitable for spawning by white sturgeon downstream from Bonneville, The Dalles, John Day, and McNary dams. Optimal spawning temperatures in the four tailraces occurred for 3-4 weeks and coincided with the peak of the river hydrograph. However, the peak of the hydrograph was relatively low compared to past years, which is reflected in the relatively low monthly and annual indices of suitable spawning habitat. Bottom-trawl sampling in the Bonneville Reservoir revealed the presence of young-of-the-year (YOY) white sturgeon.
Proceedings of the 8th Annual Summer Conference: NASA/USRA Advanced Design Program
NASA Technical Reports Server (NTRS)
1992-01-01
Papers presented at the 8th Annual Summer Conference are categorized as Space Projects and Aeronautics projects. Topics covered include: Systematic Propulsion Optimization Tools (SPOT), Assured Crew Return Vehicle Post Landing Configuration Design and Test, Autonomous Support for Microorganism Research in Space, Bioregenerative System Components for Microgravity, The Extended Mission Rover (EMR), Planetary Surface Exploration MESUR/Autonomous Lunar Rover, Automation of Closed Environments in Space for Human Comfort and Safety, Walking Robot Design, Extraterrestrial Surface Propulsion Systems, The Design of Four Hypersonic Reconnaissance Aircraft, Design of a Refueling Tanker Delivering Liquid Hydrogen, The Design of a Long-Range Megatransport Aircraft, and Solar Powered Multipurpose Remotely Powered Aircraft.
2017-07-01
Envelope, Interoperable Shaped Offset QPSK (SOQPSK) Waveform for Improved Spectral Efficiency.” Paper presented during 36th Annual International...B20x0y2.pdf. 12 Mark Geoghegan. “Implementation and Performance Results for Trellis Detection of SOQPSK.” Paper presented at the 37th Annual...between h1 and h2 where h1 = 4/16, h2 = 5/16. For more information on the ARTM CPM waveform, please refer to 0 and to Geoghegan’s paper .14 2.3.3.4 Data
Chen, Huai; Zhu, Lijun; Wang, Jianzhong; Fan, Hongxia; Wang, Zhihuan
2017-07-01
This study focuses on detecting trends in annual runoff volume and sediment load in the Yangtze river-lake system. Time series of annual runoff volume and sediment load at 19 hydrological gauging stations for the period 1956-2013 were collected. Based on the Mann-Kendall test at the 1% significance level, annual sediment loads in the Yangtze River, the Dongting Lake and the Poyang Lake were found to have significantly decreasing trends. Power spectrum estimation indicated that predominant oscillations with periods of 8 and 20 years are embedded in the runoff volume series, probably related to the El Niño Southern Oscillation (2-7 years) and the Pacific Decadal Oscillation (20-30 years). Based on dominant components (capturing roughly 90% or more of the total energy) extracted by the proper orthogonal decomposition method, total change ratios (CRT) of runoff volume and sediment load during the last 58 years were evaluated. For sediment load, the mean CRT value in the Yangtze River is about -65%, and those in the Dongting Lake and the Poyang Lake are -92.2% and -87.9%, respectively. In particular, the CRT value of the sediment load in the channel inflow of the Dongting Lake is as low as -99.7%. The Three Gorges Dam has intercepted a large amount of sediment load and decreased the sediment load downstream.
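The Mann-Kendall statistic behind these trend detections is straightforward to compute. A minimal version without the tie correction (the full test adjusts the variance for tied values, which matters for real gauging records):

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns (S, Z).
    Z < -2.576 indicates a significant decreasing trend at the 1% level."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance under no-trend null
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)        # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

A strictly declining 10-value series gives S = -45 and Z ≈ -3.94, significant at the 1% level used in the study.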
Michielsen, Koen; Nuyts, Johan; Cockmartin, Lesley; Marshall, Nicholas; Bosmans, Hilde
2016-12-01
In this work, the authors design and validate a model observer that can detect groups of microcalcifications in a four-alternative forced-choice experiment and use it to optimize a smoothing prior for the detectability of microcalcifications. A channelized Hotelling observer (CHO) with eight Laguerre-Gauss channels was designed to detect groups of five microcalcifications in a background of acrylic spheres by adding the CHO log-likelihood ratios calculated at the expected locations of the five calcifications. This model observer is then applied to optimize the detectability of the microcalcifications as a function of the smoothing prior. The authors examine the quadratic and total variation (TV) priors, and a combination of both. A selection of these reconstructions was then evaluated by human observers to validate the correct working of the model observer. The authors found a clear maximum in the detectability of microcalcifications when using the total variation prior with weight β_TV = 35. Detectability varied only over a small range for the quadratic and combined quadratic-TV priors when the weight β_Q of the quadratic prior was changed by two orders of magnitude. Spearman correlation with human observers was good except for the highest value of β for the quadratic and TV priors. Excluding those, the authors found ρ = 0.93 when comparing detection fractions, and ρ = 0.86 for the fitted detection-threshold diameter. The authors successfully designed a model observer that was able to predict human performance over a large range of settings of the smoothing prior, except for the highest values of β, which were outside the useful range for good image quality. Since detectability depends only weakly on the strength of the combined prior, it is not possible to pick an optimal smoothness based on this criterion alone. On the other hand, such a choice can now be made based on other criteria without compromising calcification detectability.
On optimal infinite impulse response edge detection filters
NASA Technical Reports Server (NTRS)
Sarkar, Sudeep; Boyer, Kim L.
1991-01-01
The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
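The flavour of such a separable recursive implementation can be shown in one dimension: a symmetric exponential smoother built from one causal and one anti-causal first-order pass, whose difference acts as a smoothed derivative that peaks at an edge. This is a generic first-order sketch, not the authors' optimal filter:

```python
def recursive_smooth_and_gradient(x, b=0.5):
    """One forward and one backward first-order IIR pass (0 < b < 1).
    Returns (smoothed signal, gradient estimate); cost is O(n) regardless
    of the effective kernel width, the key property of recursive filtering."""
    n = len(x)
    fwd = [0.0] * n
    bwd = [0.0] * n
    fwd[0] = x[0]                               # constant boundary extension
    for i in range(1, n):                       # causal pass
        fwd[i] = (1 - b) * x[i] + b * fwd[i - 1]
    bwd[n - 1] = x[n - 1]
    for i in range(n - 2, -1, -1):              # anti-causal pass
        bwd[i] = (1 - b) * x[i] + b * bwd[i + 1]
    # Symmetric exponential smoothing: combine passes, remove the
    # double-counted centre sample, normalise so a constant is preserved.
    smooth = [(f + g - (1 - b) * xi) / (1 + b) for f, g, xi in zip(fwd, bwd, x)]
    grad = [g - f for f, g in zip(fwd, bwd)]    # anti-causal minus causal
    return smooth, grad
```

On a unit step the gradient attains its maximum right at the step, and a constant signal passes through the smoother unchanged; a 2-D edge detector applies such passes along the two orthogonal directions, as the abstract describes.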
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. Results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.
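The conventional (delay-and-sum) beamformer for a uniform linear array is the baseline that Capon and the projection methods in this review refine; a minimal angular scan can be sketched as follows (the array geometry and source bearing below are illustrative assumptions):

```python
import cmath
import math

def das_spectrum(snapshot, d_over_lambda, angles_deg):
    """Delay-and-sum angular power spectrum for one complex snapshot
    from a uniform linear array with element spacing d (in wavelengths)."""
    n = len(snapshot)
    powers = []
    for a in angles_deg:
        # Per-element phase shift for a plane wave from bearing a.
        phi = 2 * math.pi * d_over_lambda * math.sin(math.radians(a))
        y = sum(x * cmath.exp(-1j * phi * k) for k, x in enumerate(snapshot)) / n
        powers.append(abs(y) ** 2)
    return powers

# Noise-free plane wave from 20 degrees on an 8-element, half-wavelength array.
true_phi = 2 * math.pi * 0.5 * math.sin(math.radians(20.0))
snapshot = [cmath.exp(1j * true_phi * k) for k in range(8)]
spectrum = das_spectrum(snapshot, 0.5, range(-90, 91))
```

The scan peaks at the true bearing of 20°; adaptive methods such as Capon replace the fixed steering weights with data-dependent ones to suppress the sidelobes of strong interferers, which is where the normalization issues discussed in the review arise.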
Optimal pulse design for communication-oriented slow-light pulse detection.
Stenner, Michael D; Neifeld, Mark A
2008-01-21
We present techniques for designing pulses for linear slow-light delay systems which are optimal in the sense that they maximize the signal-to-noise ratio (SNR) and signal-to-noise-plus-interference ratio (SNIR) of the detected pulse energy. Given a communication model in which input pulses are created in a finite temporal window and output pulse energy is measured in a temporally offset output window, the SNIR-optimal pulses achieve typical improvements of 10 dB compared to traditional pulse shapes for a given output window offset. Alternatively, for fixed SNR or SNIR, the window offset (detection delay) can be increased by 0.3 times the window width. This approach also invites a communication-based model for delay and signal fidelity.
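Under a windowed linear-system model of this kind, the input pulse that maximizes the fraction of energy delivered into the output window is the principal right-singular vector of the system operator restricted to the two windows. A minimal numerical sketch, assuming a simple discrete impulse-response stand-in rather than the paper's slow-light medium:

```python
import numpy as np

def optimal_pulse(h, n, in_win, out_win):
    """Top right-singular vector of W_out @ H @ W_in maximizes the fraction of
    unit input energy delivered into the output window for a linear system h."""
    # Toeplitz (convolution) matrix of the impulse response h
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - len(h) + 1), i + 1):
            H[i, j] = h[i - j]
    A = H[np.ix_(out_win, in_win)]   # restrict to the two temporal windows
    _, s, vt = np.linalg.svd(A)
    pulse = np.zeros(n)
    pulse[in_win] = vt[0]
    return pulse, s[0] ** 2          # pulse and captured-energy fraction
```

For a pure delay whose output window is offset by exactly the delay, all of the pulse energy lands in the window, which gives a quick sanity check of the construction.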
Sequence Optimized Real-Time RT-PCR Assay for Detection of Crimean-Congo Hemorrhagic Fever Virus
2017-03-21
[19-23]. Real-time reverse-transcription PCR remains the gold standard for quantitative, sensitive, and specific detection of CCHFV; however…five-fold in two different series, and samples were run by real-time RT-PCR in triplicate. The preliminary LOD was the lowest RNA dilution where…JW Koehler, KL Delp, AT Hall, SP…
2012-07-01
detection-only condition followed either face detection only or dual task, thus ensuring that participants were practiced in face detection before…
2006-06-01
Machine Guidance Using LocataNet. In this pilot study [3], conducted at the BlueScope Steel warehouse in Port Kembla, Australia, the LocataNet system…[3] "…Study at BlueScope Steel." Proceedings of the 2004 Annual Meeting of the Institute of Navigation, Dayton, OH, June 2004. [4] Barnes, Joel, Chris…
Optimizing the Uptake of Health Checks for People with Intellectual Disabilities
ERIC Educational Resources Information Center
McConkey, Roy; Taggart, Laurence; Kane, Molly
2015-01-01
The provision of an annual health check for adult persons with an intellectual disability is intended to counter the health inequalities experienced by this population. This study documents the uptake of checks across general practitioner (GP) practices in Northern Ireland over a 3-year period. In all, 84% of GP practices provided health checks…
Assessing the impact of heart failure specialist services on patient populations.
Lyratzopoulos, Georgios; Cook, Gary A; McElduff, Patrick; Havely, Daniel; Edwards, Richard; Heller, Richard F
2004-05-24
The assessment of the impact of healthcare interventions may help commissioners of healthcare services to make optimal decisions. This can be particularly the case if the impact assessment relates to specific patient populations and uses timely local data. We examined the potential impact on readmissions and mortality of specialist heart failure services capable of delivering treatments such as β-blockers and a Nurse-Led Educational Intervention (N-LEI). We used statistical modelling of prevented or postponed events among previously hospitalised patients, using estimates of: treatment uptake and contraindications (based on local audit data); treatment effectiveness and intolerance (based on the literature); and the annual number of hospitalisations per patient and annual risk of death (based on routine data). Optimal treatment uptake among eligible but untreated patients would over one year prevent or postpone 11% of all expected readmissions and 18% of all expected deaths for spironolactone, 13% of all expected readmissions and 22% of all expected deaths for β-blockers (carvedilol), and 20% of all expected readmissions and an uncertain number of deaths for N-LEI. Optimal combined treatment uptake for all three interventions during one year among all eligible but untreated patients would prevent or postpone 37% of all expected readmissions and a minimum of 36% of all expected deaths. In a population of previously hospitalised patients with low previous uptake of β-blockers and no uptake of N-LEI, optimal combined uptake of interventions through specialist heart failure services can potentially help prevent or postpone approximately four times as many readmissions and a minimum of twice as many deaths compared with simply optimising uptake of spironolactone (not necessarily requiring specialist services). Examination of the impact of different heart failure interventions can inform rational planning of relevant healthcare services.
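The structure of this kind of event modelling can be shown with a toy calculation (all numbers invented for illustration, not taken from the study): expected events prevented equal the number of patients actually treated times their baseline event rate times the treatment's relative risk reduction.

```python
def events_prevented(n_eligible_untreated, uptake, intolerance,
                     relative_risk_reduction, annual_events_per_patient):
    """Expected events prevented or postponed over one year:
    patients actually treated x baseline event rate x relative risk reduction."""
    treated = n_eligible_untreated * uptake * (1.0 - intolerance)  # intolerant patients drop out
    return treated * annual_events_per_patient * relative_risk_reduction

# Hypothetical numbers (NOT from the paper): 200 eligible untreated patients,
# 80% uptake, 10% intolerance, 35% RRR on readmission, 0.5 readmissions/patient-year
prevented = events_prevented(200, 0.80, 0.10, 0.35, 0.5)
```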
NASA Astrophysics Data System (ADS)
Haack, Lukas; Peniche, Ricardo; Sommer, Lutz; Kather, Alfons
2017-06-01
At early project stages, the main CSP plant design parameters such as turbine capacity, solar field size, and thermal storage capacity are varied during the techno-economic optimization to determine the most suitable plant configurations. In general, a typical meteorological year with at least hourly time resolution is used to analyze each plant configuration. Different software tools are available to simulate the annual energy yield. Software tools offering a thermodynamic modeling approach of the power block and the CSP thermal cycle, such as EBSILONProfessional®, allow a flexible definition of plant topologies. In EBSILON, the thermodynamic equilibrium for each time step is calculated iteratively (quasi steady state), which requires approximately 45 minutes to process one year at hourly time resolution. For better representation of gradients, 10-min time resolution is recommended, which increases processing time by a factor of 5. Therefore, when analyzing a large number of plant sensitivities, as required during the techno-economic optimization procedure, the detailed thermodynamic simulation approach becomes impracticable. Suntrace has developed an in-house CSP simulation tool (CSPsim), based on EBSILON and applying predictive models, to approximate CSP plant performance for central receiver and parabolic trough technology. CSPsim increases the speed of energy yield calculations by a factor of 35 or more and has automated the simulation runs of all predefined design configurations in sequential order during the optimization procedure. To develop the predictive models, multiple linear regression techniques and Design of Experiments methods are applied. The annual energy yield and derived LCOE calculated by the predictive model deviate by less than ±1.5% from the thermodynamic simulation in EBSILON and effectively identify the optimal range of the main design parameters for further, more specific analysis.
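A regression surrogate of this kind can be sketched on synthetic data (the actual CSPsim regressors are not public; the design variables, ranges, and "true" simulator below are invented): fit a multiple linear regression to a small design-of-experiments sample of expensive simulations, then use it for fast predictions during optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design of experiments: turbine capacity [MW], solar multiple, storage [h]
X = rng.uniform([50.0, 1.0, 2.0], [150.0, 3.0, 14.0], size=(40, 3))
# Stand-in for the detailed thermodynamic simulation (unknown to the surrogate)
y = 2.0 * X[:, 0] + 30.0 * X[:, 1] + 5.0 * X[:, 2] + rng.normal(0.0, 1.0, 40)

# Fit the multiple linear regression surrogate (with intercept)
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x):
    """Fast surrogate for the annual-yield simulation at design point x."""
    return coef[0] + coef[1:] @ np.asarray(x, dtype=float)
```

Each surrogate evaluation is microseconds versus tens of minutes for the full quasi-steady-state simulation, which is the speedup the abstract exploits.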
On the pilot's behavior of detecting a system parameter change
NASA Technical Reports Server (NTRS)
Morizumi, N.; Kimura, H.
1986-01-01
The reaction of a human pilot, engaged in compensatory control, to a sudden change in the controlled element's characteristics is described. Taking the case where the change manifests itself as a variance change of the monitored signal, it is shown that the detection time, defined to be the time elapsed until the pilot detects the change, is related to the monitored signal and its derivative. Then, the detection behavior is modeled by an optimal controller, an optimal estimator, and a variance-ratio test mechanism that is performed for the monitored signal and its derivative. Results of a digital simulation show that the pilot's detection behavior can be well represented by the model proposed here.
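The variance-ratio test at the core of the detection model can be sketched generically (an illustrative stand-in with hypothetical window size and threshold, not the authors' optimal-controller/estimator model): compare the variance of the monitored signal in a trailing window against its reference value and declare a change when the ratio exceeds a threshold.

```python
import numpy as np

def detect_variance_change(signal, ref_var, win=50, ratio_thresh=3.0):
    """Return the first index at which the variance in a trailing window
    exceeds `ratio_thresh` times the reference variance, else -1."""
    for t in range(win, len(signal) + 1):
        if np.var(signal[t - win:t]) / ref_var > ratio_thresh:
            return t
    return -1
```

The detection time of the abstract corresponds to the lag between the true change point and the returned index.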
NASA Astrophysics Data System (ADS)
Cohen, J. S.; McGarity, A. E.
2017-12-01
The ability for mass deployment of green stormwater infrastructure (GSI) to intercept significant amounts of urban runoff has the potential to reduce the frequency of a city's combined sewer overflows (CSOs). This study was performed to aid in the Overbrook Environmental Education Center's vision of applying this concept to create a Green Commercial Corridor in Philadelphia's Overbrook Neighborhood, which lies in the Mill Creek Sewershed. In an attempt to further implement physical and social reality into previous work using simulation-optimization techniques to produce GSI deployment strategies (McGarity, et al., 2016), this study's models incorporated land use types and a specific neighborhood in the sewershed. The low impact development (LID) feature in EPA's Storm Water Management Model (SWMM) was used to simulate various geographic configurations of GSI in Overbrook. The results from these simulations were used to obtain formulas describing the annual CSO reduction in the sewershed based on the deployed GSI practices. These non-linear hydrologic response formulas were then implemented into the Storm Water Investment Strategy Evaluation (StormWISE) model (McGarity, 2012), a constrained optimization model used to develop optimal stormwater management practices on the watershed scale. By saturating the avenue with GSI, not only will CSOs from the sewershed into the Schuylkill River be reduced, but ancillary social and economic benefits of GSI will also be achieved. The effectiveness of these ancillary benefits changes based on the type of GSI practice and the type of land use in which the GSI is implemented. Thus, the simulation and optimization processes were repeated while delimiting GSI deployment by land use (residential, commercial, industrial, and transportation). 
The results give a GSI deployment strategy that achieves desired annual CSO reductions at a minimum cost based on the locations of tree trenches, rain gardens, and rain barrels in specified land use types.
Jiang, Songhui; Templeton, Michael R.; He, Gengsheng; Qu, Weidong
2013-01-01
An optimized method is presented using liquid-liquid extraction and derivatization for the extraction of iodoacetic acid (IAA) and other haloacetic acids (HAA9), and direct extraction of iodoform (IF) and other trihalomethanes (THM4), from drinking water, followed by detection by gas chromatography with electron capture detection (GC-ECD). A Doehlert experimental design was performed to determine the optimum conditions for the five most significant factors in the derivatization step: namely, the volume and concentration of acidic methanol (optimized values = 15%, 1 mL), the volume and concentration of Na2SO4 solution (129 g/L, 8.5 mL), and the volume of saturated NaHCO3 solution (1 mL). Extraction time and mass of anhydrous Na2SO4, as well as derivatization time and temperature, were also optimized by two-variable Doehlert designs, resulting in the following parameters: an extraction time of 11 minutes for IF and THM4 and 14 minutes for IAA and HAA9; a mass of anhydrous Na2SO4 of 4 g for IF and THM4 and 16 g for IAA and HAA9; a derivatization time of 160 min; and a temperature of 40°C. Under these optimal conditions, the procedure achieves excellent linearity (R² range 0.9990–0.9998), low detection limits (0.0008–0.2 µg/L), low quantification limits (0.008–0.4 µg/L), and good recovery (86.6%–106.3%). Intra- and inter-day precision were less than 8.9% and 8.8%, respectively. The method was validated by applying it to the analysis of raw, flocculated, settled, and finished waters collected from a water treatment plant in China. PMID:23613747
Castle, John; Garrett-Engele, Phil; Armour, Christopher D; Duenwald, Sven J; Loerch, Patrick M; Meyer, Michael R; Schadt, Eric E; Stoughton, Roland; Parrish, Mark L; Shoemaker, Daniel D; Johnson, Jason M
2003-01-01
Microarrays offer a high-resolution means for monitoring pre-mRNA splicing on a genomic scale. We have developed a novel, unbiased amplification protocol that permits labeling of entire transcripts. Also, hybridization conditions, probe characteristics, and analysis algorithms were optimized for detection of exons, exon-intron edges, and exon junctions. These optimized protocols can be used to detect small variations and isoform mixtures, map the tissue specificity of known human alternative isoforms, and provide a robust, scalable platform for high-throughput discovery of alternative splicing.
Gang, G J; Siewerdsen, J H; Stayman, J W
2017-02-11
This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce the dimensionality of the optimization, and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially varying regularization strength (β): the former via an exhaustive search through discrete values and the latter using an alternating optimization in which β was exhaustively optimized locally and interpolated to form a spatially varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
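The maxi-min idea can be sketched with a toy model (the detectability and attenuation formulas below are invented, and a crude random search stands in for CMA-ES): parameterize the per-view fluence with Gaussian basis coefficients, and maximize the minimum detectability over sample locations under a fixed total-fluence budget.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_basis = 60, 6
centers = np.linspace(0, n_views - 1, n_basis)
# Gaussian basis functions parameterizing the fluence pattern over views
basis = np.exp(-0.5 * ((np.arange(n_views)[:, None] - centers) / 6.0) ** 2)
atten = 1.0 + 0.8 * np.sin(np.linspace(0, 2 * np.pi, n_views)) ** 2  # toy per-view attenuation

def maximin_d(coeffs):
    """Toy maxi-min objective: minimum detectability over sample locations,
    with detectability rising with fluence and falling with attenuation."""
    fluence = basis @ np.abs(coeffs)
    fluence *= n_views / fluence.sum()       # enforce a fixed total-fluence budget
    return float(np.min(np.sqrt(fluence / atten)))

best = np.ones(n_basis)                      # start from uniform coefficients
best_val = maximin_d(best)
for _ in range(2000):                        # crude random search (CMA-ES stand-in)
    c = rng.uniform(0.1, 1.0, n_basis)
    v = maximin_d(c)
    if v > best_val:
        best, best_val = c, v
```

The basis parameterization keeps the search in 6 dimensions instead of 60, which is the same dimensionality-reduction motivation given in the abstract.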
2005-04-01
AD Award Number: DAMD17-03-1-0222. TITLE: Detection of Serum Lysophosphatidic Acids Using Affinity Binding and Surface Enhanced Laser Desorption…Annual (1 Apr 04 - 31 Mar 05)…of multiple forms of lysophosphatidic acid (LPA). LPA increases proliferation, prevents apoptosis and anoikis, increases invasiveness, decreases…
Design and Deployment of a Pediatric Cardiac Arrest Surveillance System
Newton, Heather Marie; McNamara, Leann; Engorn, Branden Michael; Jones, Kareen; Bernier, Meghan; Dodge, Pamela; Salamone, Cheryl; Bhalala, Utpal; Jeffers, Justin M.; Engineer, Lilly; Diener-West, Marie; Hunt, Elizabeth Anne
2018-01-01
Objective: We aimed to increase detection of pediatric cardiopulmonary resuscitation (CPR) events and collection of physiologic and performance data for use in quality improvement (QI) efforts. Materials and Methods: We developed a workflow-driven surveillance system that leveraged organizational information technology systems to trigger CPR detection and analysis processes. We characterized detection by notification source, type, location, and year, and compared it to previous methods of detection. Results: From 1/1/2013 through 12/31/2015, there were 2,986 unique notifications associated with 2,145 events, 317 requiring CPR. PICU and PEDS-ED accounted for 65% of CPR events, whereas floor care areas were responsible for only 3% of events. 100% of PEDS-OR and >70% of PICU CPR events would not have been included in QI efforts. Performance data from both defibrillator and bedside monitor increased annually (2013: 1%; 2014: 18%; 2015: 27%). Discussion: After deployment of this system, detection increased ∼9-fold and performance data collection increased annually. Had the system not been deployed, 100% of PEDS-OR and 50-70% of PICU, NICU, and PEDS-ED events would have been missed. Conclusion: By leveraging hospital information technology and medical device data, identification of pediatric cardiac arrest with an associated increase in the proportion of objective performance data captured is possible. PMID:29854451
Design and Deployment of a Pediatric Cardiac Arrest Surveillance System.
Duval-Arnould, Jordan Michel; Newton, Heather Marie; McNamara, Leann; Engorn, Branden Michael; Jones, Kareen; Bernier, Meghan; Dodge, Pamela; Salamone, Cheryl; Bhalala, Utpal; Jeffers, Justin M; Engineer, Lilly; Diener-West, Marie; Hunt, Elizabeth Anne
2018-01-01
We aimed to increase detection of pediatric cardiopulmonary resuscitation (CPR) events and collection of physiologic and performance data for use in quality improvement (QI) efforts. We developed a workflow-driven surveillance system that leveraged organizational information technology systems to trigger CPR detection and analysis processes. We characterized detection by notification source, type, location, and year, and compared it to previous methods of detection. From 1/1/2013 through 12/31/2015, there were 2,986 unique notifications associated with 2,145 events, 317 requiring CPR. PICU and PEDS-ED accounted for 65% of CPR events, whereas floor care areas were responsible for only 3% of events. 100% of PEDS-OR and >70% of PICU CPR events would not have been included in QI efforts. Performance data from both defibrillator and bedside monitor increased annually (2013: 1%; 2014: 18%; 2015: 27%). After deployment of this system, detection increased ∼9-fold and performance data collection increased annually. Had the system not been deployed, 100% of PEDS-OR and 50-70% of PICU, NICU, and PEDS-ED events would have been missed. By leveraging hospital information technology and medical device data, identification of pediatric cardiac arrest with an associated increase in the proportion of objective performance data captured is possible.
A New Cloud and Aerosol Layer Detection Method Based on Micropulse Lidar Measurements
NASA Astrophysics Data System (ADS)
Wang, Q.; Zhao, C.; Wang, Y.; Li, Z.; Wang, Z.; Liu, D.
2014-12-01
A new algorithm is developed to detect aerosols and clouds based on micropulse lidar (MPL) measurements. In this method, a semi-discretization processing (SDP) technique is first used to inhibit the impact of increasing noise with distance, then a value distribution equalization (VDE) method is introduced to reduce the magnitude of signal variations with distance. Combined with empirical threshold values, clouds and aerosols are detected and separated. This method can detect clouds and aerosols with high accuracy, although classification of aerosols and clouds is sensitive to the thresholds selected. Compared with the existing Atmospheric Radiation Measurement (ARM) program lidar-based cloud product, the new method detects more high clouds. The algorithm was applied to a year of observations at both the U.S. Southern Great Plains (SGP) and China Taihu site. At SGP, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and shows bi-modal vertical distributions with maximum frequency at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. By contrast, the cloud frequency at Taihu shows no clear seasonal variation and the maximum frequency is at around 1 km. The annual averaged cloud frequency is about 15% higher than that at SGP.
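A generic threshold-based layer detector can be sketched as follows (an illustrative simplification, not the SDP/VDE algorithm described above): range-correct the signal, estimate the noise floor from the farthest bins, and group contiguous above-threshold bins into layers.

```python
import numpy as np

def detect_layers(signal, r, snr_thresh=3.0, noise_bins=200):
    """Flag contiguous ranges where the range-corrected lidar signal exceeds
    the far-range noise floor; returns (start, end) index pairs per layer."""
    rc = signal * r ** 2                     # range correction
    noise = rc[-noise_bins:]                 # assume the farthest bins are signal-free
    z = (rc - noise.mean()) / noise.std()
    mask = z > snr_thresh
    layers, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            layers.append((start, i - 1)); start = None
    if start is not None:
        layers.append((start, len(mask) - 1))
    return layers
```

The r² range correction is exactly why noise grows with distance in MPL profiles, which is the problem the paper's semi-discretization step addresses.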
Detecting regional patterns of changing CO2 flux in Alaska
Parazoo, Nicholas C.; Wofsy, Steven C.; Koven, Charles D.; Sweeney, Colm; Lawrence, David M.; Lindaas, Jakob; Chang, Rachel Y.-W.; Miller, Charles E.
2016-01-01
With rapid changes in climate and the seasonal amplitude of carbon dioxide (CO2) in the Arctic, it is critical that we detect and quantify the underlying processes controlling the changing amplitude of CO2 to better predict carbon cycle feedbacks in the Arctic climate system. We use satellite and airborne observations of atmospheric CO2 with climatically forced CO2 flux simulations to assess the detectability of Alaskan carbon cycle signals as future warming evolves. We find that current satellite remote sensing technologies can detect changing uptake accurately during the growing season but lack sufficient cold season coverage and near-surface sensitivity to constrain annual carbon balance changes at regional scale. Airborne strategies that target regular vertical profile measurements within continental interiors are more sensitive to regional flux deeper into the cold season but currently lack sufficient spatial coverage throughout the entire cold season. Thus, the current CO2 observing network is unlikely to detect potentially large CO2 sources associated with deep permafrost thaw and cold season respiration expected over the next 50 y. Although continuity of current observations is vital, strategies and technologies focused on cold season measurements (active remote sensing, aircraft, and tall towers) and systematic sampling of vertical profiles across continental interiors over the full annual cycle are required to detect the onset of carbon release from thawing permafrost. PMID:27354511
Detecting regional patterns of changing CO2 flux in Alaska
Parazoo, Nicholas C.; Commane, Roisin; Wofsy, Steven C.; ...
2016-06-27
With rapid changes in climate and the seasonal amplitude of carbon dioxide (CO2) in the Arctic, it is critical that we detect and quantify the underlying processes controlling the changing amplitude of CO2 to better predict carbon cycle feedbacks in the Arctic climate system. We use satellite and airborne observations of atmospheric CO2 with climatically forced CO2 flux simulations to assess the detectability of Alaskan carbon cycle signals as future warming evolves. We find that current satellite remote sensing technologies can detect changing uptake accurately during the growing season but lack sufficient cold season coverage and near-surface sensitivity to constrain annual carbon balance changes at regional scale. Airborne strategies that target regular vertical profile measurements within continental interiors are more sensitive to regional flux deeper into the cold season but currently lack sufficient spatial coverage throughout the entire cold season. Thus, the current CO2 observing network is unlikely to detect potentially large CO2 sources associated with deep permafrost thaw and cold season respiration expected over the next 50 y. In conclusion, although continuity of current observations is vital, strategies and technologies focused on cold season measurements (active remote sensing, aircraft, and tall towers) and systematic sampling of vertical profiles across continental interiors over the full annual cycle are required to detect the onset of carbon release from thawing permafrost.
Wang, Dongxia; Krilich, Joan; Baudys, Jakub; Barr, John R.; Kalb, Suzanne R.
2015-01-01
It is essential to have a simple, quick, and sensitive method for the detection and quantification of botulinum neurotoxins, the most toxic substances known and the causative agents of botulism. Type C botulinum neurotoxin (BoNT/C) represents one of the seven distinctive BoNT serotypes (A to G) that cause botulism in animals and birds. Here we report the development of optimized peptide substrates for improving the detection of BoNT/C and /CD mosaic toxins using the Endopep-MS assay, a mass spectrometry-based method that is able to rapidly and sensitively detect and differentiate all types of BoNTs by extracting the toxin with specific antibodies and detecting the unique cleavage products of peptide substrates. Based on the sequence of a short SNAP-25 peptide, we conducted optimization through a comprehensive process including length determination, terminal modification, single and multiple amino acid residue substitution, and incorporation of unnatural amino acid residues. Our data demonstrate that an optimal peptide provides a more than 200-fold improvement over the substrate currently used in the Endopep-MS assay for the detection of BoNT/C1 and /CD mosaic. Using the new substrate in a four-hour cleavage reaction, the limit of detection for the BoNT/C1 complex spiked in buffer, serum, and milk samples was determined to be 0.5, 0.5, and 1 mouse LD50/mL, respectively, representing a similar or higher sensitivity than that obtained by the traditional mouse bioassay. PMID:25913863
Biomass burning contributions to urban aerosols in a coastal Mediterranean city.
Reche, C; Viana, M; Amato, F; Alastuey, A; Moreno, T; Hillamo, R; Teinilä, K; Saarnio, K; Seco, R; Peñuelas, J; Mohr, C; Prévôt, A S H; Querol, X
2012-06-15
Mean annual biomass burning contributions to the bulk particulate matter (PM(X)) load were quantified in a southern-European urban environment (Barcelona, Spain) with special attention to typical Mediterranean winter and summer conditions. In spite of the complexity of the local air pollution cocktail and the expected low contribution of biomass burning emissions to PM levels in Southern Europe, the impact of these emissions was detected at an urban background site by means of tracers such as levoglucosan, K(+) and organic carbon (OC). The significant correlation between levoglucosan and OC (r(2)=0.77) and K(+) (r(2)=0.65), as well as a marked day/night variability of the levoglucosan levels and levoglucosan/OC ratios was indicative of the contribution from regional scale biomass burning emissions during night-time transported by land breezes. In addition, on specific days (21-22 March), the contribution from long-range transported biomass burning aerosols was detected. Quantification of the contribution of biomass burning aerosols to PM levels on an annual basis was possible by means of the Multilinear Engine (ME). Biomass burning emissions accounted for 3% of PM(10) and PM(2.5) (annual mean), while this percentage increased up to 5% of PM(1). During the winter period, regional-scale biomass burning emissions (agricultural waste burning) were estimated to contribute with 7±4% of PM(2.5) aerosols during night-time (period when emissions were clearly detected). Long-range transported biomass burning aerosols (possibly from forest fires and/or agricultural waste burning) accounted for 5±2% of PM(2.5) during specific episodes. Annually, biomass burning emissions accounted for 19%-21% of OC levels in PM(10), PM(2.5) and PM(1). The contribution of this source to K(+) ranged between 48% for PM(10) and 97% for PM(1) (annual mean). 
Results for K(+) from biomass burning evidenced that this tracer is mostly emitted in the fine fraction, and thus coarse K(+) could not be taken as an appropriate tracer of biomass burning.
[Early detection on the onset of scarlet fever epidemics in Beijing, using the Cumulative Sum].
Li, Jing; Yang, Peng; Wu, Shuang-sheng; Wang, Xiao-li; Liu, Shuang; Wang, Quan-yi
2013-05-01
Based on scarlet fever data collected from the Disease Surveillance Information Reporting System in Beijing from 2005 to 2011, this study explored the efficiency of the Cumulative Sum (CUSUM) method in detecting the onset of scarlet fever epidemics. The models C1-MILD (C1), C2-MEDIUM (C2), and C3-ULTRA (C3) were used. Youden's index and detection time were calculated as evaluation metrics to optimize the parameters and select the optimal model. Scarlet fever surveillance data from 2011 were used to verify the efficacy of these models. C1 (k = 0.5, H = 2σ), C2 (k = 0.7, H = 2σ), and C3 (k = 1.1, H = 2σ) appeared to be the optimal parameters among these models. Youden's index of C1 was 83.0% with a detection time of 0.64 weeks; Youden's index of C2 was 85.4% with a detection time of 1.27 weeks; Youden's index of C3 was 85.1% with a detection time of 1.36 weeks. Among the three early warning detection models, C1 had the highest efficacy. All three models triggered signals within 4 weeks after the onset of scarlet fever epidemics. The CUSUM early warning detection model could be used to detect the onset of scarlet fever epidemics with good efficacy.
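A generic one-sided standardized CUSUM of the kind underlying the C1-C3 models can be sketched as follows (parameters k and H are in standard-deviation units, matching the abstract's H = 2σ convention, but this is not the exact C1/C2/C3 implementation): accumulate exceedances of the baseline mean beyond k standard deviations, and alarm when the running sum crosses H.

```python
import numpy as np

def cusum_alarm(series, baseline, k=0.5, h=2.0):
    """One-sided standardized CUSUM: accumulate standardized exceedances of the
    baseline mean beyond k; return the first index where the sum crosses h, else -1."""
    mu, sigma = np.mean(baseline), np.std(baseline)
    s = 0.0
    for t, x in enumerate(series):
        s = max(0.0, s + (x - mu) / sigma - k)
        if s > h:
            return t
    return -1
```

Smaller k makes the detector more sensitive to mild shifts (the "MILD" model), at the cost of more frequent false signals.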
NASA Astrophysics Data System (ADS)
Shen, S. S.
2015-12-01
This presentation describes the detection of interdecadal climate signals in newly reconstructed precipitation data from 1850 to the present. Examples include precipitation signatures of the East Asian Monsoon (EAM), Pacific Decadal Oscillation (PDO), and Atlantic Multidecadal Oscillation (AMO). The new reconstruction dataset is an enhanced edition of a suite of global precipitation products reconstructed by Spectral Optimal Gridding of Precipitation Version 1.0 (SOGP 1.0). The maximum temporal coverage is 1850-present and the spatial coverage is quasi-global (75°S-75°N). This enhanced version has three different temporal resolutions (5-day, monthly, and annual) and two different spatial resolutions (2.5° and 5.0°). It also has a friendly graphical user interface (GUI). SOGP uses a multivariate regression method based on an empirical orthogonal function (EOF) expansion. The Global Precipitation Climatology Project (GPCP) precipitation data from 1981-2010 are used to calculate the EOFs. The Global Historical Climatology Network (GHCN) gridded data are used to calculate the regression coefficients for the reconstructions. The sampling errors of the reconstruction are analyzed according to the number of EOF modes used in the reconstruction. Our reconstructed 1900-2011 time series of the global average annual precipitation shows a 0.024 (mm/day)/100a trend, which is very close to the trend derived from the mean of 25 models of CMIP5 (Coupled Model Intercomparison Project Phase 5). Our reconstruction has been validated against GPCP data after 1979. Our reconstruction successfully displays the 1877 El Niño (see the attached figure), which is considered a validation before 1900. Our precipitation products are publicly available online, including digital data, precipitation animations, computer codes, readme files, and the user manual.
This work is a joint effort of San Diego State University (Sam Shen, Gregori Clarke, Christian Junjinger, Nancy Tafolla, Barbara Sperberg, and Melanie Thorn), UCLA (Yongkang Xue), and University of Maryland (Tom Smith and Phil Arkin) and supported in part by the U.S. National Science Foundation (Awards No. AGS-1419256 and AGS-1015957).
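The EOF-regression reconstruction described above can be sketched on synthetic data (a schematic illustration of the approach, not the SOGP code; the field sizes and noise level are invented): EOFs are computed from a dense calibration period, and sparse station anomalies are regressed onto the leading modes to fill the full grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense calibration fields (e.g., the satellite era), shape: time x space
n_t, n_s = 120, 80
modes = rng.normal(size=(3, n_s))
amps = rng.normal(size=(n_t, 3))
calib = amps @ modes + 0.1 * rng.normal(size=(n_t, n_s))

# EOFs = right singular vectors of the anomaly matrix
anom = calib - calib.mean(axis=0)
_, _, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt[:3]                                  # leading spatial modes

def reconstruct(obs_idx, obs_vals):
    """Least-squares fit of sparse station anomalies onto the leading EOFs,
    then expansion back to the full grid."""
    coef, *_ = np.linalg.lstsq(eofs[:, obs_idx].T, obs_vals, rcond=None)
    return coef @ eofs
```

Truncating to a few leading modes is what controls the sampling error the abstract analyzes: more modes fit the observations more closely but amplify noise at unobserved grid points.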
Inter-annual variability and trend detection of urban CO2, CH4 and CO emissions
NASA Astrophysics Data System (ADS)
Lauvaux, T.; Deng, A.; Gurney, K. R.; Nathan, B.; Ye, X.; Oda, T.; Karion, A.; Hardesty, M.; Harvey, R. M.; Richardson, S.; Whetstone, J. R.; Hutyra, L.; Davis, K. J.; Brewer, A.; Gaudet, B. J.; Turnbull, J. C.; Sweeney, C.; Shepson, P. B.; Miles, N.; Bonin, T.; Wu, K.; Balashov, N. V.
2017-12-01
The Indianapolis Flux (INFLUX) Experiment has conducted an unprecedented volume of atmospheric greenhouse gas measurements across the Indianapolis metropolitan area from aircraft, remote-sensing, and tower-based observational platforms. Assimilated in a high-resolution urban inversion system, atmospheric data provide an independent constraint to existing emission products, directly supporting the integration of economic data into urban emission systems. We present here the first multi-year assessment of carbon dioxide (CO2), methane (CH4), and carbon monoxide (CO) emissions from anthropogenic activities in comparison to multiple bottom-up emission products. Biogenic CO2 fluxes are quantified using an optimized biogeochemical model at high resolution, further refined within the atmospheric inversion system. We also present the first sector-based inversion by jointly assimilating CO2 and CO mixing ratios to quantify the dominant sectors of emissions over the entire period (2012-2015). The detected trend in CO2 emissions over 2012-2015 from both bottom-up emission products and tower-based inversions agree within a few percent, with a decline in city emissions over the 3-year time period. Major changes occur at the primary power plant, suggesting a decrease in energy production within the city limits. The joint assimilation of CO2 and CO mixing ratios confirms the absence of trends in other sectors. However, top-down and bottom-up approaches tend to disagree annually, with a decline in urban emissions suggested by atmospheric data in 2014 that is several months earlier than is observed in the bottom-up products. Concerning CH4 emissions, the inversion shows a decrease since mid-2014 which may be due to lower landfill emissions or lower energy consumption (from coal and natural gas). 
This first demonstration of a high-accuracy long-term greenhouse gas measurement network merged with a high-resolution bottom-up information system highlights the potential for informing and supporting policy makers on the successful implementation of emission reduction targets. We show here how the combination of information sources supports the evaluation of mitigation policies and helps develop understanding of the mechanisms driving emission trends at the level of economic sectors.
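A tower-based inversion of this kind rests on the standard linear Bayesian update. The sketch below applies that update to a hypothetical two-cell flux grid with an invented footprint matrix; none of the numbers come from INFLUX:

```python
import numpy as np

# Toy setup: 2 surface flux cells, 3 tower observations.
# H maps fluxes to concentration enhancements (hypothetical footprints).
H = np.array([[1.0, 0.2],
              [0.5, 0.5],
              [0.1, 1.0]])
x_b = np.array([10.0, 5.0])    # prior (bottom-up) fluxes
B = np.diag([4.0, 4.0])        # prior error covariance
R = np.eye(3)                  # observation error covariance
x_true = np.array([8.0, 7.0])
y = H @ x_true                 # noise-free synthetic observations

# Posterior fluxes: x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
```

With informative observations the posterior is pulled from the prior toward the true fluxes; the same algebra scales to the full space-time flux grid of an urban inversion.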
Brader, Hilary Smolen; Ying, Gui-shuang; Martin, E. Revell; Maguire, Maureen G.
2013-01-01
Objective To characterize the size, location, conformation, and features of incident geographic atrophy (GA) as detected by annual stereoscopic color photographs and fluorescein angiograms (FAs). Design Retrospective cohort study within a larger clinical trial. Participants Patients with bilateral large drusen who developed GA during the course of the Complications of Age-related Macular Degeneration Prevention Trial (CAPT). Methods Annual stereoscopic color photographs and FAs were reviewed from 114 CAPT patients who developed GA in the untreated eye during 5-6 years of follow-up. Geographic atrophy was defined according to the Revised GA Criteria for identifying early GA. Color-optimized fundus photographs were viewed concurrently with the FAs during grading. Main Outcome Measures Size and distance from the fovea of individual GA lesions, number of areas of atrophy, and change in visual acuity (VA) when GA first developed in an eye. Results At presentation, the median total GA area was 0.26 mm² (0.1 disc area). GA presented as a single lesion in 89 (78%) of eyes. The median distance from the fovea was 395 μm. Twenty percent of incident GA lesions were subfoveal and an additional 18% were within 250 μm of the foveal center. Development of GA was associated with a mean decrease of 7 letters from the baseline visual acuity level, compared to 1 letter among matched early age-related macular degeneration (AMD) eyes without GA. GA that formed in areas previously occupied by drusenoid pigment epithelial detachments (DPED) was on average larger (0.53 vs. 0.20 mm²; p=0.0001), more central (50 vs. 500 μm from the center of the fovea; p<0.0001), and associated with significantly worse visual outcome (20/50 vs. 20/25; p=0.0003) than GA with other drusen types as precursors.
Conclusions Incident geographic atrophy most often appears on color fundus photographs and fluorescein angiograms as a small, singular, parafoveal lesion, though a large minority of lesions are subfoveal or multifocal at initial detection. The characteristics of incident geographic atrophy vary with precursor drusen types. These data can facilitate design of future clinical trials of therapies for GA. PMID:23622873
NASA Astrophysics Data System (ADS)
Benyon, Richard G.; Lane, Patrick N. J.; Jaskierniak, Dominik; Kuczera, George; Haydon, Shane R.
2015-07-01
Mean sapwood thickness, measured in fifteen 73-year-old Eucalyptus regnans and E. delegatensis stands, correlated strongly with forest overstorey stocking density (R² = 0.72). This curvilinear relationship was used with routine forest stocking density and basal area measurements to estimate sapwood area of the forest overstorey at various times in 15 research catchments in undisturbed and disturbed forests located in the Great Dividing Range, Victoria, Australia. Up to 45 years of annual precipitation and streamflow data available from the 15 catchments were used to examine relationships between mean annual loss (evapotranspiration, estimated as mean annual precipitation minus mean annual streamflow) and sapwood area. Catchment mean sapwood area correlated strongly (R² = 0.88) with catchment mean annual loss. Variation in sapwood area accounted for 68% more variation in mean annual streamflow than precipitation alone (R² = 0.90 compared with R² = 0.22). Changes in sapwood area accounted for 96% of the changes in mean annual loss observed after forest thinning or clear-cutting and regeneration. We conclude that forest inventory data can be used reliably to predict spatial and temporal variation in catchment annual losses and streamflow in response to natural and imposed disturbances in even-aged forests. Consequently, recent advances in mapping of sapwood area using airborne light detection and ranging will enable high-resolution spatial and temporal mapping of mean annual loss and mean annual streamflow over large areas of forested catchment. This will be particularly beneficial in management of water resources from forested catchments subject to disturbance but lacking reliable long-term (years to decades) streamflow records.
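The water-balance and regression steps described above can be sketched directly: mean annual loss is precipitation minus streamflow, then loss is regressed on catchment mean sapwood area. The numbers below are illustrative, not the study's data:

```python
# Catchment water balance: mean annual loss (ET) = mean annual P - mean annual Q,
# then an ordinary least-squares fit of loss against mean sapwood area.
# All values are invented for illustration.
precip = [1800.0, 1650.0, 1900.0]     # mm/yr, three hypothetical catchments
streamflow = [700.0, 800.0, 600.0]    # mm/yr
sapwood = [9.0, 7.0, 11.0]            # m^2/ha, catchment mean sapwood area

loss = [p - q for p, q in zip(precip, streamflow)]

# OLS slope and intercept for loss vs sapwood area
n = len(sapwood)
mx = sum(sapwood) / n
my = sum(loss) / n
slope = sum((x - mx) * (y - my) for x, y in zip(sapwood, loss)) / \
        sum((x - mx) ** 2 for x in sapwood)
intercept = my - slope * mx
```

In this toy data a positive slope reproduces the paper's qualitative finding: catchments with more sapwood area transpire more and yield less streamflow.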
DQE and system optimization for indirect-detection flat-panel imagers in diagnostic radiology
NASA Astrophysics Data System (ADS)
Siewerdsen, Jeffrey H.; Antonuk, Larry E.
1998-07-01
The performance of indirect-detection flat-panel imagers incorporating CsI:Tl x-ray converters is examined through calculation of the detective quantum efficiency (DQE) under conditions of chest radiography, fluoroscopy, and mammography. Calculations are based upon a cascaded systems model that has demonstrated excellent agreement with empirical signal, noise-power spectra, and DQE results. For each application, the DQE is calculated as a function of spatial frequency and CsI:Tl thickness. A preliminary investigation into the optimization of flat-panel imaging systems is described, wherein the x-ray converter thickness which provides optimal DQE for a given imaging task is estimated. For each application, a number of example tasks involving detection of an object of variable size and contrast against a noisy background are considered. The method described is fairly general and can be extended to account for a variety of imaging tasks. For the specific examples considered, the preliminary results estimate optimal CsI:Tl thicknesses of approximately 450 μm (approximately 200 mg/cm²), approximately 320 μm (approximately 140 mg/cm²), and approximately 200 μm (approximately 90 mg/cm²) for chest radiography, fluoroscopy, and mammography, respectively. These results are expected to depend upon the imaging task as well as upon the quality of available CsI:Tl, and future improvements in scintillator fabrication could result in increased optimal thickness and DQE.
Design optimization of Cassegrain telescope for remote explosive trace detection
NASA Astrophysics Data System (ADS)
Bhavsar, Kaushalkumar; Eseller, K. E.; Prabhu, Radhakrishna
2017-10-01
The past three years have seen a global increase in explosive-based terror attacks. The widespread use of improvised explosives and anti-personnel landmines has caused thousands of civilian casualties across the world. This global threat drives the need to improve the performance and capabilities of standoff explosive trace detection devices, so that a threat can be identified from a safe distance to prevent explosions and save human lives. In recent years, laser-induced breakdown spectroscopy (LIBS) has emerged as an approach for material and elemental investigation. All the principal elements on a surface are detectable in a single LIBS measurement, and hence a standoff LIBS-based method has been used to remotely detect explosive traces from several to tens of metres distance. The most important component of a LIBS-based standoff explosive trace detection system is the telescope, which enables remote identification of the chemical constituents of the explosives. However, a compact LIBS system in which a Cassegrain telescope serves both for laser beam delivery and for light collection requires optimization of the telescope design. This paper reports design optimization of a Cassegrain telescope to detect explosives remotely with a LIBS system. A design optimization of the Schmidt corrector plate was carried out for an Nd:YAG laser. The effect of different design parameters was investigated to eliminate spherical aberration in the system. The effect of different laser wavelengths on the Schmidt corrector design was also investigated for the standoff LIBS system.
Setting the magic angle for fast magic-angle spinning probes.
Penzel, Susanne; Smith, Albert A; Ernst, Matthias; Meier, Beat H
2018-06-15
Fast magic-angle spinning, coupled with ¹H detection, is a powerful method to improve spectral resolution and signal-to-noise in solid-state NMR spectra. Commercial probes now provide spinning frequencies in excess of 100 kHz. Then, one has sufficient resolution in the ¹H dimension to directly detect protons, which have a gyromagnetic ratio approximately four times larger than ¹³C spins. However, the gains in sensitivity can quickly be lost if the rotation angle is not set precisely. The most common method of magic-angle calibration is to optimize the number of rotary echoes, or sideband intensity, observed on a sample of KBr. However, this typically uses relatively low spinning frequencies, where the spinning of fast-MAS probes is often unstable, and detection on the ¹³C channel, for which fast-MAS probes are typically not optimized. Therefore, we compare the KBr-based optimization of the magic angle with two alternative approaches: optimization of the splitting observed on the carbonyl of ¹³C-labeled glycine ethyl ester due to the Cα-C′ J-coupling, or optimization of the H-N J-coupling spin echo in the protein sample itself. The latter method has the particular advantage that no separate sample is necessary for the magic-angle optimization. Copyright © 2018. Published by Elsevier Inc.
Throughput Optimization of Continuous Biopharmaceutical Manufacturing Facilities.
Garcia, Fernando A; Vandiver, Michael W
2017-01-01
In order to operate profitably under different product demand scenarios, biopharmaceutical companies must design their facilities with mass output flexibility in mind. Traditional biologics manufacturing technologies pose operational challenges in this regard due to their high costs and slow equipment turnaround times, restricting the types of products and mass quantities that can be processed. Modern plant design, however, has facilitated the development of lean and efficient bioprocessing facilities through footprint reduction and adoption of disposable and continuous manufacturing technologies. These development efforts have proven to be crucial in seeking to drastically reduce the high costs typically associated with the manufacturing of recombinant proteins. In this work, mathematical modeling is used to optimize annual production schedules for a single-product commercial facility operating with a continuous upstream and discrete batch downstream platform. Utilizing cell culture duration and volumetric productivity as process variables in the model, and annual plant throughput as the optimization objective, 3-D surface plots are created to understand the effect of process and facility design on expected mass output. The model shows that once a plant has been fully debottlenecked it is capable of processing well over a metric ton of product per year. Moreover, the analysis helped to uncover a major limiting constraint on plant performance, the stability of the neutralized viral inactivated pool, which may indicate that this should be a focus of attention during future process development efforts. LAY ABSTRACT: Biopharmaceutical process modeling can be used to design and optimize manufacturing facilities and help companies achieve a predetermined set of goals. One way to perform optimization is by making the most efficient use of process equipment in order to minimize the expenditure of capital, labor and plant resources. 
To that end, this paper introduces a novel mathematical algorithm used to determine the optimal equipment scheduling configuration that maximizes the mass output for a facility producing a single product. The paper also illustrates how different scheduling arrangements can have a profound impact on the availability of plant resources, and identifies limiting constraints on the plant design. In addition, simulation data are presented using visualization techniques that aid in the interpretation of the scientific concepts discussed. © PDA, Inc. 2017.
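The throughput optimization can be illustrated with a much-simplified model: treat annual mass output as a function of the two process variables named above (cell culture duration and volumetric productivity) and grid-search it. Every parameter below is hypothetical, not the facility in the paper:

```python
# Toy model of a continuous-upstream facility: annual output as a function
# of culture duration and volumetric productivity. All parameters invented.
bioreactor_volume_l = 500.0
turnaround_days = 7.0      # equipment turnaround between runs
operating_days = 330.0     # operating days per year
downstream_yield = 0.7     # overall downstream recovery

def annual_output_kg(culture_days, productivity_g_per_l_day):
    runs = operating_days // (culture_days + turnaround_days)
    per_run_g = bioreactor_volume_l * productivity_g_per_l_day * culture_days
    return runs * per_run_g * downstream_yield / 1000.0

# Coarse grid search over the two process variables (the "3-D surface")
best = max(
    ((d, p, annual_output_kg(d, p))
     for d in range(20, 61, 10)
     for p in (0.5, 1.0, 1.5, 2.0)),
    key=lambda t: t[2],
)
```

Even this toy surface shows the trade-off the paper exploits: longer cultures make more product per run but fit fewer runs per year, so the optimum sits at an intermediate duration.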
Hupp, Jerry W.; Hodges, John I.; Conant, Bruce P.; Meixell, Brandt W.; Groves, Debbie J.
2010-01-01
Management of Pacific Flyway Canada geese (Branta canadensis) requires information on winter distribution of different populations. Recoveries of tarsus bands from Vancouver Canada geese (B. canadensis fulva) marked in southeast Alaska, USA, ≥4 decades ago suggested that ≥83% of the population was non-migratory and that annual adult survival was high (Ŝ = 0.836). However, recovery distribution of tarsus bands was potentially biased due to geographic differences in harvest intensity in the Pacific Flyway. Also, winter distribution of Vancouver Canada geese could have shifted since the 1960s, as has occurred for some other populations of Canada geese. Because winter distribution and annual survival of this population had not recently been evaluated, we surgically implanted very high frequency radiotransmitters in 166 adult female Canada geese in southeast Alaska. We captured Vancouver Canada geese during molt at 2 sites where adults with goslings were present (breeding areas) and 2 sites where we observed nonbreeding birds only. During winter radiotracking flights in southeast Alaska, we detected 98% of 85 females marked at breeding areas and 83% of 70 females marked at nonbreeding sites, excluding 11 females that died prior to the onset of winter radiotracking. We detected no radiomarked females in coastal British Columbia, or western Washington and Oregon, USA. Most (70%) females moved ≤30 km between November and March. Our model-averaged estimate of annual survival (Ŝ = 0.844, SE = 0.050) was similar to the estimate of annual survival of geese marked from 1956 to 1960. Likely <2% of Vancouver Canada geese that nest in southeast Alaska migrate to winter areas in Oregon or Washington where they could intermix with Canada geese from other populations in the Pacific Flyway. 
Because annual survival of adult Vancouver Canada geese was high and showed evidence of long-term consistency, managers should examine how reproductive success and recruitment may affect the population.
NASA Astrophysics Data System (ADS)
Masoumi, S.; Safari, A.; Sharifi, M.; Sam Khaniani, A.
2011-12-01
In order to investigate regular variations of the ionosphere, the least-squares harmonic estimation is applied to the time series of ionospheric electron densities in the region of Iran derived from about five years of Global Positioning System Radio Occultation (GPS RO) observations by FORMOSAT-3/COSMIC satellites. Although the obtained results are slightly different from the expected ones due to the low horizontal resolution of RO measurements, high vertical resolution of the observations enables us to detect not only the Total Electron Content (TEC) variations, but also periodic patterns of electron densities in different altitudes of the ionosphere. Dominant diurnal and annual signals, together with their Fourier series decompositions, and also periods close to 27 days are obtained, which is consistent with the previous analyses on TEC. In the equatorial anomaly band, the annual component is weaker than its Fourier decomposition periods. In particular, the semiannual period dominates the annual component, which is probably due to the effect of geomagnetic field. By the investigation of the frequencies at different local times, the semiannual signal is more significant than the annual one in the daytime, while the annual frequency is dominant at night. By the detection of the phases of the components, it is revealed that the annual signal has its maximum in summer at high altitudes, and in winter at lower altitudes. This suggests the effect of neutral compositions in the lower atmosphere. Further, the semiannual component peaks around equinox during the day, while its maximum mostly occurs in solstice at night. Since RO measurements can be used to derive TEC along the signal path between a GPS satellite and a receiver, study on the potentiality of using these observations for the prediction of electron densities and its application to the ionospheric correction of the single frequency receivers is suggested.
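Least-squares harmonic estimation amounts to regressing the series onto cosine/sine pairs at candidate periods and reading amplitudes off the fitted coefficients. A minimal sketch on a synthetic series (not COSMIC data):

```python
import numpy as np

# Synthetic series with a diurnal cosine and an annual sine component.
t = np.arange(0.0, 730.0, 0.25)        # time in days, 6-hourly sampling
periods = [1.0, 365.25]                # candidate periods (days)
y = 3.0 * np.cos(2 * np.pi * t / 1.0) + 1.5 * np.sin(2 * np.pi * t / 365.25)

# Design matrix: one cos and one sin column per candidate period.
A = np.column_stack(
    [f(2 * np.pi * t / p) for p in periods for f in (np.cos, np.sin)]
)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Amplitude of each harmonic from its cos/sin coefficient pair.
amps = [np.hypot(coef[2 * i], coef[2 * i + 1]) for i in range(len(periods))]
```

In practice one tests many candidate periods and keeps those whose amplitudes are statistically significant, which is how the diurnal, annual, semiannual, and ~27-day signals above are separated.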
CCR Magazines | Center for Cancer Research
The Center for Cancer Research (CCR) has two magazines, MILESTONES and LANDMARKS, that highlight our annual advances and top contributions to the understanding, detection, treatment and prevention of cancer over the years.
ERIC Educational Resources Information Center
Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…
1991-01-24
Molecular Graphics, vol. 6, No. 4 (Dec. 1988), p. 223. Turk, Greg, "Interactive Collision Detection for Molecular Graphics," M.S. thesis , UNC-Chapel Hill...Problem," Master’s thesis , UNC Department of Computer Science Technical Report #TR87-013, May 1987. Pique, ME., "Technical Trends in Molecular Graphics...AD-A236 598 Seventeenth Annual Progress Report and 1992-97 Renewal Proposal Interactive Graphics for Molecular Studies TR91-020 January 24, 1991
2000-10-01
oral presentation at the 1998 AACR meeting and on the subsequent Cancer Research article, many investigators in the field have abandoned K19 as a...1999) "Comparison of Intradermal and Subcutaneous Injections in Lymphatic Mapping." Oral presentation at the 33rd Annual Meeting of the Association of...Lannin D, Tafra L. Comparison of Intradermal and Subcutaneous Injections in Lymphatic Mapping, Oral presentation at the 33rd Annual Meeting of the
1991-07-23
and H. Schroeder (eds.), Proceedings of the International Fechner Symposium. Amsterdam: North Holland. VanZandt, T. and Townsend, J.R. (Submitted...Smith, L. B. (Forthcoming). A connectionist model of the development of the notion of sameness. Thirteenth Annual Conference of the Cognitive Science...1987). A detection theory method for the analysis of visual and auditory displays. Proceedings of the 31st Annual Meeting of the Human Factors
Exploring the Thermal Limits of IR-Based Automatic Whale Detection
2014-09-30
spouts during the northward humpback whale migration, which occurs annually rather close to shore near North Stradbroke Island, Queensland, Australia...with concurrent visual observations. APPROACH By obtaining continuous IR video footage during two successive northward humpback whale ... Whale Detection (ETAW) Olaf Boebel P.O. Box 120161 27515 Bremerhaven GERMANY phone: +49 (471) 4831-1879 fax: +49 (471) 4831-1797 email
Aerial survey methodology for bison population estimation in Yellowstone National Park
Hess, Steven C.
2002-01-01
I developed aerial survey methods for statistically rigorous bison population estimation in Yellowstone National Park to support sound resource management decisions and to understand bison ecology. Survey protocols, data recording procedures, a geographic framework, and seasonal stratifications were based on field observations from February 1998-September 2000. The reliability of this framework and strata was tested with long-term data from 1970-1997. I simulated different sample survey designs and compared them to high-effort censuses of well-defined large areas to evaluate effort, precision, and bias. Sample survey designs require much effort and extensive information on the current spatial distribution of bison and therefore do not offer any substantial reduction in time and effort over censuses. I conducted concurrent ground surveys, or 'double sampling', to estimate detection probability during aerial surveys. Group size distribution and habitat strongly affected detection probability. In winter, 75% of the groups and 92% of individual bison were detected on average from aircraft, while in summer, 79% of groups and 97% of individual bison were detected. I also used photography to quantify the bias in counting large groups of bison and found that undercounting increased with group size and could reach 15%. I compared survey conditions between seasons and identified optimal time windows for conducting surveys in both winter and summer. These windows account for the habitats and total area that bison occupy, and for group size distribution. Bison became increasingly scattered over the Yellowstone region in smaller groups and increasingly occupied unfavorable habitats as winter progressed. Therefore, the best conditions for winter surveys occur early in the season (Dec-Jan). In summer, bison were most spatially aggregated and occurred in the largest groups by early August.
Low variability between surveys and high detection probability provide population estimates with an overall coefficient of variation of approximately 8% and have high power for detecting trends in population change. I demonstrated how population estimates from winter and summer can be integrated into a comprehensive monitoring program to estimate annual growth rates, overall winter mortality, and an index of calf production, requiring about 30 hours of flight per year.
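The double-sampling correction amounts to a ratio estimator of detection probability from matched aerial and ground counts, which then scales the raw aerial total upward. A minimal sketch with invented counts:

```python
# Double sampling: estimate aerial detection probability from plots where
# ground surveys give the "true" count, then correct a raw aerial total.
# All counts below are illustrative, not the Yellowstone data.
aerial_counts = [46, 30, 88]   # bison seen from aircraft on matched plots
ground_counts = [50, 33, 92]   # bison known present from ground surveys

# Ratio estimator of detection probability
p_hat = sum(aerial_counts) / sum(ground_counts)

def corrected_estimate(raw_aerial_total, p):
    """Scale a raw aerial count up by the estimated detection probability."""
    return raw_aerial_total / p

estimate = corrected_estimate(3000, p_hat)
```

In a full analysis p_hat would be modeled as a function of group size and habitat, since both strongly affected detection in the study above.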
Outcome Assessments and Cost Avoidance of an Oral Chemotherapy Management Clinic.
Wong, Siu-Fun; Bounthavong, Mark; Nguyen, Cham P; Chen, Timothy
2016-03-01
Increasing use of oral chemotherapy drugs increases the challenges for drug and patient management. An oral chemotherapy management clinic was developed to provide patients with oral chemotherapy management, concurrent medication (CM) education, and symptom management services. Because published data are scarce, this evaluation aimed to measure the need for and the effectiveness of this practice model. This is a case series report of all patients referred to the oral chemotherapy management clinic. Data collected included patient demographics, depression scores, CMs, and types of intervention, including detection and management outcomes collected at baseline, 3-day, 7-day, and 3-month follow-ups. The persistence rate was monitored. Secondary analysis assessed potential cost avoidance. A total of 86 evaluated patients (32 men and 54 women, mean age of 63.4 years) did not show a high risk for medication nonadherence. The 3 most common cancer diagnoses were rectal, pancreatic, and breast, with capecitabine most prescribed. Patients had an average of 13.7 CMs. A total of 125 interventions (detection and management of adverse drug events, compliance issues, drug interactions, medication errors, and symptom management) occurred in 201 visits, with more than 75% of interventions occurring within the first 14 days. A persistence rate of 78% was observed among 41 evaluable patients. The total estimated annual cost avoidance per 1.0 full-time employee (FTE) was $125,761.93. This evaluation demonstrated the need for additional support for patients receiving oral chemotherapy within standard-of-care medical service. A comprehensive oral chemotherapy management referral service can optimize patient care delivery via early interventions for adverse drug events, drug interactions, and medication errors up to 3 months after initiation of treatment. Copyright © 2016 by the National Comprehensive Cancer Network.
Zipkin, Elise F; Grant, Evan H Campbell; Fagan, William F
2012-10-01
The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multispecies hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions about species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation data set. We found that wetland hydroperiod (the length of time that a wetland holds water), as well as the occurrence state in the prior year, were generally the most important factors in determining occupancy. The model with habitat-only covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. 
Our results demonstrate the utility of multispecies models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for detection biases.
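The ROC AUC used above to evaluate predictions has a simple rank-based (Mann-Whitney) form: the probability that a randomly chosen occupied site receives a higher predicted score than an unoccupied one. A sketch with illustrative occupancy scores, not the amphibian data:

```python
# ROC AUC via the Mann-Whitney formulation: count score comparisons between
# all (occupied, unoccupied) pairs, with ties counted as half a win.
def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                 # observed occupancy (toy)
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]     # predicted occupancy probability
auc = roc_auc(labels, scores)
```

In the Bayesian setting of the paper, this calculation is repeated for each posterior draw of predicted occupancy, yielding a posterior distribution of AUC values rather than a single number.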
Tang, Xianyan; Geater, Alan; McNeil, Edward; Deng, Qiuyun; Dong, Aihu; Zhong, Ge
2017-04-04
Outbreaks of measles re-emerged in Guangxi province during 2013-2014, where measles again became a major public health concern. A better understanding of the patterns of measles cases would help in identifying high-risk areas and periods for optimizing preventive strategies, yet these patterns remain largely unknown. Thus, this study aimed to determine the patterns of measles clusters in space, time and space-time at the county level over the period 2004-2014 in Guangxi. Annual data on measles cases and population sizes for each county were obtained from Guangxi CDC and Guangxi Bureau of Statistics, respectively. Epidemic curves and Kulldorff's temporal scan statistics were used to identify seasonal peaks and high-risk periods. Tango's flexible scan statistics were implemented to determine irregular spatial clusters. Spatio-temporal clusters in elliptical cylinder shapes were detected by Kulldorff's scan statistics. Population attributable risk percent (PAR%) of children aged ≤24 months was used to identify regions with a heavy burden of measles. Seasonal peaks occurred between April and June, and a temporal measles cluster was detected in 2014. Spatial clusters were identified in West, Southwest and North Central Guangxi. Three phases of spatio-temporal clusters with high relative risk were detected: Central Guangxi during 2004-2005, Midwest Guangxi in 2007, and West and Southwest Guangxi during 2013-2014. Regions with high PAR% were mainly clustered in West, Southwest, North and Central Guangxi. A temporal uptrend of measles incidence existed in Guangxi between 2010 and 2014, following a downtrend during 2004-2009. The hotspots shifted from Central to West and Southwest Guangxi, regions overburdened with measles. Thus, intensifying surveillance of the timeliness and completeness of routine vaccination and implementing supplementary immunization activities for measles should be prioritized in these regions.
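A purely temporal Kulldorff-style scan, as used above for the high-risk periods, can be sketched as a search over candidate time windows for the maximum Poisson log-likelihood ratio between observed and expected cases. The case counts below are invented, and a flat baseline stands in for the population-based expectations:

```python
import math

# Purely temporal scan statistic (Kulldorff-style) on annual case counts.
cases = [12, 10, 11, 13, 40, 45, 12, 9]      # toy annual counts
C = sum(cases)
expected = [C / len(cases)] * len(cases)      # flat baseline for simplicity

def llr(c, e, C):
    """Poisson log-likelihood ratio for a window with c observed, e expected."""
    if c <= e or c >= C:                      # only excess-risk proper windows
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))

# Scan all contiguous windows and keep the highest-scoring one.
best = None
for i in range(len(cases)):
    for j in range(i, len(cases)):
        c = sum(cases[i:j + 1])
        e = sum(expected[i:j + 1])
        score = llr(c, e, C)
        if best is None or score > best[0]:
            best = (score, i, j)
```

The real method then assesses the significance of the best window by Monte Carlo replication under the null, which this sketch omits.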
Xiao, Feng; Kong, Lingjiang; Chen, Jian
2017-06-01
A rapid-search algorithm to improve the beam-steering efficiency of a liquid crystal optical phased array was proposed and experimentally demonstrated in this paper. The proposed algorithm, in which the value of steering efficiency is taken as the objective function and the controlling voltage codes are considered as the optimization variables, consists of a detection stage and a construction stage. It optimizes the steering efficiency in the detection stage and adjusts its search direction adaptively in the construction stage to avoid getting caught in a wrong search space. Simulations were conducted to compare the proposed algorithm with the widely used pattern-search algorithm using criteria of convergence rate and optimized efficiency. Beam-steering optimization experiments were performed to verify the validity of the proposed method.
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The nonlinear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% for the optimized NEO threshold versus 93.5% for the traditional NEO threshold.
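The NEO itself is simple to state: ψ[n] = x[n]² − x[n−1]·x[n+1], followed by thresholding. The sketch below uses the common scaled-mean threshold C·mean(ψ) on a synthetic signal; it is the conventional heuristic the paper improves on, not the paper's gradient-optimized threshold:

```python
import numpy as np

# NEO spike detection with a scaled-mean threshold on a synthetic trace.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.1, 1000)    # background noise
x[400] += 2.0                     # one injected spike

# psi[n] = x[n]^2 - x[n-1] * x[n+1]  (endpoints left at zero)
psi = np.zeros_like(x)
psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]

# Conventional threshold: C * mean(psi); C is the tunable constant whose
# approximation the paper's method replaces with an optimized value.
threshold = 8.0 * psi.mean()
spikes = np.flatnonzero(psi > threshold)
```

Because ψ responds to both amplitude and frequency, the injected spike stands far above the noise floor of ψ even at modest SNR; the hard part, addressed by the paper, is choosing C without knowing the SNR.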
NASA Astrophysics Data System (ADS)
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limited spatial resolution of imaging sensors and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels, ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. First, the fraction value of each class is obtained by spectral unmixing. Second, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated in experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.
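A minimal binary PSO for searching over feature masks might look like the sketch below; the sigmoid transfer function, coefficients, and toy fitness are illustrative textbook choices, not the study's settings, and RA-PSO's asynchronous update is not shown.

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=10, iters=50, seed=0):
    # Each particle is a bit mask selecting a feature subset.
    rng = random.Random(seed)
    pos = [[rng.randrange(2) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                # Sigmoid transfer turns the velocity into a probability
                # that the bit is set on this iteration.
                pos[i][d] = 1 if rng.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

With a toy fitness that rewards three "useful" features and penalizes the rest, the swarm converges toward the informative subset; in the study the fitness would instead be a classifier's cross-validated detection rate.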
Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng
2015-01-01
Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method that combines thresholding with an edge-based active contour method is proposed to optimize cell boundary detection. To segment clustered cells, the peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. The working principles of the algorithms are described, and the influence of the cell boundary detection parameters and of the selected threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments and its performance is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation of clustered cells. PMID:26066315
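The global-thresholding step can be illustrated with Otsu's classic between-class-variance criterion (the paper combines thresholding with active contours; this standalone sketch covers only the threshold selection on an intensity histogram).

```python
def otsu_threshold(hist):
    # hist[i] = number of pixels with intensity i. Pick the threshold t
    # that maximizes the between-class variance of background (<= t)
    # and foreground (> t).
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w_bg += h
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * h
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

On a cleanly bimodal histogram the returned threshold separates the two modes; in practice the active contour step would then refine the boundary this binarization produces.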
Malmstrom, Carolyn M; Butterfield, H Scott; Planck, Laura; Long, Christopher W; Eviner, Valerie T
2017-01-01
Invasive weeds threaten the biodiversity and forage productivity of grasslands worldwide. However, management of these weeds is constrained by the practical difficulty of detecting small-scale infestations across large landscapes and by limits in understanding of landscape-scale invasion dynamics, including mechanisms that enable patches to expand, contract, or remain stable. While high-end hyperspectral remote sensing systems can effectively map vegetation cover, these systems are currently too costly and limited in availability for most land managers. We demonstrate application of a more accessible and cost-effective remote sensing approach, based on simple aerial imagery, for quantifying weed cover dynamics over time. In California annual grasslands, the target communities of interest include invasive weedy grasses (Aegilops triuncialis and Elymus caput-medusae) and desirable forage grass species (primarily Avena spp. and Bromus spp.). Detecting invasion of annual grasses into an annual-dominated community is particularly challenging, but we were able to consistently characterize these two communities based on their phenological differences in peak growth and senescence using maximum likelihood supervised classification of imagery acquired twice per year (in mid- and end-of season). This approach permitted us to map weed-dominated cover at a 1-m scale (correctly detecting 93% of weed patches across the landscape) and to evaluate weed cover change over time. We found that weed cover was more pervasive and persistent in management units that had no significant grazing for several years than in those that were grazed, whereas forage cover was more abundant and stable in the grazed units. This application demonstrates the power of this method for assessing fine-scale vegetation transitions across heterogeneous landscapes. It thus provides means for small-scale early detection of invasive species and for testing fundamental questions about landscape dynamics.
NASA Astrophysics Data System (ADS)
Sa'adi, Zulfaqar; Shahid, Shamsuddin; Ismail, Tarmizi; Chung, Eun-Sung; Wang, Xiao-Jun
2017-11-01
This study assesses the spatial pattern of changes in the rainfall extremes of Sarawak in recent years (1980-2014). The Mann-Kendall (MK) test, along with the modified Mann-Kendall (m-MK) test, which can discriminate multi-scale variability of a unidirectional trend, was used to analyze the changes at 31 stations. By accounting for the scaling effect through eliminating the effect of autocorrelation, the m-MK test serves as a check on the significance of the MK test results. The annual rainfall trend from the MK test showed significant changes at the 95% confidence level at five stations. The seasonal trends from the MK test indicate an increasing rate of rainfall during the Northeast monsoon and a decreasing trend during the Southwest monsoon in some regions of Sarawak. However, the m-MK test detected an increasing trend in annual rainfall at only one station and no significant trend in seasonal rainfall at any station. The significant increasing trends in 1-h maximum rainfall from the MK test are detected mainly at stations located in urban areas, raising concern about flash floods. On the other hand, the m-MK test detected no significant trend in 1- and 3-h maximum rainfalls at any location. In contrast, it detected significant trends in 6- and 72-h maximum rainfalls at a station located in the Lower Rajang basin, an extensive low-lying agricultural area prone to stagnant flooding. These results indicate that the trends in rainfall and rainfall extremes reported in Malaysia and the surrounding region should be verified with the m-MK test, as most of the trends may result from the scaling effect.
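The MK test statistic used here can be sketched as follows; this is the basic no-ties form, without the autocorrelation (scaling) correction that distinguishes the modified m-MK test.

```python
import math

def mann_kendall(x):
    # S counts concordant minus discordant pairs; under the null of no
    # trend, S is approximately normal with the no-ties variance below.
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    return s, z
```

A monotonically increasing series of length 20 gives S = 190 (every pair concordant) and |Z| well beyond the 1.96 threshold for 95% confidence; the m-MK variant would additionally inflate the variance when the series is autocorrelated.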
Trend detection in river flow indices in Poland
NASA Astrophysics Data System (ADS)
Piniewski, Mikołaj; Marcinkowski, Paweł; Kundzewicz, Zbigniew W.
2018-02-01
The issue of trend detection in long time series of river flow records is of vast theoretical interest and considerable practical relevance. Water management is based on the assumption of stationarity; hence, it is crucial to check whether this assumption is justified. The objective of this study is to analyse long-term trends in selected river flow indices in small- and medium-sized catchments with relatively unmodified flow regimes (semi-natural catchments) in Poland. The examined indices describe annual and seasonal average conditions as well as annual extreme conditions: low and high flows. The special focus is on the spatial analysis of trends, carried out on a comprehensive, representative data set of flow gauges. The present paper is timely, as no spatially comprehensive studies (i.e. covering all of Poland or its large parts) on trend detection in time series of river flow have been done in the last 15 years or so. The results suggest that there is a strong random component in the river flow process, the changes are weak and the spatial pattern is complex. Yet, the results of trend detection in different indices of river flow in Poland show that there exists a spatial divide that seems to hold quite generally for various indices (annual, seasonal, as well as low and high flow). Decreases of river flow dominate in the northern part of the country and increases usually in the southern part. Stations in the central part show mostly 'no trend' results. However, the spatial gradient is apparent only for the data for the period 1981-2016 rather than for 1956-2016. It also seems that the magnitude of the increases in river flow is generally lower than that of the decreases.
[Geographical distribution of the serum creatinine reference values of healthy adults].
Wei, De-Zhi; Ge, Miao; Wang, Cong-Xia; Lin, Qian-Yi; Li, Meng-Jiao; Li, Peng
2016-11-20
To explore the relationship between serum creatinine (Scr) reference values in healthy adults and geographic factors, and to provide evidence for establishing Scr reference values in different regions. We collected 29 697 Scr reference values from healthy adults measured by 347 medical facilities from 23 provinces, 4 municipalities and 5 autonomous regions. We chose 23 geographical factors and analyzed their correlation with Scr reference values to identify the factors correlated significantly with them. Using principal component analysis and ridge regression analysis, two predictive models were constructed, and the optimal model was chosen by comparing how well each model's predictions fit the measured results. The distribution map of Scr reference values was drawn using the Kriging interpolation method. Seven geographic factors, including latitude, annual sunshine duration, annual average temperature, annual average relative humidity, annual precipitation, annual temperature range and topsoil (silt) cation exchange capacity, were found to correlate significantly with Scr reference values. The overall distribution of Scr reference values featured a pattern of high values in the south and low values in the north, varying consistently with latitude. The geographic factor data for a given region allow prediction of the Scr values of healthy adults there. Analysis of these geographical factors can facilitate the determination of region-specific reference values to improve the accuracy of clinical diagnoses.
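The ridge-regression step can be illustrated in one dimension, where it has a closed form; the data and regularization strength below are made up for illustration (the study's models use many correlated geographic predictors, which is precisely where ridge shrinkage helps).

```python
def ridge_1d(xs, ys, lam):
    # One-predictor ridge regression on centered data:
    # slope = sum(xc*yc) / (sum(xc^2) + lambda); lambda > 0 shrinks the slope
    # toward zero, trading a little bias for lower variance.
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    xc = [x - mx for x in xs]
    yc = [y - my for y in ys]
    sxy = sum(a * b for a, b in zip(xc, yc))
    sxx = sum(a * a for a in xc)
    slope = sxy / (sxx + lam)
    return slope, my - slope * mx
```

With lambda = 0 this reduces to ordinary least squares; increasing lambda shrinks the fitted slope, which stabilizes coefficient estimates when predictors are collinear.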
Are annual layers preserved in NorthGRIP Eemian ice?
NASA Astrophysics Data System (ADS)
Kettner, E.; Bigler, M.; Nielsen, M. E.; Steffensen, J. P.; Svensson, A.
2009-04-01
A newly developed setup for continuous flow analysis (CFA) of ice cores in Copenhagen is optimized for high resolution analysis of four components: soluble sodium (mainly deriving from sea salt), soluble ammonium (related to biological processes and biomass burning events), insoluble dust particles (mainly transported from Asian deserts to Greenland), and the electrolytic melt water conductivity (which is a bulk signal for all ionic constituents). Furthermore, we are for the first time implementing a flow cytometer to obtain high quality dust concentration and size distribution profiles based on individual dust particle measurements. Preliminary measurements show that the setup is able to resolve annual layers of 1 cm thickness. Ice flow models predict that annual layers in the Eemian section of the Greenland NorthGRIP ice core (130-115 ka BP) have a thickness of around 1 cm. However, the visual stratigraphy of the ice core indicates that the annual layering in the Eemian section may be disturbed by micro folds and rapid crystal growth. In this case study we will measure the impurity content of an Eemian segment of the NorthGRIP ice core with the new CFA setup. This will allow for a comparison to well-known impurity levels of the Holocene in both Greenland and Antarctic ice, and we will attempt to determine if annual layers are still present in the ice.
Sakandé, Jean; Nikièma, Abdoulaye; Kabré, Elie; Sawadogo, Charles; Nacoulma, Eric W; Sanou, Mamadou; Sangaré, Lassana; Traoré-Ouédraogo, Rasmata; Sawadogo, Mamadou; Gershy-Damet, Guy Michel
2014-02-01
The National External Quality Assessment (NEQA) program of Burkina Faso is a proficiency testing program mandatory for all laboratories in the country since 2006. The program runs two cycles per year and covers all areas of laboratories. All panels were validated by the expert committee before dispatch under optimal storage and transport conditions to participating laboratories along with report forms. Performance in the last 5 years varied by panel, with average annual performance of bacteriology panels for all laboratories rising from 75% in 2006 to 81% in 2010 and with a best average performance of 87% in 2007 and 2008. During the same period, malaria microscopy performance varied from 85% to 94%, with a best average performance of 94% in 2010; chemistry performance increased from 87% to 94%, with a best average annual performance of 97% in 2009. Hematology showed more variation in performance, ranging from 61% to 86%, with a best annual average performance of 90% in 2008. Average annual performance for immunology varied less between 2006 and 2010, recording 97%, 90%, and 95%. Except for malaria microscopy, annual performances for enrolled panels varied substantially from year to year, indicating some difficulty in maintaining consistency in quality. The main challenges of the NEQA program observed between 2006 to 2010 were funding, sourcing, and safe transportation of quality panels to all laboratories countrywide.
Consedine, Nathan S
2012-08-01
Disparities in breast screening are well documented. Less clear are differences within groups of immigrant and non-immigrant minority women or differences in adherence to mammography guidelines over time. A sample of 1,364 immigrant and non-immigrant women (African American, English Caribbean, Haitian, Dominican, Eastern European, and European American) was recruited using a stratified cluster-sampling plan. In addition to measuring established predictors of screening, women reported mammography frequency in the last 10 years and were (per ACS guidelines at the time) categorized as never, sub-optimal (<1 screen/year), or adherent (1+ screens/year) screeners. Multinomial logistic regression showed that while ethnicity infrequently predicted the never versus sub-optimal comparison, English Caribbean, Haitian, and Eastern European women were less likely to screen systematically over time. Demographics did not predict the never versus sub-optimal distinction; only regular physician, annual exam, physician recommendation, and cancer worry showed effects. However, the adherent categorization was predicted by demographics, was less likely among women without insurance, a regular physician, or an annual exam, and more likely among women reporting certain patterns of emotion (low embarrassment and greater worry). Because regular screening is crucial to breast health, there is a clear need to consider patterns of screening among immigrant and non-immigrant women, as well as whether the variables predicting the initiation of screening are distinct from those predicting systematic screening over time.
Control of Smart Building Using Advanced SCADA
NASA Astrophysics Data System (ADS)
Samuel, Vivin Thomas
Complete control of a building requires a proper SCADA implementation and an optimization strategy. For better communication and efficiency, a proper channel between the communication protocol and SCADA has to be designed. This paper concentrates on the interface between the communication protocol and the SCADA implementation, from which an optimization and energy-savings strategy is derived for large-scale industrial buildings. The communication channel is used to control the building remotely from a distant location. For an efficient result, the temperature values and the power ratings of the equipment are monitored, and threshold values on these quantities are set for the fault detection and diagnostics (FDD) implementation. Building management systems have become vital for maintaining any building and for safety purposes. Smart buildings encompass various distinct features, including complete automation systems, office building controls, and data center controls. ELCs are used to communicate the load values of the building to a remote server over an Ethernet communication channel. Based on demand fluctuation and peak voltage, the loads operate differently, increasing the consumption rate and thus the annual consumption bill. Saving energy and reducing the consumption bill are essential for long, reliable building operation. The equipment is monitored regularly and the optimization strategy is implemented for cost reduction in the automation system. This results in lower annual costs and longer load lifetimes.
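The threshold-based FDD step described above can be sketched as a simple range check over monitored points; the tag names and limit bands below are hypothetical, not taken from the paper.

```python
def fdd_check(readings, limits):
    # Compare each monitored value (temperature, power, ...) against its
    # configured (low, high) band and raise an alarm for out-of-band tags.
    alarms = []
    for tag, value in readings.items():
        low, high = limits[tag]
        if not (low <= value <= high):
            alarms.append((tag, value))
    return alarms
```

In a SCADA loop this check would run on each polling cycle, with alarms forwarded to the remote server over the Ethernet channel.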
Cost avoidance associated with optimal stroke care in Canada.
Krueger, Hans; Lindsay, Patrice; Cote, Robert; Kapral, Moira K; Kaczorowski, Janusz; Hill, Michael D
2012-08-01
Evidence-based stroke care has been shown to improve patient outcomes and may reduce health system costs. Cost savings, however, are poorly quantified. This study assesses 4 aspects of stroke management (rapid assessment and treatment services, thrombolytic therapy, organized stroke units, and early home-supported discharge) and estimates the potential for cost avoidance in Canada if these services were provided in a comprehensive fashion. Several independent data sources, including the Canadian Institute of Health Information Discharge Abstract Database, the 2008-2009 National Stroke Audit, and the Acute Cerebrovascular Syndrome Registry in the province of British Columbia, were used to assess the current status of stroke care in Canada. Evidence from the literature was used to estimate the effect of providing optimal stroke care on rates of acute care hospitalization, length of stay in hospital, discharge disposition (including death), changes in quality of life, and costs avoided. Comprehensive and optimal stroke care in Canada would decrease the number of annual hospital episodes by 1062 (3.3%), the number of acute care days by 166 000 (25.9%), and the number of residential care days by 573 000 (12.8%). The number of deaths in the hospital would be reduced by 1061 (14.9%). Total avoidance of costs was estimated at $682 million annually ($307.4 million in direct costs, $374.3 million in indirect costs). The costs of stroke care in Canada can be substantially reduced, at the same time as improving patient outcomes, with the greater use of known effective treatment modalities.
Proceedings of the 3rd Annual Albert Institute for Bladder Cancer Research Symposium.
Flaig, Thomas W; Kamat, Ashish M; Hansel, Donna; Ingersoll, Molly A; Barton Grossman, H; Mendelsohn, Cathy; DeGraff, David; Liao, Joseph C; Taylor, John A
2017-07-27
The Third Annual Albert Institute Bladder Symposium was held on September 8-10th, 2016, in Denver, Colorado. Participants discussed several critical topics in the field of bladder cancer: 1) Best practices for tissue analysis and use to optimize correlative studies, 2) Modeling bladder cancer to facilitate understanding and innovation, 3) Targeted therapies for bladder cancer, 4) Tumor phylogeny in bladder cancer, 5) New innovations in bladder cancer diagnostics. Our understanding of and approach to treating urothelial carcinoma is undergoing rapid advancement. Preclinical models of bladder cancer have been leveraged to increase our basic and mechanistic understanding of the disease. With the approval of immune checkpoint inhibitors for the treatment of advanced urothelial carcinoma, the treatment approach for these patients has quickly changed. In this light, molecularly-defined subtypes of bladder cancer and appropriate pre-clinical models are now essential to the further advancement and appropriate application of these therapeutic improvements. The optimal collection and processing of clinical urothelial carcinoma tissue samples will also be critical in the development of predictive biomarkers for therapeutic selection. Technological advances in other areas, including optimal imaging technologies and micro/nanotechnologies, are being applied to bladder cancer, especially in the localized setting, and hold the potential for translational impact in the treatment of bladder cancer patients. Taken together, advances in several basic science and clinical areas are now converging in bladder cancer. These developments hold the promise of shaping and improving the clinical care of those with the disease.
Annualized TASAR Benefit Estimate for Alaska Airlines Operations
NASA Technical Reports Server (NTRS)
Henderson, Jeffrey
2015-01-01
The Traffic Aware Strategic Aircrew Request (TASAR) concept offers onboard automation for the purpose of advising the pilot of traffic compatible trajectory changes that would be beneficial to the flight. A fast-time simulation study was conducted to assess the benefits of TASAR to Alaska Airlines. The simulation compares historical trajectories without TASAR to trajectories developed with TASAR and evaluated by controllers against their objectives. It was estimated that between 8,000 and 12,000 gallons of fuel and 900 to 1,300 minutes could be saved annually per aircraft. These savings were applied fleet-wide to produce an estimated annual cost savings to Alaska Airlines in excess of $5 million due to fuel, maintenance, and depreciation cost savings. Switching to a more wind-optimal trajectory was found to be the use case that generated the highest benefits out of the three TASAR use cases analyzed. Alaska TASAR requests peaked at four to eight requests per hour in high-altitude Seattle center sectors south of Seattle-Tacoma airport.
Annualized TASAR Benefit Estimate for Virgin America Operations
NASA Technical Reports Server (NTRS)
Henderson, Jeffrey
2015-01-01
The Traffic Aware Strategic Aircrew Request (TASAR) concept offers onboard automation for the purpose of advising the pilot of traffic compatible trajectory changes that would be beneficial to the flight. A fast-time simulation study was conducted to assess the benefits of TASAR to Virgin America. The simulation compares historical trajectories without TASAR to trajectories developed with TASAR and evaluated by controllers against their objectives. It was estimated that about 25,000 gallons of fuel and about 2,500 minutes could be saved annually per aircraft. These savings were applied fleet-wide to produce an estimated annual cost savings to Virgin America in excess of $5 million due to fuel, maintenance, and depreciation cost savings. Switching to a more wind-optimal trajectory was found to be the use case that generated the highest benefits out of the three TASAR use cases analyzed. Virgin America TASAR requests peaked at two to four requests per hour per sector in high-altitude Oakland and Salt Lake City center sectors east of San Francisco.
NASA Technical Reports Server (NTRS)
Andrews, J.
1977-01-01
An optimal decision model of crop production, trade, and storage was developed for use in estimating the economic consequences of improved forecasts and estimates of worldwide crop production. The model extends earlier distribution benefits models to include production effects as well. Application to improved information systems meeting the goals set in the Large Area Crop Inventory Experiment (LACIE) indicates annual benefits to the United States of $200 to $250 million for wheat, $50 to $100 million for corn, and $6 to $11 million for soybeans, using conservative assumptions on expected LANDSAT system performance.
ISSN Exercise & Sport Nutrition Review: Research & Recommendations
Kreider, Richard B; Almada, Anthony L; Antonio, Jose; Broeder, Craig; Earnest, Conrad; Greenwood, Mike; Incledon, Thomas; Kalman, Douglas S; Kleiner, Susan M; Leutholtz, Brian; Lowery, Lonnie M; Mendel, Ron; Stout, Jeffrey R; Willoughby, Darryn S; Ziegenfuss, Tim N
2004-01-01
Sport nutrition is a constantly evolving field with literally thousands of research papers published annually. For this reason, keeping up to date with the literature is often difficult. This paper presents a well-referenced overview of the current state of the science related to how to optimize training through nutrition. More specifically, this article discusses: 1.) how to evaluate the scientific merit of nutritional supplements; 2.) general nutritional strategies to optimize performance and enhance recovery; and, 3.) our current understanding of the available science behind weight gain, weight loss, and performance enhancement supplements. Our hope is that ISSN members find this review useful in their daily practice and consultation with their clients.
[PRIORITY TECHNOLOGIES OF THE MEDICAL WASTE DISPOSAL SYSTEM].
Samutin, N M; Butorina, N N; Starodubova, N Yu; Korneychuk, S S; Ustinov, A K
2015-01-01
The annual production of waste in health care institutions (HCIs) tends to increase with the growth of health care provision for the population. Among the many criteria for selecting optimal HCI waste treatment technologies, it is important to ensure the epidemiological and chemical safety of the final products. An environmentally friendly method of thermal disinfection of medical waste is the use of medical waste sterilizers intended for hospitals, medical centers, laboratories and other health care facilities that process small and medium volumes of all types of Class B and C waste. The most suitable method for centralized disposal of medical waste is thermal processing of the collected material.
Immunogold Nanoparticles for Rapid Plasmonic Detection of C. sakazakii.
Aly, Mohamed A; Domig, Konrad J; Kneifel, Wolfgang; Reimhult, Erik
2018-06-25
Cronobacter sakazakii is a foodborne pathogen that can cause rare but life-threatening septicemia, meningitis, and necrotizing enterocolitis in infants. In general, standard methods for pathogen detection rely on culture, plating, colony counting and polymerase chain reaction DNA-sequencing for identification, which are time-, equipment- and skill-demanding. Recently, nanoparticle- and surface-based immunoassays have increasingly been explored for pathogen detection. We investigate the functionalization of gold nanoparticles optimized for irreversible and specific binding to C. sakazakii and their use for spectroscopic detection of the pathogen. We demonstrate how 40-nm gold nanoparticles grafted with a poly(ethylene glycol) brush and functionalized with polyclonal antibodies raised against C. sakazakii can be used to specifically target C. sakazakii. The strong extinction peak of the Au nanoparticle plasmon polariton resonance in the optical range is used as a label for detection of the pathogens. Individual binding of the nanoparticles to the C. sakazakii surface is also verified by transmission electron microscopy. We show that a high degree of surface functionalization with anti-C. sakazakii antibodies optimizes detection and leads to a detection limit as low as 10 CFU/mL within 2 h, using a simple cuvette-based UV-Vis spectrometric readout that has great potential for further optimization.
Zhou, Fuqiang; Su, Zhen; Chai, Xinghua; Chen, Lipeng
2014-01-01
This paper proposes a new method to detect and identify foreign matter mixed in a plastic bottle filled with transfusion solution. A spin-stop mechanism and mixed illumination style are applied to obtain high contrast images between moving foreign matter and a static transfusion background. The Gaussian mixture model is used to model the complex background of the transfusion image and to extract moving objects. A set of features of moving objects are extracted and selected by the ReliefF algorithm, and optimal feature vectors are fed into the back propagation (BP) neural network to distinguish between foreign matter and bubbles. The mind evolutionary algorithm (MEA) is applied to optimize the connection weights and thresholds of the BP neural network to obtain a higher classification accuracy and faster convergence rate. Experimental results show that the proposed method can effectively detect visible foreign matter in 250-mL transfusion bottles. The misdetection rate and false alarm rate are low, and the detection accuracy and detection speed are satisfactory. PMID:25347581
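The background-modeling step can be illustrated with a single-Gaussian, per-pixel simplification of the Gaussian mixture model used in the paper, shown here on 1-D "frames" for brevity; the learning rate and k-sigma foreground test are illustrative choices.

```python
class RunningGaussianBG:
    """Per-pixel running Gaussian background model (1-component sketch)."""

    def __init__(self, alpha=0.05, k=3.0):
        self.alpha, self.k = alpha, k   # learning rate, sigma multiplier
        self.mean = None
        self.var = None

    def apply(self, frame):
        if self.mean is None:
            # First frame initializes the background model.
            self.mean = [float(x) for x in frame]
            self.var = [25.0] * len(frame)
            return [0] * len(frame)
        mask = []
        for i, x in enumerate(frame):
            d = x - self.mean[i]
            foreground = d * d > self.k * self.k * self.var[i]
            mask.append(1 if foreground else 0)
            if not foreground:
                # Update the model only with pixels judged to be background,
                # so moving objects do not get absorbed into it.
                self.mean[i] += self.alpha * d
                self.var[i] = (1 - self.alpha) * self.var[i] + self.alpha * d * d
        return mask
```

After a few static frames the model settles, and a pixel that jumps far from its learned mean is flagged as a moving object; the full GMM keeps several Gaussians per pixel to handle illumination changes and bubbles.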
A further contribution to the seasonal variation of weighted mean temperature
NASA Astrophysics Data System (ADS)
Ding, Maohua; Hu, Wusheng
2017-12-01
The weighted mean temperature Tm is a variable parameter in Global Navigation Satellite System (GNSS) meteorology and in the Askne-Nordius zenith wet delay (ZWD) model. Several parameters of the Tm seasonal variation (e.g. the annual mean value, the annual range, the annual and semi-annual amplitudes, and the long-term trend) have been discussed before. In this study, additional results on the Tm seasonal variation at a global scale were obtained using Tm time series at 309 global radiosonde sites. Periodic signals of annual and semi-annual variation were detected in these Tm time series using the Lomb-Scargle periodogram. The annual variation is the main component of the periodic Tm in non-tropical regions, while either the annual or the semi-annual variation can be the main component in the tropics. The mean annual Tm remains nearly constant with increasing latitude in the tropics, while it decreases with increasing latitude in non-tropical regions. From a global perspective, Tm has an increasing trend of 0.22 K/decade on average, which may be caused by global warming. The annual phase generally falls in about January for the non-tropical regions of the Southern Hemisphere and in about July for the non-tropical regions of the Northern Hemisphere, but it shows no clear symmetry in the tropics. Unlike the annual phase, the geographical distribution of the semi-annual phase follows no obvious rules. In non-tropical regions, the maximum and minimum Tm of the seasonal model usually occur on summer and winter days, respectively, whereas in tropical regions they are distributed over the whole year rather than in fixed seasons. The seasonal model errors increase with increasing annual amplitude. A primary reason for the irregular seasonal variation in the tropics is that Tm has rather small variations in this region.
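The Lomb-Scargle periodogram used to detect the annual and semi-annual signals is well suited to unevenly sampled series such as radiosonde records. The sketch below is a minimal pure-Python version of the classical normalized periodogram; it is a generic textbook implementation, not the authors' code, and variable names are illustrative.

```python
import math

def lomb_scargle(t, y, freqs):
    """Classic normalized Lomb-Scargle periodogram.

    t, y  : sample times and values (possibly unevenly spaced)
    freqs : ordinary frequencies (cycles per unit time) to evaluate
    """
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)
    yc = [v - mean for v in y]
    powers = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # time offset tau makes the estimate invariant to time shifts
        s2 = sum(math.sin(2 * w * ti) for ti in t)
        c2 = sum(math.cos(2 * w * ti) for ti in t)
        tau = math.atan2(s2, c2) / (2 * w)
        cs = [math.cos(w * (ti - tau)) for ti in t]
        sn = [math.sin(w * (ti - tau)) for ti in t]
        ct = sum(yi * ci for yi, ci in zip(yc, cs))
        st = sum(yi * si for yi, si in zip(yc, sn))
        cc = sum(ci * ci for ci in cs)
        ss = sum(si * si for si in sn)
        powers.append(0.5 * (ct * ct / cc + st * st / ss) / var)
    return powers
```

For a monthly-resolution Tm series, a peak near a frequency of 1/12 month⁻¹ indicates the annual cycle and a peak near 1/6 month⁻¹ the semi-annual cycle.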
Optimal frequency domain textural edge detection filter
NASA Technical Reports Server (NTRS)
Townsend, J. K.; Shanmugan, K. S.; Frost, V. S.
1985-01-01
An optimal frequency domain textural edge detection filter is developed and its performance evaluated. For the given model and filter bandwidth, the filter maximizes the amount of output image energy placed within a specified resolution interval centered on the textural edge. Filter derivation is based on relating textural edge detection to tonal edge detection via the complex low-pass equivalent representation of narrowband bandpass signals and systems. The filter is specified in terms of the prolate spheroidal wave functions translated in frequency. Performance is evaluated using the asymptotic approximation version of the filter. This evaluation demonstrates satisfactory filter performance for ideal and nonideal textures. In addition, the filter can be adjusted to detect textural edges in noisy images at the expense of edge resolution.
Levy, Matthew E; Phillips, Gregory; Magnus, Manya; Kuo, Irene; Beauchamp, Geetha; Emel, Lynda; Hucks-Ortiz, Christopher; Hamilton, Erica L; Wilton, Leo; Chen, Iris; Mannheimer, Sharon; Tieu, Hong-Van; Scott, Hyman; Fields, Sheldon D; Del Rio, Carlos; Shoptaw, Steven; Mayer, Kenneth
2017-10-01
Little is known about HIV treatment optimism and risk behaviors among Black men who have sex with men (BMSM). Using longitudinal data from BMSM in the HPTN 061 study, we examined participants' self-reported comfort with having condomless sex due to optimistic beliefs regarding HIV treatment. We assessed correlates of treatment optimism and its association with subsequent risk behaviors for HIV acquisition or transmission using multivariable logistic regression with generalized estimating equations. Independent correlates of treatment optimism included age ≥35 years, annual household income <$20,000, depressive symptoms, high HIV conspiracy beliefs, problematic alcohol use, and previous HIV diagnosis. Treatment optimism was independently associated with subsequent condomless anal sex with a male partner of serodiscordant/unknown HIV status among HIV-infected men, but this association was not statistically significant among HIV-uninfected men. HIV providers should engage men in counseling conversations to assess and minimize willingness to have condomless sex that is rooted in optimistic treatment beliefs without knowledge of viral suppression.
Hughes, Ruth C E; Florkowski, Chris; Gullam, Joanna E
2017-11-17
Recent New Zealand guidelines recommend annual glycated haemoglobin (HbA1c) measurements from three months postpartum, replacing the glucose tolerance test (GTT) at six weeks, to screen for persistent hyperglycaemia following gestational diabetes. Data suggest that this screening approach may miss cases of type 2 diabetes, but are they detected at subsequent screening and will screening rates improve? Our aim was to evaluate the effectiveness of HbA1c monitoring in improving screening rates following gestational diabetes and in detecting postpartum hyperglycaemia. During 2015 in Christchurch, all women with gestational diabetes were offered HbA1c and GTT measurements at three months postpartum and subsequent annual HbA1c measurements were recommended. Data from electronic hospital records were collected for a minimum 18 months postpartum. Of the cohort of 333 women, 218 (65%) completed both HbA1c and GTT at three months postpartum, 74 (22%) HbA1c only, 16 (5%) GTT only, 25 (8%) no screening; 184 (55%) had subsequent HbA1c tests. Diabetes was detected by GTT in five (2%) women and by HbA1c in only one out of five (20%); the disagreement between tests resolved in three out of four (75%) women with subsequent testing. Prediabetes was detected by GTT in 30 (14%) women; however, HbA1c only detected five out of 30 (17%) and subsequent HbA1c testing identified a further two out of 30 with prediabetes. HbA1c measurement at three months postpartum had a good uptake. However, most cases of diabetes were identified by subsequent HbA1c testing, the uptake of which was suboptimal. The importance of annual HbA1c monitoring following gestational diabetes needs greater emphasis. © 2017 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
NASA Astrophysics Data System (ADS)
Suhaila, Jamaludin; Yusop, Zulkifli
2017-06-01
Most trend analyses conducted to date have not considered the existence of a change point in the time series. When a change point is present, a trend analysis may fail to detect an obvious increasing or decreasing trend over certain parts of the series. Furthermore, the possible factors influencing a decreasing or increasing trend are seldom discussed and need to be addressed in any trend analysis. Hence, this study investigates trends and change points in the mean, maximum and minimum temperature series, both annually and seasonally, in Peninsular Malaysia, and determines the possible factors that could contribute to the significant trends. The Pettitt and sequential Mann-Kendall (SQ-MK) tests were used to examine the occurrence of any abrupt climate changes in the independent series. The analyses of abrupt changes in the temperature series suggested that most of the change points in Peninsular Malaysia were detected during the years 1996, 1997 and 1998. These change points captured by the Pettitt and SQ-MK tests are possibly related to climatic factors, such as El Niño and La Niña events. The findings also showed that the majority of the significant change points in the series are related to significant trends at the stations. Significant increasing trends of annual and seasonal mean, maximum and minimum temperatures in Peninsular Malaysia were found, with a range of 2-5 °C/100 years during the last 32 years. The magnitudes of the increasing trends in minimum temperature were larger than those in maximum temperature at most of the studied stations, particularly the urban stations. These increases are suspected to be linked to the urban heat island effect in addition to El Niño events.
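The Mann-Kendall statistic that underlies the sequential test, together with Sen's slope estimator for the trend magnitude, can be sketched directly from the standard formulas (these are the textbook definitions, not the authors' code; the tie correction to the variance is omitted for brevity).

```python
import math

def mann_kendall(x):
    """Mann-Kendall S statistic, normal-approximation Z, and Sen's slope."""
    n = len(x)
    s = 0
    slopes = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = x[j] - x[i]
            s += (d > 0) - (d < 0)          # sign of each pairwise difference
            slopes.append(d / (j - i))       # pairwise slope for Sen's estimator
    # variance of S ignoring ties (adequate for continuous data)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    slopes.sort()
    m = len(slopes)
    sen = slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])
    return s, z, sen
```

A |Z| above 1.96 indicates a trend significant at the 5% level; the sequential variant applies this statistic progressively forward and backward through the series, with crossings of the two curves flagging candidate change points.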
NASA Astrophysics Data System (ADS)
Plant, Joshua N.; Johnson, Kenneth S.; Sakamoto, Carole M.; Jannasch, Hans W.; Coletti, Luke J.; Riser, Stephen C.; Swift, Dana D.
2016-06-01
Six profiling floats equipped with nitrate and oxygen sensors were deployed at Ocean Station P in the Gulf of Alaska. The resulting six calendar years and 10 float years of nitrate and oxygen data were used to determine an average annual cycle for net community production (NCP) in the top 35 m of the water column. NCP became positive in February as soon as the mixing activity in the surface layer began to weaken, but nearly 3 months before the traditionally defined mixed layer began to shoal from its wintertime maximum. NCP displayed two maxima, one toward the end of May and another in August, with a summertime minimum in June corresponding to the historical peak in mesozooplankton biomass. The average annual NCP was determined to be 1.5 ± 0.6 mol C m-2 yr-1 using nitrate and 1.5 ± 0.7 mol C m-2 yr-1 using oxygen. The results from oxygen data proved to be quite sensitive to the gas exchange model used as well as the accuracy of the oxygen measurement. Gas exchange models optimized for carbon dioxide flux generally ignore transport due to gas exchange through the injection of bubbles, and these models yield NCP values that are two to three times higher than the nitrate-based estimates. If nitrate and oxygen NCP rates are assumed to be related by the Redfield model, we show that the oxygen gas exchange model can be optimized by tuning the exchange terms to reproduce the nitrate NCP annual cycle.
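The Redfield conversion that links a nitrate drawdown to carbon-based NCP is simple enough to show directly. The sketch below assumes a well-mixed surface layer and the canonical C:N ratio of 106:16; the input numbers in the usage test are illustrative, not the float data.

```python
def ncp_from_nitrate(no3_start, no3_end, depth_m=35.0):
    """NCP (mol C m^-2) implied by a nitrate drawdown over a mixed layer.

    no3_start, no3_end : surface-layer nitrate concentrations (mmol m^-3)
    depth_m            : integration depth of the layer (m)
    """
    # depth-integrated nitrate consumed, converted from mmol to mol N m^-2
    dn = (no3_start - no3_end) * depth_m / 1000.0
    # Redfield stoichiometry: 106 mol C fixed per 16 mol N consumed
    return dn * 106.0 / 16.0
```

For example, a seasonal drawdown from 10.0 to 3.5 mmol m⁻³ over 35 m corresponds to about 1.5 mol C m⁻², the same order as the annual NCP reported above.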
Infiltration in layered loessial deposits: Revised numerical simulations and recharge assessment
NASA Astrophysics Data System (ADS)
Dafny, Elad; Šimůnek, Jirka
2016-07-01
The objective of this study is to assess recharge rates and their timing under layered loessial deposits at the edge of arid zones. In particular, this study focuses on the case of the coastal plain of Israel and Gaza. First, results of a large-scale field infiltration test were used to calibrate the van Genuchten parameters of hydraulic properties of the loessial sediments using HYDRUS (2D/3D). Second, optimized soil hydraulic parameters were used by HYDRUS-1D to simulate the water balance of the sandy-loess sediments during a 25-year period (1990-2015) for three environmental conditions: bare soil, and soil with both sparse and dense natural vegetation. The best inverse parameter optimization run fitted the infiltration test data with an RMSE of 0.27 d (with respect to moisture front arrival) and R2 of 96%. The calibrated model indicates that hydraulic conductivities of the two soil horizons, namely sandy loam and sandy clay loam, are 81 cm/d and 17.5 cm/d, respectively. These values are significantly lower than those previously reported, based on numerical simulations, for the same site. HYDRUS-1D simulation of natural recharge under bare soil resulted in recharge estimates (to the aquifer) in the range of 21-93 mm/yr, with an average recharge of 63 mm/yr. Annual precipitation in the same period varied between 100 and 300 mm/yr, with an average of 185 mm/yr. For semi-stabilized dunes, with 26% of the soil surface covered by local shrub (Artemisia monosperma), the mean annual recharge was 28 mm. For the stabilized landscape, with as much as 50% vegetation coverage, it was only 2-3 mm/yr. In other words, loessial sediments can either be a source of significant recharge, or of no recharge at all, depending on the degree of vegetative cover. Additionally, the time lag between specific rainy seasons and corresponding recharge events at a depth of 22 m increased from 2.5 to 5 years, and to about 20 years, respectively, with increasing vegetative cover.
For this reason, and also likely due to the great depth of the loessial sediments, no correlation was found between annual recharge and annual precipitation of the same year or subsequent years. Similarly, no differences were found between summer and winter recharge fluxes. Instead, numerical simulations indicated continuous year-round recharge of the aquifer. We conclude that the layered subsurface acts as a short-term (annual) and long-term (multi-annual) buffer that smooths sudden precipitation/infiltration events. Vegetation conditions can help in predicting long-term recharge rates (as a percentage of annual precipitation), which in turn need to be considered when assigning recharge characteristics in regional assessments and models.
Uniformed Services University of the Health Sciences Journal. 2004/5 Edition
2005-10-30
Goal 5, Stewardship: protect and enhance the human and physical resources of the University and optimize productivity. The volume also covers other OSD-recognized, significant areas of support and products provided by USU for the MHS, clinical support for the military, a comprehensive annual faculty listing, and two significant OSD awards recognizing the multiple products of USU.
ERIC Educational Resources Information Center
Savage, James G., Ed.; Wedemeyer, Dan J., Ed.
This two-volume set contains 165 papers presented at a conference that brought together over 1,100 telecommunications leaders and leading commentators from over 40 countries across the Pacific region. The papers indicate that the optimism of the telecommunications industry is possibly greater than during the 1980s, although tempered by a more…
Soil Fungal Resources in Annual Cropping Systems and Their Potential for Management
Esmaeili Taheri, Ahmad; Bainard, Luke D.; Yang, Chao; Navarro-Borrell, Adriana; Hamel, Chantal
2014-01-01
Soil fungi are a critical component of agroecosystems and provide ecological services that impact the production of food and bioproducts. Effective management of fungal resources is essential to optimize the productivity and sustainability of agricultural ecosystems. In this review, we (i) highlight the functional groups of fungi that play key roles in agricultural ecosystems, (ii) examine the influence of agronomic practices on these fungi, and (iii) propose ways to improve the management and contribution of soil fungi to annual cropping systems. Many of these key soil fungal organisms (i.e., arbuscular mycorrhizal fungi and fungal root endophytes) interact directly with plants and are determinants of the efficiency of agroecosystems. In turn, plants largely control rhizosphere fungi through the production of carbon and energy rich compounds and of bioactive phytochemicals, making them a powerful tool for the management of soil fungal diversity in agriculture. The use of crop rotations and selection of optimal plant genotypes can be used to improve soil biodiversity and promote beneficial soil fungi. In addition, other agronomic practices (e.g., no-till, microbial inoculants, and biochemical amendments) can be used to enhance the effect of beneficial fungi and increase the health and productivity of cultivated soils. PMID:25247177
Solar tower power plant using a particle-heated steam generator: Modeling and parametric study
NASA Astrophysics Data System (ADS)
Krüger, Michael; Bartsch, Philipp; Pointner, Harald; Zunft, Stefan
2016-05-01
Within the framework of the HiTExStor II project, a system model of the entire power plant, comprising the volumetric air receiver, air-sand heat exchanger, sand storage system, steam generator and water-steam cycle, was implemented in the software Ebsilon Professional. Two steam-generator technologies were considered: a fluidized-bed cooler and a moving-bed heat exchanger. Physical models for the non-conventional power plant components (air-sand heat exchanger, fluidized-bed cooler and moving-bed heat exchanger) had to be created and implemented in the simulation environment. Using the simulation model of the power plant, the individual components and subassemblies were designed and the operating parameters were optimized in extensive parametric studies with respect to the essential degrees of freedom. The annual net electricity output for different systems was determined in annual performance calculations at a selected location (Huelva, Spain) using the optimized values for the studied parameters. The solution with moderate regenerative feed-water heating was found to be the most advantageous. Furthermore, the system with the moving-bed heat exchanger prevails over the system with the fluidized-bed cooler due to a 6% higher net electricity yield.
Mechanisms of Cancer - Annual Plan
NCI works to understand the mechanisms of cancer cell growth, survival, and metastasis. Get more information on how NCI supports basic scientific research that will lead to new ways to prevent, detect, and treat cancer.
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
Applications of Elpasolites as a Multimode Radiation Sensor
NASA Astrophysics Data System (ADS)
Guckes, Amber
This study consists of both computational and experimental investigations. The computational results enabled detector design selections and confirmed experimental results. The experimental results determined that the CLYC scintillation detector can be applied as a functional and field-deployable multimode radiation sensor. The computational study utilized the MCNP6 code to investigate the response of CLYC to various incident radiations and to determine the feasibility of its application as a handheld multimode sensor and as a single-scintillator collimated directional detection system. These simulations include:
• Characterization of the response of the CLYC scintillator to gamma rays and neutrons;
• Study of the isotopic enrichment of 7Li versus 6Li in the CLYC for optimal detection of both thermal and fast neutrons;
• Analysis of collimator designs to determine the optimal collimator for the single-CLYC-sensor directional detection system to assay gamma rays and neutrons;
• Simulations of a handheld CLYC multimode sensor and a single-CLYC-scintillator collimated directional detection system with the optimized collimator, to determine the feasibility of detecting nuclear materials that could be encountered during field operations. These nuclear materials include depleted uranium, natural uranium, low-enriched uranium, highly-enriched uranium, reactor-grade plutonium, and weapons-grade plutonium.
The experimental study includes the design, construction, and testing of both a handheld CLYC multimode sensor and a single-CLYC-scintillator collimated directional detection system. Both were designed in the Inventor CAD software, based on the results of the computational study, to optimize performance. The handheld CLYC multimode sensor is modular, scalable, low-power, and optimized for high count rates. Commercial-off-the-shelf components were used where possible in order to optimize size, increase robustness, and minimize cost.
The handheld CLYC multimode sensor was successfully tested to confirm its ability for gamma-ray and neutron detection, and gamma-ray and neutron spectroscopy. The sensor utilizes wireless data transfer for possible radiation mapping and network-centric deployment. The handheld multimode sensor was tested by performing laboratory measurements with various gamma-ray and neutron sources. The single-CLYC-scintillator collimated directional detection system is portable, robust, and capable of source localization and identification. The collimator was designed based on the results of the computational study and is constructed from high-density polyethylene (HDPE) and lead (Pb). The collimator design and construction allow for the directional detection of gamma rays and fast neutrons utilizing only one scintillator, which is interchangeable. For this study, a CLYC-7 scintillator was used. The collimated directional detection system was tested by performing laboratory directional measurements with various gamma-ray sources, 252Cf and a 239PuBe source.
NASA Astrophysics Data System (ADS)
Altimir, Nuria; Ibañez, Mercedes; Elbers, Jan; Rota, Cristina; Arias, Claudia; Carrara, Arnaud; Nogues, Salvador; Sebastia, Maria-Teresa
2013-04-01
The net ecosystem exchange (NEE) and the annual C balance of a site are in general modulated by light, temperature and the availability of water and other resources to the plants. In grasslands, NEE is expected to depend strongly on the vegetation, with a relationship that can be summarized by the above-ground biomass, its amount and dynamics. Any factor controlling the amount of green biomass is expected to have a strong impact on the short-term NEE, such as the amount of solar radiation, water availability and grazing pressure. These controls are modulated differently depending on the plant functional type enduring them. Furthermore, as different guilds follow different functional strategies for optimization of resources, they also present different patterns of change in their capacities, such as photosynthetic fixation, belowground C allocation, and C loss via respiration. We examined these relationships at several semi-natural pastures to determine how the seasonal distribution of plant functional types is detected in the short-term ecosystem exchange and what role it plays. We have looked into these patterns to determine the general variation of key processes and whether different temporal patterns arise between different guilds. The study sites are in the Pyrenees, on the mountain pastures of La Bertolina, Alinyà, and Castellar, at 1300, 1700, and 1900 m a.s.l., respectively. We performed ecosystem-scale flux measurements by means of micrometeorological stations combined with a thorough description of the vegetation, including below- and above-ground biomass and leaf area, as well as monitoring of the natural abundance of C isotopes, discriminated by plant functional types. We present here the results of the study.
Optimal design of zero-water discharge rinsing systems.
Thöming, Jorg
2002-03-01
This paper is about zero liquid discharge in processes that use water for rinsing. Emphasis was given to those systems that contaminate process water with valuable process liquor and compounds. The approach involved the synthesis of optimal rinsing and recycling networks (RRN) that had a priori excluded water discharge. The total annualized costs of the RRN were minimized by the use of a mixed-integer nonlinear program (MINLP). This MINLP was based on a hyperstructure of the RRN and contained eight counterflow rinsing stages and three regenerator units: electrodialysis, reverse osmosis, and ion exchange columns. A "large-scale nickel plating process" case study showed that by means of zero-water discharge and optimized rinsing the total waste could be reduced by 90.4% at a revenue of $448,000/yr. Furthermore, with the optimized RRN, the rinsing performance can be improved significantly at a low-cost increase. In all the cases, the amount of valuable compounds reclaimed was above 99%.
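The economics of counterflow rinsing rest on a standard result: with an ideal rinse ratio r per stage, drag-out contamination falls geometrically with the number of stages. A minimal sketch of this rinsing criterion (a textbook formula, not the paper's MINLP hyperstructure) is:

```python
import math

def dragout_concentration(c0, rinse_ratio, stages):
    """Approximate drag-out concentration after n ideal counterflow
    rinsing stages: c_n = c0 / r**n (the classical rinsing criterion)."""
    return c0 / rinse_ratio ** stages

def stages_needed(c0, c_target, rinse_ratio):
    """Smallest integer n such that c0 / r**n <= c_target."""
    return math.ceil(math.log(c0 / c_target) / math.log(rinse_ratio))
```

This geometric decay is why the hyperstructure in the paper only needs eight counterflow stages: each added stage multiplies the cleaning effect, so the marginal benefit of further stages vanishes quickly and the optimization shifts to regenerator selection instead.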
Integrated strategic and tactical biomass-biofuel supply chain optimization.
Lin, Tao; Rodríguez, Luis F; Shastri, Yogendra N; Hansen, Alan C; Ting, K C
2014-03-01
To ensure effective biomass feedstock provision for large-scale biofuel production, an integrated biomass supply chain optimization model was developed to minimize annual biomass-ethanol production costs by optimizing both strategic and tactical planning decisions simultaneously. The mixed integer linear programming model optimizes activities ranging from biomass harvesting, packing, in-field transportation, stacking, transportation, preprocessing, and storage to ethanol production and distribution. The numbers, locations, and capacities of facilities as well as biomass and ethanol distribution patterns are key strategic decisions, while biomass production, delivery, and operating schedules and inventory monitoring are key tactical decisions. The model was implemented to study a Miscanthus-ethanol supply chain in Illinois. The base case results showed unit Miscanthus-ethanol production costs of $0.72 per liter of ethanol. Biorefinery-related costs account for 62% of the total costs, followed by biomass procurement costs. Sensitivity analysis showed that a 50% reduction in biomass yield would increase unit production costs by 11%. Copyright © 2014 Elsevier Ltd. All rights reserved.
Zhang, Chu; Feng, Xuping; Wang, Jian; Liu, Fei; He, Yong; Zhou, Weijun
2017-01-01
Detection of plant diseases in a fast and simple way is crucial for timely disease control. Conventionally, plant diseases are accurately identified by DNA-, RNA- or serology-based methods, which are time consuming, complex and expensive. Mid-infrared spectroscopy is a promising technique that simplifies the detection procedure. Mid-infrared spectroscopy was used to identify the spectral differences between healthy and infected oilseed rape leaves. Two different sample sets from two experiments were used to explore and validate the feasibility of using mid-infrared spectroscopy to detect Sclerotinia stem rot (SSR) on oilseed rape leaves. The average mid-infrared spectra showed differences between healthy and infected leaves, and the differences varied among sample sets. Optimal wavenumbers for the two sample sets selected by the second derivative spectra were similar, indicating the efficacy of selecting optimal wavenumbers. Chemometric methods were further used to quantitatively detect the oilseed rape leaves infected by SSR, including partial least squares-discriminant analysis, support vector machines and extreme learning machines. The discriminant models using the full spectra and the optimal wavenumbers of the two sample sets were effective, with classification accuracies over 80%. The discriminant results for the two sample sets varied due to variations in the samples. The use of two sample sets validated the feasibility of using mid-infrared spectroscopy and chemometric methods for detecting SSR on oilseed rape leaves. The similarities among the optimal wavenumbers selected in different sample sets made it feasible to simplify the models and build practical models. Mid-infrared spectroscopy is a reliable and promising technique for SSR control. This study supports the practical application of mid-infrared spectroscopy combined with chemometrics to detect plant disease.
Spatial analysis of the annual and seasonal aridity trends in Extremadura, southwestern Spain
NASA Astrophysics Data System (ADS)
Moral, Francisco J.; Paniagua, Luis L.; Rebollo, Francisco J.; García-Martín, Abelardo
2017-11-01
The knowledge of drought (or wetness) conditions is necessary not only for a rational use of water resources but also for explaining landscape and ecology characteristics. An increase in aridity in many areas of the world is expected because of climate change (global warming). With the aim of analysing annual and seasonal aridity trends in Extremadura, southwestern Spain, climate data from 81 locations within the 1951-2010 period were used. After computing the De Martonne aridity index at each location, a geographic information system (GIS) and multivariate geostatistics (regression kriging) were utilised to map this index throughout the region. Later, temporal trends were analysed using the Mann-Kendall test, and Sen's estimator was utilised to estimate the magnitude of the trends. Maps of aridity trends were generated by the ordinary kriging algorithm, providing a visualisation of the detected annual and seasonal tendencies. An increase in aridity, as the De Martonne aridity index decreased, was apparent during the study period, mainly in the more humid locations of the north of the region. A decrease of the seasonal De Martonne aridity index was also found, but it was only statistically significant at some locations in spring and summer, with the highest decreasing rate in the north of Extremadura. Change-year detection using cumulative sum graphs showed that the change point occurred first in spring (mid-1970s), later in the annual series (late 1970s), and finally in summer (late 1980s).
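The De Martonne index underlying the analysis is a one-line formula; a minimal sketch is shown below. The classification cut-offs are one common convention found in the literature and are an assumption here, not necessarily the thresholds used in the paper.

```python
def de_martonne_annual(p_mm, t_c):
    """Annual De Martonne aridity index I = P / (T + 10),
    with P in mm/yr and T the mean annual temperature in deg C.
    Lower values indicate more arid conditions."""
    return p_mm / (t_c + 10.0)

def de_martonne_monthly(p_mm, t_c):
    """Monthly variant: i = 12 * p / (t + 10), with p the monthly
    precipitation (mm) and t the monthly mean temperature (deg C)."""
    return 12.0 * p_mm / (t_c + 10.0)

def classify(i):
    """One common set of climate classes for the annual index."""
    if i < 10:
        return "arid"
    if i < 20:
        return "semi-arid"
    if i < 24:
        return "Mediterranean"
    if i < 28:
        return "semi-humid"
    if i < 35:
        return "humid"
    return "very humid"
```

For example, a station with 500 mm/yr of precipitation and a mean annual temperature of 16 °C has I ≈ 19.2, placing it in the semi-arid class; a declining trend in I over time is read as increasing aridity.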
Zorko, Benjamin; Korun, Matjaž; Mora Canadas, Juan Carlos; Nicoulaud-Gouin, Valerie; Chyly, Pavol; Blixt Buhr, Anna Maria; Lager, Charlotte; Aquilonius, Karin; Krajewski, Pawel
2016-07-01
Several methods for reporting the outcomes of gamma-ray spectrometric measurements of environmental samples for dose calculations are presented and discussed. The measurement outcomes can be reported as primary measurement results, primary measurement results modified according to the quantification limit, best estimates obtained by the Bayesian posterior (ISO 11929), best estimates obtained by a probability density distribution resembling shifting, or the procedure recommended by the European Commission (EC). The annual dose is calculated from the arithmetic average using any of these five procedures. It was shown that primary measurement results modified according to the quantification limit can lead to an underestimation of the annual dose, whereas the best estimates lead to an overestimation. The annual doses calculated from measurement outcomes obtained according to the EC's recommended procedure, which does not account for the uncertainties, fluctuate between under- and overestimation, depending on the frequency of measurement results that are larger than the limit of detection. In the extreme case, when no measurement results above the detection limit occur, the average over primary measurement results modified according to the quantification limit underestimates the average over primary measurement results by about 80%. The average over best estimates calculated according to the procedure resembling shifting overestimates the average over primary measurement results by 35%, the average obtained by the Bayesian posterior by 85%, and the treatment according to the EC recommendation by 89%. Copyright © 2016 Elsevier Ltd. All rights reserved.
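The sensitivity of the annual average to how below-detection-limit results are treated can be illustrated with a toy substitution scheme. This is a deliberate simplification: the paper's best estimates use the ISO 11929 Bayesian posterior rather than a fixed substitute value.

```python
def annual_average(results, detection_limit, below_limit_substitute):
    """Average a set of measurement outcomes, replacing any result below the
    detection limit with a chosen substitute (e.g. 0, limit/2, or the limit).
    Illustrates how the reporting convention shifts the annual mean."""
    treated = [r if r >= detection_limit else below_limit_substitute
               for r in results]
    return sum(treated) / len(treated)
```

Substituting zero biases the average low; substituting the limit itself biases it high, mirroring the under/overestimation directions discussed above.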
NASA Astrophysics Data System (ADS)
Rivas, Andrés L.; Pisoni, Juan Pablo
2010-01-01
The location and seasonal variability of surface thermal fronts along the Argentinean Continental Shelf (38-55°S) were studied using 18 years (1985-2002) of sea surface temperature (SST) satellite data. Monthly SST gradients were calculated and a threshold was used to identify frontal pixels. Frontal areas were classified into four zones according to their seasonal evolution, and the main forcings leading to front formation were identified for each group. The shelf-break front was easily detected due to the large number of frontal pixels in the region and its high mean gradient values. This front showed a marked annual cycle and a relatively constant position associated with the bottom slope; it tended to be located where the core of the Malvinas Current is closest to the shelf. Tidal fronts also showed a strong annual cycle, being detected in three well-defined regions during spring and summer. Along the coasts of Tierra del Fuego and Santa Cruz, the combination of strong tidal mixing and low-salinity coastal plumes led to semi-annual cycles of frontal intensity and persistence that showed a relative maximum in winter. A similar (semi-annual) behavior was found at the coast off Buenos Aires Province. There, the coastal dilution and the bathymetric gradient generated near-coastal fronts that changed direction seasonally. In the northern mid-shelf, a front linked to the intrusion of warm waters formed in the San Matías Gulf was identified during the winter.
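The frontal-pixel step, thresholding the SST gradient magnitude, can be sketched in a few lines. This is a generic illustration, assuming a uniform grid and a hypothetical threshold, not the paper's exact processing chain.

```python
import numpy as np

def frontal_pixels(sst, threshold):
    """Flag frontal pixels: locations where the magnitude of the SST gradient
    exceeds a threshold. sst is a 2-D array of monthly mean SST on a uniform
    grid (gradient is per grid cell here; real work would use degrees or km)."""
    grad_y, grad_x = np.gradient(sst)          # finite-difference gradients
    grad_mag = np.hypot(grad_x, grad_y)        # gradient magnitude
    return grad_mag > threshold                # boolean frontal mask
```

Counting True pixels per month in each region then yields the seasonal cycles of frontal occurrence described above.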
Paleoclimatological study using stalagmites from Java Island, Indonesia
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Matsuoka, H.; Ohsawa, S.; Yamada, M.; Kitaoka, K.; Kiguchi, M.; Ueda, J.; Yoshimura, K.; Kurisaki, K.; Nakai, S.; Brahmantyo, B.; Maryunani, K. A.; Tagami, T.; Takemura, K.; Yoden, S.
2006-12-01
In the last decade, decoding geochemical records in stalagmites has been widely recognized as a powerful tool for elucidating the paleoclimate/environment of terrestrial areas. Previous data have mainly been reported from mid-latitude areas. This study, however, aims at reconstructing past climate variations in the Asian equatorial region by using oxygen isotopes and other geochemical proxies recorded in Indonesian stalagmites. In particular, we focus on the detection of the precipitation anomaly that reflects the El Niño-Southern Oscillation (ENSO). We performed geological surveys in the Buniayu limestone caves, Sukabumi, West Java, and in Karangbolong, Central Java, Indonesia, and collected a series of stalagmite/stalactite and drip water samples. Detailed textures of the stalagmite samples were observed in thin sections to identify "annual" bandings. Moreover, we also measured (1) annual luminescent banding, which can be viewed under ultraviolet-light stimulation, and (2) uranium-series disequilibrium ages using MC-ICP-MS for each stalagmite to construct the age model. We also carried out 3H-3He dating and stable isotope measurements of drip water samples to understand the hydrogeology of the study areas. Based on this framework, oxygen isotopes and other geochemical proxies will be analyzed at annual or sub-annual time scales. The proxy data will then be compared with meteorological data sets, such as local precipitation, over the past 50 years. Finally, we will reconstruct the past climate over longer timescales, particularly the precipitation anomaly, in the region to detect ancient ENSO.
Comparison and optimization of radar-based hail detection algorithms in Slovenia
NASA Astrophysics Data System (ADS)
Stržinar, Gregor; Skok, Gregor
2018-05-01
Four commonly used radar-based hail detection algorithms are evaluated and optimized for Slovenia. The algorithms are verified against ground observations of hail at manned stations between May and August, from 2002 to 2010, and are optimized by determining the optimal values of all possible algorithm parameters. A number of contingency-table-based scores are evaluated, with a combination of the Critical Success Index and frequency bias proving to be the best choice for optimization. The best performance is achieved by the Waldvogel algorithm and the severe hail index, followed by vertically integrated liquid and maximum radar reflectivity. Using the optimal parameter values, a hail frequency climatology map for the whole of Slovenia is produced. The analysis shows considerable variability of hail occurrence within the Republic of Slovenia: the hail frequency ranges from almost 0 to 1.7 hail days per year, with an average of about 0.7 hail days per year.
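The verification scores driving the optimization are standard contingency-table quantities. A minimal sketch, with hits = hail observed and detected, misses = observed but not detected, false alarms = detected but not observed:

```python
def csi(hits, misses, false_alarms):
    """Critical Success Index (threat score): hits / (hits + misses + false alarms).
    1 is perfect; penalizes both missed events and false alarms."""
    return hits / (hits + misses + false_alarms)

def frequency_bias(hits, misses, false_alarms):
    """Frequency bias: (detections) / (observed events).
    1 means the algorithm flags hail as often as it actually occurs."""
    return (hits + false_alarms) / (hits + misses)
```

Parameter optimization then searches for algorithm settings that maximize CSI while keeping the frequency bias near 1, as the abstract describes.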
Label-Free Toxin Detection by Means of Time-Resolved Electrochemical Impedance Spectroscopy
Chai, Changhoon; Takhistov, Paul
2010-01-01
The real-time detection of trace concentrations of biological toxins requires significant improvement over the detection methods reported in the literature. To develop a highly sensitive and selective detection device, it is necessary to determine the optimal measuring conditions for the electrochemical sensor in three domains: time, frequency and polarization potential. In this work we utilized time-resolved electrochemical impedance spectroscopy for the detection of trace concentrations of Staphylococcus enterotoxin B (SEB). An anti-SEB antibody was attached to the nano-porous aluminum surface using a 3-aminopropyltriethoxysilane/glutaraldehyde coupling system. This immobilization method allows fabrication of a highly reproducible and stable sensing device. Using the developed immobilization procedure and optimized detection regime, it is possible to determine the presence of SEB at levels as low as 10 pg/mL in 15 minutes. PMID:22315560
A Hybrid Approach for CpG Island Detection in the Human Genome.
Yang, Cheng-Hong; Lin, Yu-Da; Chiang, Yi-Cheng; Chuang, Li-Yeh
2016-01-01
CpG islands have been demonstrated to influence local chromatin structures and facilitate the regulation of gene activity. However, the accurate and rapid determination of CpG islands across whole DNA sequences remains experimentally and computationally challenging. A novel procedure is proposed to detect CpG islands by combining clustering technology with a PSO-based sliding-window method. Clustering technology is used to detect the locations of all possible CpG islands and pre-process the data, effectively obviating the need for extensive and unnecessary processing of DNA fragments and thereby improving the efficiency of the sliding-window-based particle swarm optimization (PSO) search. The proposed approach, named ClusterPSO, provides versatile and highly sensitive detection of CpG islands in the human genome. Its detection efficiency is compared with that of eight CpG island detection methods on the human genome. In comparisons of sensitivity, specificity, accuracy, performance coefficient (PC) and correlation coefficient (CC), ClusterPSO showed the best detection ability among all tested methods. Moreover, the combination of clustering technology and the PSO method successfully overcomes their respective drawbacks while maintaining their advantages; clustering technology can thus be hybridized with an optimization algorithm to optimize CpG island detection. The prediction accuracy of ClusterPSO was quite high, indicating that the combination of CpGcluster and PSO has several advantages over CpGcluster or PSO alone. In addition, ClusterPSO significantly reduced implementation time.
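For context, the per-window quantities that sliding-window CpG detectors typically score are the GC fraction and the observed/expected CpG ratio (the classic Gardiner-Garden and Frommer criteria: GC > 50%, obs/exp CpG > 0.6, length ≥ 200 bp). The sketch below computes these two statistics for a window; it illustrates the scoring only, not the ClusterPSO search itself.

```python
def cpg_stats(seq):
    """GC fraction and observed/expected CpG ratio for a DNA window.
    obs CpG = count of 'CG' dinucleotides; exp CpG = (#C * #G) / length."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    obs_cpg = seq.count("CG")
    exp_cpg = c * g / n if n else 0.0
    gc_frac = (c + g) / n if n else 0.0
    obs_exp = obs_cpg / exp_cpg if exp_cpg else 0.0
    return gc_frac, obs_exp
```

A window passing both thresholds (and the minimum length) is a CpG-island candidate; PSO-style methods optimize where and how such windows are placed and merged.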
DOE Office of Scientific and Technical Information (OSTI.GOV)
Economopoulou, M.A.; Economopoulou, A.A.; Economopoulos, A.P., E-mail: eco@otenet.gr
2013-11-15
Highlights: • A two-step (strategic and detailed optimal planning) methodology is used for solving complex MSW management problems. • A software package is outlined, which can be used for generating detailed optimal plans. • Sensitivity analysis compares alternative scenarios that address objections and/or wishes of local communities. • A case study shows the application of the above procedure in practice and demonstrates the results and benefits obtained. - Abstract: The paper describes a software system capable of formulating alternative optimal Municipal Solid Wastes (MSWs) management plans, each of which meets a set of constraints that may reflect selected objections and/or wishes of local communities. The objective function to be minimized in each plan is the sum of the annualized capital investment and annual operating cost of all transportation, treatment and final disposal operations involved, taking into consideration the possible income from the sale of products and any other financial incentives or disincentives that may exist. For each plan formulated, the system generates several reports that define the plan, analyze its cost elements and yield an indicative profile of selected types of installations, as well as data files that facilitate the geographic representation of the optimal solution in maps through the use of GIS. A number of these reports compare the technical and economic data from all scenarios considered at the study area, municipality and installation level, constituting in effect a sensitivity analysis. The generation of alternative plans offers local authorities the opportunity of choice, and the results of the sensitivity analysis allow them to choose wisely and with consensus. The paper also presents an application of this software system in the capital Region of Attica in Greece, for the purpose of developing an optimal waste transportation system in line with its approved waste management plan.
The formulated plan was able to: (a) serve 113 Municipalities and Communities that generate nearly 2 million t/y of commingled MSW with distinctly different waste collection patterns, (b) take into consideration several existing waste transfer stations (WTS) and optimize their use within the overall plan, (c) select the most appropriate sites among the potentially suitable (new and in-use) ones, (d) generate the optimal profile of each WTS proposed, and (e) perform sensitivity analysis so as to define the impact of selected sets of constraints (limitations in the availability of sites and in the capacity of their installations) on the design and cost of the ensuing optimal waste transfer system. The results show that optimal planning offers significant economic savings to municipalities, while at the same time reducing the present levels of traffic, fuel consumption and air emissions in the congested Athens basin.
Ye, Bixiong; E, Xueli; Zhang, Lan
2015-01-01
The aim was to optimize the non-regular drinking water quality indices (except Giardia and Cryptosporidium) for urban drinking water. Several methods were applied, including the rate of exceeding the standard, the risk of exceeding the standard, the frequency of concentrations below the detection limit, a comprehensive water quality index evaluation, and the attribute reduction algorithm of rough set theory. Redundant water quality indicators were eliminated, and the control factors that play a leading role in drinking water safety were identified. The optimization showed that, of 62 unconventional water quality monitoring indicators for urban drinking water, 42 could be removed through comprehensive evaluation combined with rough set attribute reduction. Optimizing the water quality monitoring indicators and reducing the number of monitored indicators and the monitoring frequency can ensure drinking water safety while lowering monitoring costs and reducing the monitoring burden on sanitation supervision departments.
Optimal Sensor Allocation for Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann
2004-01-01
Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration) to measure the system parameters. The efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes fault diagnosability, subject to specified weight, volume, power, and cost constraints, is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
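The underlying combinatorial problem is a (multidimensional) knapsack: choose sensors maximizing diagnostic value under resource budgets. The paper uses Lagrangian relaxation; the sketch below is only a greedy value-per-cost illustration of the single-constraint case, with hypothetical sensor tuples.

```python
def greedy_sensor_selection(sensors, budget):
    """Greedy knapsack sketch for sensor allocation.
    sensors: list of (name, diagnostic_value, cost) tuples (hypothetical fields).
    Picks sensors in decreasing value-per-cost order until the budget is spent.
    Not optimal in general -- the paper's Lagrangian relaxation handles the
    full multidimensional problem (weight, volume, power, cost)."""
    chosen, spent = [], 0.0
    for name, value, cost in sorted(sensors, key=lambda s: s[1] / s[2],
                                    reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen
```

Extending to several budgets (weight, volume, power) turns this into the multidimensional knapsack named in the keywords, which is NP-hard and motivates the relaxation techniques.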
Joint Services Electronics Program Annual Progress Report.
1987-10-15
polarizability of free carriers in the semiconductor perturbs the index of refraction, which can be detected in a Nomarski-type optical interferometer. For...interferometers. However, the charge probe relies on a different physical effect and operates by interferometrically detecting the phase change induced in an... Nomarski microscope systems. These techniques will be applied, eventually, in our real-time scanning optical microscope described below. Recently
Exploring the Thermal Limits of IR-Based Automatic Whale Detection (ETAW)
2013-09-30
the northward humpback whale migration, which occurs annually rather close to shore near North Stradbroke Island, Queensland, Australia. Based on the...successive northward humpback whale migrations and collecting concurrent independent (double-blind) visual observations (modified cue counting), a...
Olaf Boebel, Bremerhaven, Germany
Detection of Outliers in TWSTFT Data Used in TAI
2009-11-01
41st Annual Precise Time and Time Interval (PTTI) Meeting. ...data in two-way satellite time and frequency transfer (TWSTFT) time links. In the case of TWSTFT data used to calculate International Atomic Time...data; TWSTFT links can show an underlying slope which renders the standard treatment more difficult. Using phase and frequency filtering
Horowitz, A.J.; Elrick, K.A.; Smith, J.J.
2001-01-01
Suspended sediment, sediment-associated, total trace element, phosphorus (P), and total organic carbon (TOC) fluxes were determined for the Mississippi, Columbia, Rio Grande, and Colorado Basins for the study period (the 1996, 1997, and 1998 water years) as part of the US Geological Survey's redesigned National Stream Quality Accounting Network (NASQAN) programme. The majority (??? 70%) of Cu, Zn, Cr, Ni, Ba, P, As, Fe, Mn, and Al are transported in association with suspended sediment; Sr transport seems dominated by the dissolved phase, whereas the transport of Li and TOC seems to be divided equally between both phases. Average dissolved trace element levels are markedly lower than reported during the original NASQAN programme; this seems due to the use of 'clean' sampling, processing, and analytical techniques rather than to improvements in water quality. Partitioning between sediment and water for Ag, Pb, Cd, Cr, Co, V, Be, As, Sb, Hg, and Ti could not be estimated due to a lack of detectable dissolved concentrations in most samples. Elevated suspended sediment-associated Zn levels were detected in the Ohio River Basin and elevated Hg levels were detected in the Tennessee River, the former may affect the mainstem Mississippi River, whereas the latter probably do not. Sediment-associated concentrations of Ag, Cu, Pb, Zn, Cd, Cr, Co, Ba, Mo, Sb, Hg, and Fe are markedly elevated in the upper Columbia Basin, and appear to be detectable (Zn, Cd) as far downstream as the middle of the basin. These elevated concentrations seem to result from mining and/or mining-related activities. Consistently detectable concentrations of dissolved Se were found only in the Colorado River Basin. Calculated average annual suspended sediment fluxes at the mouths of the Mississippi and Rio Grande Basins were below, whereas those for the Columbia and Colorado Basins were above previously published annual values. 
Downstream suspended sediment-associated and total trace element fluxes increase in the Mississippi and Columbia Basins, whereas fluxes markedly decrease in the Colorado Basin. No consistent pattern in trace element fluxes was detected in the Rio Grande Basin.
Turner, Hugo C.; Osei-Atweneboana, Mike Y.; Walker, Martin; Tettevi, Edward J.; Churcher, Thomas S.; Asiedu, Odame; Biritwum, Nana-Kwadwo; Basáñez, María-Gloria
2013-01-01
Background It has been proposed that switching from annual to biannual (twice yearly) mass community-directed treatment with ivermectin (CDTI) might improve the chances of onchocerciasis elimination in some African foci. However, historically, relatively few communities have received biannual treatments in Africa, and there are no cost data associated with increasing ivermectin treatment frequency at a large scale. Collecting cost data is essential for conducting economic evaluations of control programmes. Some countries, such as Ghana, have adopted a biannual treatment strategy in selected districts. We undertook a study to estimate the costs associated with annual and biannual CDTI in Ghana. Methodology The study was conducted in the Brong-Ahafo and Northern regions of Ghana. Data collection was organized at the national, regional, district, sub-district and community levels, and involved interviewing key personnel and scrutinizing national records. Data were collected in four districts; one in which treatment is delivered annually, two in which it is delivered biannually, and one where treatment takes place biannually in some communities and annually in others. Both financial and economic costs were collected from the health care provider's perspective. Principal Findings The estimated cost of treating annually was US Dollars (USD) 0.45 per person including the value of time donated by the community drug distributors (which was estimated at USD 0.05 per person per treatment round). The cost of CDTI was approximately 50–60% higher in those districts where treatment was biannual than in those where it was annual. Large-scale mass biannual treatment was reported as being well received and considered sustainable. 
Conclusions/Significance This study provides rigorous evidence of the different costs associated with annual and biannual CDTI in Ghana which can be used to inform an economic evaluation of the debate on the optimal treatment frequency required to control (or eliminate) onchocerciasis in Africa. PMID:24069497
NASA Astrophysics Data System (ADS)
Pleijel, Håkan; Grundström, Maria; Karlsson, Gunilla Pihl; Karlsson, Per Erik; Chen, Deliang
2016-02-01
Annual anomalies in air pollutant concentrations, and in deposition (bulk and throughfall) of sulphate, nitrate and ammonium, in the Gothenburg region, south-west Sweden, were correlated with optimized linear combinations of the yearly frequency of Lamb Weather Types (LWTs) to determine the extent to which the year-to-year variation in pollution exposure can be explained by weather-related variability. Air concentrations of urban NO2, CO, PM10, as well as O3 at both an urban and a rural monitoring site, and the deposition of sulphate, nitrate and ammonium for the period 1997-2010 were included in the analysis. Linear detrending of the time series was performed to estimate trend-independent anomalies. These estimated anomalies were subtracted from observed annual values, and the statistical significance of temporal trends with and without LWT adjustment was then tested. For the pollutants studied, the annual anomaly was well correlated with the annual LWT combination (R2 in the range 0.52-0.90). Some negative (annual average [NO2], ammonia bulk deposition) or positive (average urban [O3]) temporal trends became statistically significant (p < 0.05) when the LWT adjustment was applied. In all cases but one (NH4 throughfall, for which no temporal trend existed) the significance of temporal trends became stronger with LWT adjustment. For nitrate and ammonium, the LWT-based adjustment explained a larger fraction of the inter-annual variation for bulk deposition than for throughfall. This is probably because the longer-time-scale, canopy-related dry deposition processes that influence throughfall are explained to a lesser extent by LWTs than are the meteorological factors controlling bulk deposition. The proposed novel methodology can be used by authorities responsible for air pollution management, and by researchers studying temporal trends in pollution, to evaluate, for example, the relative importance of changes in emissions and of weather variability in annual air pollution exposure.
Quantifying yield gaps in wheat production in Russia
NASA Astrophysics Data System (ADS)
Schierhorn, Florian; Faramarzi, Monireh; Prishchepov, Alexander V.; Koch, Friedrich J.; Müller, Daniel
2014-08-01
Crop yields must increase substantially to meet the increasing demands for agricultural products. Crop yield increases are particularly important for Russia because low crop yields prevail across Russia's widespread and fertile land resources. However, reliable data are lacking regarding the spatial distribution of potential yields in Russia, which can be used to determine yield gaps. We used a crop growth model to determine the yield potentials and yield gaps of winter and spring wheat at the provincial level across European Russia. We modeled the annual yield potentials from 1995 to 2006 with optimal nitrogen supplies for both rainfed and irrigated conditions. Overall, the results suggest yield gaps of 1.51-2.10 t ha⁻¹, or 44-52% of the yield potential under rainfed conditions. Under irrigated conditions, yield gaps of 3.14-3.30 t ha⁻¹, or 62-63% of the yield potential, were observed. However, recurring droughts cause large fluctuations in yield potentials under rainfed conditions, even when the nitrogen supply is optimal, particularly in the highly fertile black soil areas of southern European Russia. The highest yield gaps (up to 4 t ha⁻¹) under irrigated conditions were detected in the steppe areas in southeastern European Russia along the border of Kazakhstan. Improving the nutrient and water supply and using crop breeds that are adapted to the frequent drought conditions are important for reducing yield gaps in European Russia. Our regional assessment helps inform policy and agricultural investors and prioritize research that aims to increase crop production in this important region for global agricultural markets.
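The headline figures follow from simple arithmetic on modeled potential versus actual yields; a hypothetical helper makes the definition explicit:

```python
def yield_gap(potential_t_ha, actual_t_ha):
    """Yield gap as an absolute value (t/ha) and as a fraction of the
    yield potential: gap = potential - actual."""
    gap = potential_t_ha - actual_t_ha
    return gap, gap / potential_t_ha
```

For example, a rainfed potential of 4.0 t/ha with an actual yield of 2.0 t/ha gives a gap of 2.0 t/ha, i.e. 50% of the potential, consistent in scale with the 44-52% rainfed range reported above.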
NASA Astrophysics Data System (ADS)
Hoell, Simon; Omenzetter, Piotr
2017-07-01
Jointly considering damage-sensitive features (DSFs) of signals recorded by multiple sensors, applying advanced transformations to these DSFs, and systematically assessing their contribution to damage detectability and localisation can significantly enhance the performance of structural health monitoring systems. This philosophy is explored here for partial autocorrelation coefficients (PACCs) of acceleration responses. They are interrogated with the help of linear discriminant analysis based on the Fukunaga-Koontz transformation, using datasets of the healthy and selected reference damage states. Then, a simple but efficient fast forward selection procedure is applied to rank the DSF components with respect to statistical distance measures specialised for either damage detection or localisation. For the damage detection task, the optimal feature subsets are identified based on statistical hypothesis testing. For damage localisation, a hierarchical neuro-fuzzy tool is developed that uses the DSF ranking to establish its own optimal architecture. The proposed approaches are evaluated experimentally on data from non-destructively simulated damage in a laboratory-scale wind turbine blade. The results support our claim of being able to enhance damage detectability and localisation performance by transforming and optimally selecting DSFs. It is demonstrated that the optimally selected PACCs from multiple sensors, or their Fukunaga-Koontz transformed versions, can not only improve the detectability of damage via statistical hypothesis testing but also increase the accuracy of damage localisation when used as inputs into a hierarchical neuro-fuzzy network. Furthermore, the computational effort of employing these advanced soft computing models for damage localisation can be significantly reduced by using transformed DSFs.
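The base features here, partial autocorrelation coefficients, are a standard time-series quantity computable via the Durbin-Levinson recursion. The sketch below is a generic PACC extractor for one sensor channel (illustrative of the feature only; the paper's transformation and selection pipeline is separate).

```python
import numpy as np

def pacf(x, nlags):
    """Partial autocorrelation coefficients of a series for lags 1..nlags,
    via the Durbin-Levinson recursion on biased sample autocorrelations."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(nlags + 1)])
    r /= r[0]                                   # normalised autocorrelations
    phi = np.zeros((nlags + 1, nlags + 1))      # AR coefficients by order
    pac = np.zeros(nlags)
    for k in range(1, nlags + 1):
        num = r[k] - np.dot(phi[k - 1, 1:k], r[1:k][::-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], r[1:k])
        phi[k, k] = num / den                   # lag-k partial autocorrelation
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        pac[k - 1] = phi[k, k]
    return pac
```

Stacking such coefficients from several accelerometers gives the multi-sensor DSF vector that is then transformed and ranked.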
Raman-spectroscopy-based chemical contaminant detection in milk powder
NASA Astrophysics Data System (ADS)
Dhakal, Sagar; Chao, Kuanglin; Qin, Jianwei; Kim, Moon S.
2015-05-01
Addition of edible and inedible chemical contaminants to food powders for economic benefit has become a recurring trend. In recent years, severe health issues have been reported due to consumption of food powders contaminated with chemical substances. This study examines the effect of the spatial resolution used during spectral collection in order to select the optimal spatial resolution for detecting melamine in milk powder. A sample depth of 2 mm, laser intensity of 200 mW, and exposure time of 0.1 s were previously determined as optimal experimental parameters for Raman imaging. A spatial resolution of 0.25 mm was determined as optimal for acquiring the spectral signal of melamine particles from a milk-melamine mixture sample. Using the optimal resolution of 0.25 mm, sample depth of 2 mm and laser intensity of 200 mW obtained from the previous study, spectral signals from five different concentrations of milk-melamine mixture (1%, 0.5%, 0.1%, 0.05%, and 0.025%) were acquired to study the relationship between the number of detected melamine pixels and the corresponding sample concentration. The results show that melamine concentration has a linear relation with the number of detected melamine pixels, with a correlation coefficient of 0.99. It can be concluded that the quantitative analysis of powder mixtures depends on many factors, including the physical characteristics of the mixture, the experimental parameters, and the sample depth. The results obtained in this study are promising, and we plan to apply them to develop a quantitative detection model for rapid screening of melamine in milk powder. This methodology can also be used for the detection of other chemical contaminants in milk powders.
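The reported calibration, a linear relation between melamine concentration and detected pixel count with r = 0.99, corresponds to an ordinary least-squares fit. A sketch with hypothetical data (the actual pixel counts are not given in the abstract):

```python
import numpy as np

def pixel_concentration_fit(concentrations, pixel_counts):
    """Fit the linear calibration between melamine concentration (%) and the
    number of detected melamine pixels; returns (slope, intercept, r)."""
    c = np.asarray(concentrations, float)
    p = np.asarray(pixel_counts, float)
    slope, intercept = np.polyfit(c, p, 1)     # least-squares line
    r = np.corrcoef(c, p)[0, 1]                # Pearson correlation coefficient
    return slope, intercept, r
```

Inverting the fitted line (concentration = (pixels - intercept) / slope) is the basis of the quantitative screening model the authors plan to develop.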
Proper Image Subtraction—Optimal Transient Detection, Photometry, and Hypothesis Testing
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.; Gal-Yam, Avishay
2016-10-01
Transient detection and flux measurement via image subtraction stand at the base of time domain astronomy. Due to the varying seeing conditions, the image subtraction process is non-trivial, and existing solutions suffer from a variety of problems. Starting from basic statistical principles, we develop the optimal statistic for transient detection, flux measurement, and any image-difference hypothesis testing. We derive a closed-form statistic that: (1) is mathematically proven to be the optimal transient detection statistic in the limit of background-dominated noise, (2) is numerically stable, (3) for accurately registered, adequately sampled images, does not leave subtraction or deconvolution artifacts, (4) allows automatic transient detection to the theoretical sensitivity limit by providing credible detection significance, (5) has uncorrelated white noise, (6) is a sufficient statistic for any further statistical test on the difference image, and, in particular, allows us to distinguish particle hits and other image artifacts from real transients, (7) is symmetric to the exchange of the new and reference images, (8) is at least an order of magnitude faster to compute than some popular methods, and (9) is straightforward to implement. Furthermore, we present extensions of this method that make it resilient to registration errors, color-refraction errors, and any noise source that can be modeled. In addition, we show that the optimal way to prepare a reference image is the proper image coaddition presented in Zackay & Ofek. We demonstrate this method on simulated data and real observations from the PTF data release 2. We provide an implementation of this algorithm in MATLAB and Python.
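The closed-form difference statistic can be sketched compactly in Fourier space. The version below follows the background-dominated-limit expression of Zackay, Ofek & Gal-Yam under simplifying assumptions stated in the comments (unit flux zero-points, periodic boundaries, perfectly registered images); a production implementation must also handle flux matching, registration errors, and regularize the denominator where both PSFs vanish.

```python
import numpy as np

def proper_subtraction(new, ref, psf_new, psf_ref, sigma_new, sigma_ref):
    """Proper image difference D (background-dominated limit), assuming unit
    flux zero-points and periodic boundary conditions. PSFs are centred
    images of the same shape as the data; sigma_* are background noise RMS."""
    N = np.fft.fft2(new)
    R = np.fft.fft2(ref)
    Pn = np.fft.fft2(np.fft.ifftshift(psf_new))   # move PSF centre to origin
    Pr = np.fft.fft2(np.fft.ifftshift(psf_ref))
    denom = np.sqrt(sigma_new ** 2 * np.abs(Pr) ** 2
                    + sigma_ref ** 2 * np.abs(Pn) ** 2)
    D_hat = (Pr * N - Pn * R) / denom             # whitened difference
    return np.real(np.fft.ifft2(D_hat))
```

By construction D has (approximately) white, uncorrelated noise, which is what allows simple per-pixel thresholding for transient detection.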
Eslinger, Paul W; Cameron, Ian M; Dumais, Johannes Robert; Imardjoko, Yudi; Marsoem, Pujadi; McIntyre, Justin I; Miley, Harry S; Stoehlker, Ulrich; Widodo, Susilo; Woods, Vincent T
2015-10-01
BATAN Teknologi (BaTek) operates an isotope production facility in Serpong, Indonesia that supplies (99m)Tc for use in medical procedures. Atmospheric releases of (133)Xe in the production process at BaTek are known to influence the measurements taken at the closest stations of the radionuclide network of the International Monitoring System (IMS). The purpose of the IMS is to detect evidence of nuclear explosions, including atmospheric releases of radionuclides. The major xenon isotopes released from BaTek are also produced in a nuclear explosion, but the isotopic ratios are different. Knowledge of the magnitude of releases from the isotope production facility helps inform analysts trying to decide if a specific measurement result could have originated from a nuclear explosion. A stack monitor deployed at BaTek in 2013 measured releases to the atmosphere for several isotopes. The facility operates on a weekly cycle, and the stack data for June 15-21, 2013 show a release of 1.84 × 10(13) Bq of (133)Xe. Concentrations of (133)Xe in the air are available at the same time from a xenon sampler located 14 km from BaTek. An optimization process using atmospheric transport modeling and the sampler air concentrations produced a release estimate of 1.88 × 10(13) Bq. The same optimization process yielded a release estimate of 1.70 × 10(13) Bq for a different week in 2012. The stack release value and the two optimized estimates are all within 10% of each other. Unpublished production data and the release estimate from June 2013 yield a rough annual release estimate of 8 × 10(14) Bq of (133)Xe in 2014. These multiple lines of evidence cross-validate the stack release estimates and the release estimates based on atmospheric samplers. Copyright © 2015 Elsevier Ltd. All rights reserved.
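In its simplest least-squares form, the optimization step, scaling an atmospheric-transport prediction for a unit release to match the sampler concentrations, reduces to a one-parameter fit. This sketch is illustrative only; the study's optimization over transport-model runs and weekly release cycles is more involved.

```python
def fit_release(modeled_unit, observed):
    """Least-squares release estimate: given concentrations modeled_unit[i]
    predicted for a unit (1 Bq) release and observed concentrations
    observed[i], the release Q minimizing sum((Q*m_i - c_i)^2) is
    Q = sum(m_i * c_i) / sum(m_i^2)."""
    num = sum(m * c for m, c in zip(modeled_unit, observed))
    den = sum(m * m for m in modeled_unit)
    return num / den
```

Agreement of such fitted estimates with the stack monitor to within 10%, as reported above, is what cross-validates the two measurement approaches.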
Terrestrial Laser Scanning for Coastal Geomorphologic Research in Western Greece
NASA Astrophysics Data System (ADS)
Hoffmeister, D.; Tilly, N.; Curdt, C.; Aasen, H.; Ntageretzis, K.; Hadler, H.; Willershäuser, T.; Vött, A.; Bareth, G.
2012-07-01
We used terrestrial laser scanning (TLS) for (i) accurate volume estimation of boulders dislocated by high-energy wave impacts and for (ii) monitoring of annual coastal changes. In this contribution, we present three selected sites in Western Greece that were surveyed over a time span of four years (2008-2011). The Riegl LMS-Z420i laser scanner was used in combination with a precise DGPS system (Topcon HiPer Pro). Each scan position and a further target were recorded for georeferencing and merging of the point clouds. For the annual detection of changes, reference points for the base station of the DGPS system were marked. Our studies show that TLS is capable of accurately estimating the volumes of boulders that were dislocated and deposited inland from the littoral zone. The mass of each boulder was calculated from this 3D-reconstructed volume and corresponding density data. The masses turned out to be considerably smaller than commonly estimated masses based on tape measurements and corresponding density approximations. The accurate mass data were incorporated into wave transport equations, which estimate the wave velocities of high-energy impacts. As expected, these yield smaller wave velocities, owing to the smaller incorporated masses. Furthermore, TLS is capable of monitoring annual changes in coastal areas. The changes are detected by comparing high-resolution digital elevation models from each year. On a beach site, larger areas of seaweed and sandy sediments were eroded, whereas coarser gravel of 30-50 cm diameter accumulated. At the other area, with bigger boulders and a different coastal configuration, only slight differences were detectable. In low-lying coastal areas and along recent beaches, post-processing of point clouds turned out to be more difficult, owing to noise from water and shadowing effects. However, our studies show that the application of TLS in different littoral settings is an appropriate and promising tool.
The combination of both instruments worked well, and the annual positioning procedure with dedicated survey points is sufficiently precise for this purpose.
Peteiro, Jesus; Bouzas-Mosquera, Alberto; Broullon, Javier; Sanchez-Fernandez, Gabriel; Perez-Cebey, Lucia; Yañez, Juan; Martinez, Dolores; Vazquez-Rodriguez, Jose M
2016-08-01
Recommendations for testing in patients with a low pretest probability of coronary artery disease differ among guidelines, from no testing at all to different tests. The aim of this study was to assess the value of exercise echocardiography (ExE) to define outcome in this population. A retrospective analysis was conducted of 1,436 patients with a low pretest probability of coronary artery disease (<15%) who underwent initial ExE. Overall mortality, major adverse cardiac events (MACEs), defined as cardiac death or nonfatal myocardial infarction, and revascularization during follow-up were assessed. Ischemia (development of new wall motion abnormalities with exercise) and fixed wall motion abnormalities were measured. The mean age was 50 ± 12 years. Resting wall motion abnormalities were seen in 13 patients (0.9%) and ischemia in 108 (7.5%). During follow-up, 38 patients died, 10 of cardiac causes (annualized death rate, 0.39%); 20 patients had MACEs (annualized MACE rate, 0.21%); and 48 patients (29 with ischemia) underwent revascularization (annualized revascularization rate, 0.51%). The number and percentage of MACEs in the abnormal and normal ExE groups were similar (two [1.7%] vs 18 [1.4%], P = .70), as was the annualized MACE rate (0.31% vs 0.21%, P = .50). Peak left ventricular ejection fraction exhibited a nonsignificant trend for predicting MACEs (P = .11). The number of studies needed to detect an abnormal finding was 12.6, and the number needed to detect a patient with extensive ischemia was 26.1. ExE offers limited prognostic information in patients with a low pretest probability of coronary artery disease. The small number of abnormal findings on ExE, the low event rates, and the large number of studies needed to detect an abnormal finding further limit the value of imaging in this population. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
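The annualized rates quoted above are events per 100 patient-years; a sketch of the arithmetic, with the mean follow-up treated as an assumed input rather than a figure taken from the study:

```python
def annualized_rate(events, patients, mean_followup_years):
    """Annualized event rate (%) = events per 100 patient-years of follow-up."""
    patient_years = patients * mean_followup_years
    return 100.0 * events / patient_years
```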
NASA Astrophysics Data System (ADS)
Williamson, Grant J.; Prior, Lynda D.; Jolly, W. Matt; Cochrane, Mark A.; Murphy, Brett P.; Bowman, David M. J. S.
2016-03-01
Climate dynamics at diurnal, seasonal and inter-annual scales shape global fire activity, although difficulties of assembling reliable fire and meteorological data with sufficient spatio-temporal resolution have frustrated quantification of this variability. Using Australia as a case study, we combine data from 4760 meteorological stations with 12 years of satellite-derived active fire detections to determine day and night time fire activity, fire season start and end dates, and inter-annual variability, across 61 objectively defined climate regions in three climate zones (monsoon tropics, arid and temperate). We show that geographic patterns of landscape burning (onset and duration) are related to fire weather, resulting in a latitudinal gradient from the monsoon tropics in winter, through the arid zone in all seasons except winter, and then to the temperate zone in summer and autumn. Peak fire activity precedes maximum lightning activity by several months in all regions, signalling the importance of human ignitions in shaping fire seasons. We determined median daily McArthur forest fire danger index (FFDI50) for days and nights when fires were detected: FFDI50 varied substantially between climate zones, reflecting effects of fire management in the temperate zone, fuel limitation in the arid zone and abundance of flammable grasses in the monsoon tropical zone. We found correlations between the proportion of days when FFDI exceeds FFDI50 and the Southern Oscillation index across the arid zone during spring and summer, and Indian Ocean dipole mode index across south-eastern Australia during summer. Our study demonstrates that Australia has a long fire weather season with high inter-annual variability relative to all other continents, making it difficult to detect long term trends. 
It also provides a way of establishing robust baselines to track changes to fire seasons, and supports a previous conceptual model highlighting multi-temporal scale effects of climate in shaping continental-scale pyrogeography.
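FFDI50 is the median of daily FFDI values on days or nights with detected fires. A sketch using the commonly cited Noble et al. (1980) approximation of the McArthur Mark 5 index; treat the coefficients as an approximation, not the paper's exact implementation:

```python
import math

def ffdi(drought_factor, rel_humidity, temp_c, wind_kmh):
    """McArthur Mark 5 forest fire danger index, Noble et al. (1980) form."""
    return 2.0 * math.exp(-0.450 + 0.987 * math.log(drought_factor)
                          - 0.0345 * rel_humidity + 0.0338 * temp_c
                          + 0.0234 * wind_kmh)

def ffdi50(daily_ffdi):
    """Median FFDI over the fire days/nights (the paper's FFDI50)."""
    s = sorted(daily_ffdi)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
```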
Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons.
Bernardi, Davide; Lindner, Benjamin
2017-06-30
Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.
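One simple readout of the kind proposed is a spike-count threshold calibrated to a fixed false-alarm rate. The Poisson sketch below is a hypothetical stand-in for intuition only; the paper's model is a recurrent integrate-and-fire network, not this:

```python
import math

def poisson_tail(lam, k):
    """P(X >= k) for X ~ Poisson(lam)."""
    p_below = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - p_below

def detection_rate(base_rate, extra_rate, n_readout, window_s, false_alarm=0.05):
    """Hit rate of a count-threshold readout at a fixed false-alarm level.

    base_rate  : spontaneous firing rate per readout neuron (Hz)
    extra_rate : rate increase per neuron caused by the perturbation (Hz)
    """
    lam0 = base_rate * n_readout * window_s
    k = 0
    while poisson_tail(lam0, k) > false_alarm:  # smallest threshold meeting FA target
        k += 1
    lam1 = (base_rate + extra_rate) * n_readout * window_s
    return poisson_tail(lam1, k)
```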
Simple summation rule for optimal fixation selection in visual search.
Najemnik, Jiri; Geisler, Wilson S
2009-06-01
When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3), 1-14]; however, it seems unlikely that a biological nervous system would implement the computations for Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher and produces fixation statistics similar to those of humans.
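The ELM rule itself is compact enough to state in code: convolve the posterior with the squared detectability map and fixate the maximum. A NumPy sketch using circular (FFT) convolution; the maps passed in are hypothetical stand-ins, not the paper's measured detectability data:

```python
import numpy as np

def next_fixation(posterior, detectability):
    """ELM rule: fixate the maximum of the posterior filtered by the
    squared retinotopic detectability map (Najemnik & Geisler, 2009).

    posterior     : 2-D map of target-location probabilities
    detectability : 2-D detectability map, centered on the grid
    Returns the (row, col) of the chosen fixation.
    """
    d2 = detectability**2
    # circular convolution via FFT; ifftshift moves the kernel center to (0, 0)
    filt = np.real(np.fft.ifft2(np.fft.fft2(posterior)
                                * np.fft.fft2(np.fft.ifftshift(d2))))
    return np.unravel_index(np.argmax(filt), filt.shape)
```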
Gao, Chen-chen; Li, Feng-min; Lu, Lun; Sun, Yue
2015-10-01
For the determination of trace amounts of phthalic acid esters (PAEs) in a complex seawater matrix, a stir bar sorptive extraction gas chromatography mass spectrometry (SBSE-GC-MS) method was established. Dimethyl phthalate (DMP), diethyl phthalate (DEP), dibutyl phthalate (DBP), butyl benzyl phthalate (BBP), di(2-ethylhexyl) phthalate (DEHP) and dioctyl phthalate (DOP) were selected as study objects. The effects of extraction time, amount of methanol, amount of sodium chloride, desorption time and desorption solvent were optimized. The SBSE-GC-MS method was validated through recoveries and relative standard deviations. The optimal extraction time was 2 h, the optimal methanol content 10%, the optimal sodium chloride content 5%, the optimal desorption time 50 min, and the optimal desorption solvent a mixture of methanol and acetonitrile (4:1, v/v). The peak area was linearly related to the concentration of PAEs, with correlation coefficients greater than 0.997. The detection limits were between 0.25 and 174.42 ng·L^-1. The recoveries at different concentrations were between 56.97% and 124.22%, and the relative standard deviations were between 0.41% and 14.39%. Using this method, several estuarine water samples from Jiaozhou Bay were analyzed. DEP was detected in all samples, and the concentrations of BBP, DEHP and DOP were much higher than those of the other PAEs.
Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection
Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun
2016-01-01
Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. 
The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated by OKTAL-SE.
Ogunyemi, Omolola; Kermah, Dulcie
2015-01-01
Annual eye examinations are recommended for diabetic patients in order to detect diabetic retinopathy and other eye conditions that arise from diabetes. Medically underserved urban communities in the US have annual screening rates that are much lower than the national average and could benefit from informatics approaches to identify unscreened patients most at risk of developing retinopathy. Using clinical data from urban safety net clinics as well as public health data from the CDC's National Health and Nutrition Examination Survey, we examined different machine learning approaches for predicting retinopathy from clinical or public health data. All datasets utilized exhibited a class imbalance. Classifiers learned on the clinical data were modestly predictive of retinopathy with the best model having an AUC of 0.72, sensitivity of 69.2% and specificity of 55.9%. Classifiers learned on public health data were not predictive of retinopathy. Successful approaches to detecting latent retinopathy using machine learning could help safety net and other clinics identify unscreened patients who are most at risk of developing retinopathy and the use of ensemble classifiers on clinical data shows promise for this purpose.
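The AUC reported above can be computed directly from classifier scores as the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one; a pure-Python sketch:

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: P(pos > neg) + 0.5 * P(tie)."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```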
Temporal changes and variability in temperature series over Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Suhaila, Jamaludin
2015-02-01
With the current concern over climate change, descriptions of how temperature series have changed over time are very useful. Annual mean temperature has been analyzed for several stations over Peninsular Malaysia. Non-parametric statistical techniques such as the Mann-Kendall test and Theil-Sen slope estimator are used primarily for assessing the significance and detection of trends, while the non-parametric Pettitt's test and sequential Mann-Kendall test are adopted to detect any abrupt climate change. Statistically significant increasing trends in annual mean temperature are detected for almost all studied stations, with the magnitude of the significant trends varying from 0.02°C to 0.05°C per year. These results show that the climate over Peninsular Malaysia is getting warmer. In addition, the results of the abrupt-change analyses using Pettitt's test and the sequential Mann-Kendall test reveal trend onsets that can be related to El Niño episodes affecting Malaysia. In general, the analysis results can help local stakeholders and water managers to understand the risks and vulnerabilities related to climate change in terms of mean events in the region.
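The two trend tools named above are short to implement. A sketch of the Mann-Kendall Z statistic (without tie or autocorrelation corrections) and the Theil-Sen slope for an annual series:

```python
import math

def mann_kendall_z(x):
    """Mann-Kendall trend test statistic Z (no tie correction)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var)
    if s < 0:
        return (s + 1) / math.sqrt(var)
    return 0.0

def theil_sen_slope(x):
    """Median of all pairwise slopes; a robust trend magnitude per time step."""
    slopes = sorted((x[j] - x[i]) / (j - i)
                    for i in range(len(x)) for j in range(i + 1, len(x)))
    m = len(slopes)
    return slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])
```

At the usual 5% level, |Z| > 1.96 flags a significant trend, and the slope gives its magnitude (here, °C per year for an annual series).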
Clark, G.M.
1997-01-01
During May and June 1994, 37 water samples were collected at 31 sites in the upper Snake River Basin and analyzed for 83 pesticides and pesticide metabolites. EPTC, atrazine, and the atrazine metabolite deethylated atrazine were the most frequently detected and were found in 30, 20, and 13 of the samples, respectively. Fifteen additional pesticides were detected at least once. All the compounds detected were at concentrations of less than 1 microgram per liter. Total annual applications of EPTC and atrazine within subbasins and their instantaneous instream fluxes have a logarithmic relation with coefficients of determination (R2 values) of 0.55 and 0.62, respectively. At the time of sampling, the median daily flux of EPTC was about 0.0001% of the annual amount applied in a subbasin, whereas the median daily flux of atrazine was between 0.001 and 0.01%. The difference in fluxes between EPTC and atrazine probably results from differences in their physical properties and in the method and timing of application.
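The reported R2 values come from relating instream flux to annual application on logarithmic axes; a minimal sketch of that computation:

```python
import math

def log_log_r2(application, flux):
    """R^2 of a least-squares fit of log10(flux) against log10(application)."""
    xs = [math.log10(a) for a in application]
    ys = [math.log10(f) for f in flux]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)
```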
Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P
2010-10-30
Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
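The two-dimensional score (detection efficacy per unit hardware cost) can be illustrated with an exhaustive search over feature combinations. The combination rule below, which treats features as independent detectors, is a toy assumption for illustration, not the paper's circuit-model scoring:

```python
from itertools import combinations

def best_feature_set(features):
    """Pick the feature combination maximizing detection efficacy per unit
    hardware (power) cost.

    features : dict mapping feature name -> (efficacy in [0, 1], power cost)
    Combined efficacy uses a toy independence rule: 1 - prod(1 - e_i).
    """
    best, best_score = None, float("-inf")
    names = list(features)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            miss = 1.0
            for f in combo:
                miss *= 1.0 - features[f][0]
            eff = 1.0 - miss
            cost = sum(features[f][1] for f in combo)
            if eff / cost > best_score:
                best, best_score = combo, eff / cost
    return best, best_score
```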
Optimized velocity distributions for direct dark matter detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibarra, Alejandro; Rappelt, Andreas, E-mail: ibarra@tum.de, E-mail: andreas.rappelt@tum.de
We present a method to calculate, without making assumptions about the local dark matter velocity distribution, the maximal and minimal number of signal events in a direct detection experiment given a set of constraints from other direct detection experiments and/or neutrino telescopes. The method also allows one to determine the velocity distribution that optimizes the signal rates. We illustrate our method with three concrete applications: i) to derive a halo-independent upper limit on the cross section from a set of null results, ii) to confront in a halo-independent way a detection claim to a set of null results, and iii) to assess, in a halo-independent manner, the prospects for detection in a future experiment given a set of current null results.
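Discretizing the velocity distribution into bins turns the extremization into a linear program over the bin weights; with a single null-result constraint the optimum sits at a one- or two-bin vertex, which a short pure-Python search can enumerate. The response vectors here are hypothetical inputs, not any experiment's actual response:

```python
def extremal_events(r, a, b, maximize=True):
    """Extremize sum_i f_i*r_i over f_i >= 0, sum_i f_i = 1, sum_i f_i*a_i <= b.

    r : per-bin predicted signal in the target experiment
    a : per-bin predicted signal in the constraining (null-result) experiment
    b : upper bound implied by the null result
    With one linear constraint, the LP optimum is a single feasible bin or a
    two-bin mixture that saturates the constraint; enumerate those vertices.
    """
    n = len(r)
    cand = [[1.0 if k == i else 0.0 for k in range(n)]
            for i in range(n) if a[i] <= b]
    for i in range(n):
        for j in range(n):
            if i != j and a[i] != a[j]:
                t = (b - a[j]) / (a[i] - a[j])
                if 0.0 <= t <= 1.0:
                    f = [0.0] * n
                    f[i], f[j] = t, 1.0 - t
                    cand.append(f)
    vals = [(sum(fi * ri for fi, ri in zip(f, r)), f) for f in cand]
    return max(vals) if maximize else min(vals)
```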
Optimization of Breast Tomosynthesis Imaging Systems for Computer-Aided Detection
2011-05-01
R. Saunders, E. Samei, C. Badea, H. Yuan, K. Ghaghada, Y. Qi, L. Hedlund, and S. Mukundan, "Optimization of dual energy contrast enhanced breast..." This is the final report for this body of research. Screen-film mammography and digital mammography have been used for over 30 years in the early detection of cancer. The combination of screening and adjuvant therapies have led to
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
An enzyme-linked immunosorbent assay for detection of botulinum toxin-antibodies.
Dressler, Dirk; Gessler, Frank; Tacik, Pawel; Bigalke, Hans
2014-09-01
Antibodies against botulinum neurotoxin (BNT-AB) can be detected by the mouse protection assay (MPA), the hemidiaphragm assay (HDA), and by enzyme-linked immunosorbent assays (ELISA). Both MPA and HDA require the sacrifice of experimental animals, and they are technically delicate and labor intensive. We introduce a specially developed ELISA for detection of BNT-A-AB and evaluate it against the HDA. Thirty serum samples were tested by HDA and by the new ELISA. Results were compared, and receiver operating characteristic analyses were used to optimize the ELISA parameter constellation to obtain either maximal overall accuracy, maximal test sensitivity, or maximal test specificity. When the ELISA is optimized for sensitivity, a sensitivity of 100% and a specificity of 55% can be reached. When it is optimized for specificity, a specificity of 100% and a sensitivity of 90% can be obtained. We present an ELISA for BNT-AB detection that can, for the first time, be customized for special purposes. Adjusted for optimal sensitivity, it reaches the best sensitivity of all BNT-AB tests available. Using the new ELISA together with the HDA as a confirmation test allows testing for BNT-AB in large numbers of patients receiving BT drugs in an economical, fast, and more animal-friendly way. © 2014 International Parkinson and Movement Disorder Society.
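The ROC-based customization amounts to scanning decision thresholds and picking the one that maximizes overall accuracy, sensitivity, or specificity. A sketch on hypothetical assay readouts:

```python
def sens_spec(scores_pos, scores_neg, threshold):
    """Sensitivity and specificity of the rule 'positive if score >= threshold'."""
    sens = sum(s >= threshold for s in scores_pos) / len(scores_pos)
    spec = sum(s < threshold for s in scores_neg) / len(scores_neg)
    return sens, spec

def best_threshold(scores_pos, scores_neg, mode="accuracy"):
    """Scan observed readouts as candidate cutoffs; pick the one maximizing
    balanced accuracy, sensitivity, or specificity (ties broken toward the
    other index)."""
    cands = sorted(set(scores_pos) | set(scores_neg))
    def key(t):
        se, sp = sens_spec(scores_pos, scores_neg, t)
        if mode == "sensitivity":
            return (se, sp)
        if mode == "specificity":
            return (sp, se)
        return (se + sp, min(se, sp))
    return max(cands, key=key)
```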
Multiple crack detection in 3D using a stable XFEM and global optimization
NASA Astrophysics Data System (ADS)
Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.
2018-02-01
A numerical scheme is proposed for the detection of multiple cracks in three dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.
NASA Astrophysics Data System (ADS)
Su, X.; Shum, C. K.; Guo, J.; Howat, I.; Jezek, K. C.; Luo, Z.; Zhou, Z.
2017-12-01
Satellite altimetry has been used to monitor elevation and volume change of polar ice sheets since the 1990s. In order to derive mass change from the measured volume change, different density assumptions are commonly used in the research community, which may cause discrepancies in accurately estimating ice sheet mass balance. In this study, we investigate the inter-annual anomalies of mass change from GRACE gravimetry and elevation change from Envisat altimetry during the years 2003-2009, with the objective of determining inter-annual variations of snow/firn density over the Greenland ice sheet (GrIS). High positive correlations (0.6 or higher) between these two inter-annual anomalies are found over 93% of the GrIS, which suggests that both techniques detect the same geophysical process at the inter-annual timescale. Interpreting the two anomalies in terms of near-surface density variations, over 80% of the GrIS the inter-annual variation in average density lies between the densities of snow and pure ice. In particular, at the Summit of Central Greenland, we validate the satellite-estimated density with in situ data available from 75 snow pits and 9 ice cores. This study provides constraints on the currently applied density assumptions for the GrIS.
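Where the two anomaly series agree, an effective density follows as the regression slope of mass anomaly on volume anomaly (1 Gt per km³ equals 1000 kg/m³). A sketch using the paper's 0.6 correlation level as the gate; the exact estimator the authors used is not specified here:

```python
def effective_density(mass_gt, volume_km3, min_corr=0.6):
    """Effective near-surface density (kg/m^3) from co-located inter-annual
    anomaly series: the regression slope of mass (Gt) on volume (km^3),
    reported only where the two series correlate at min_corr or better."""
    n = len(mass_gt)
    mm = sum(mass_gt) / n
    mv = sum(volume_km3) / n
    cov = sum((m - mm) * (v - mv) for m, v in zip(mass_gt, volume_km3))
    var_v = sum((v - mv) ** 2 for v in volume_km3)
    var_m = sum((m - mm) ** 2 for m in mass_gt)
    corr = cov / (var_v * var_m) ** 0.5
    if corr < min_corr:
        return None  # the two techniques disagree; density not interpretable
    return 1000.0 * cov / var_v  # 1 Gt / 1 km^3 = 1000 kg/m^3
```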
Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.
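A minimal binary PSO of the standard sigmoid form illustrates the feature-selection loop; the fitness function supplied by the caller is a stand-in for the classifier accuracy used in the study, and this synchronous sketch is not the RA-PSO variant:

```python
import math
import random

def binary_pso(fitness, n_features, n_particles=12, iters=40, seed=1):
    """Minimal binary PSO: each particle is a boolean feature mask; velocities
    bias the probability that each bit is set via a sigmoid update."""
    rng = random.Random(seed)
    pos = [[rng.random() < 0.5 for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pfit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                # cognitive + social pulls (bools subtract as 0/1 integers)
                vel[i][d] += (2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                              + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d]))
            f = fitness(pos[i])
            if f > pfit[i]:
                pbest[i], pfit[i] = pos[i][:], f
                if f > gfit:
                    gbest, gfit = pos[i][:], f
    return gbest, gfit
```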