NASA Astrophysics Data System (ADS)
Leu, J.
2012-12-01
A former natural gas processing station is impacted with TPH and BTEX in groundwater. Air sparging and soil vapor extraction (AS/SVE) remediation systems had previously been operated at the site. Currently, a groundwater extraction and treatment system is operated to remove the chemicals of concern (COC) and prevent the groundwater plume from migrating offsite. A remedial process optimization (RPO) was conducted to evaluate the effectiveness of historic and current remedial activities and to recommend an approach for optimizing them. The RPO concluded that both the AS/SVE system and the groundwater extraction system have reached the practical limits of COC mass removal and COC concentration reduction. The RPO recommended an in-situ chemical oxidation (ISCO) study to identify the best ISCO oxidant and application approach. An ISCO bench test was conducted to evaluate COC removal efficiency and secondary impacts, and to recommend an application dosage. Ozone was selected from among four oxidants based on implementability, effectiveness, safety, and media impacts. The bench test concluded that ozone demand was 8 to 12 mg ozone/mg TPH and that secondary groundwater by-products of ISCO included hexavalent chromium and bromate. The pH increased moderately during ozone sparging, and the TDS increased by approximately 20% after 48 hours of ozone treatment. Prior to the ISCO pilot study, a capture zone analysis (CZA) was conducted to ensure containment of the injected oxidant within the existing groundwater extraction system. The CZA was conducted through groundwater flow modeling using MODFLOW. The model indicated that 85%, 90%, and 95% of an injected oxidant could be captured when a well pair is injecting and extracting at 2, 5, and 10 gallons per minute, respectively. An ISCO pilot test using ozone was conducted to evaluate operating parameters for ozone delivery. 
The ozone sparging system consisted of an ozone generator capable of delivering 6 lbs/day of ozone through two ozone sparging wells. A startup test was conducted to optimize sparging pressure and flow rate and to evaluate the radius of influence (ROI) and pulsed sparging frequency. The startup test results indicated the system is optimized at 6 psi pressure and 3 cfm flow rate, at an ozone sparging rate of 2 lbs/day at each sparging location. The results also indicated that a maximum ROI of 20 ft was reached, and the pulsed sparging frequency was estimated to be 60 minutes. The results at the completion of the pilot test showed that TPH concentrations in groundwater decreased by 97% during the two months of ozone sparging, but rebounded to near baseline levels in most groundwater monitoring wells. Concentrations of hexavalent chromium and bromate increased from non-detect to 44 and 110 μg/L, respectively, during ozone sparging, but attenuated to non-detect concentrations within three months following system shutdown. Field measurements during the pilot study displayed an increasing trend in both oxidation-reduction potential (ORP) and dissolved oxygen (DO). After ozone sparging was complete, the ORP and DO in the saturated zone returned to near baseline levels. Based on the results of the pilot study, a full-scale ISCO system using ozone was recommended.
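The bench-test ozone demand (8 to 12 mg ozone per mg TPH) and the pilot generator capacity (6 lbs/day) together bound the TPH mass the delivered ozone can oxidize per day. A minimal sketch of that arithmetic, where the unit conversion is the only input beyond the abstract's figures and the function name is illustrative:

```python
# Back-of-envelope ozone dosage check using figures reported in the abstract:
# demand of 8-12 mg O3 per mg TPH (bench test), generator capacity 6 lb/day.

LB_TO_KG = 0.45359237

def treatable_tph_kg_per_day(generator_lb_per_day, demand_mg_o3_per_mg_tph):
    """Mass of TPH (kg/day) that the delivered ozone can oxidize."""
    o3_kg_per_day = generator_lb_per_day * LB_TO_KG
    # the mg-O3-per-mg-TPH demand is a dimensionless mass ratio
    return o3_kg_per_day / demand_mg_o3_per_mg_tph

low  = treatable_tph_kg_per_day(6.0, 12.0)  # conservative (high demand)
high = treatable_tph_kg_per_day(6.0, 8.0)   # optimistic (low demand)
print(f"{low:.2f}-{high:.2f} kg TPH/day")
```

At full capacity the pilot system could therefore oxidize only a few tenths of a kilogram of TPH per day, which helps explain why a full-scale system was recommended.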
[Simulation of remediation of benzene-contaminated groundwater by air sparging].
Fan, Yan-Ling; Jiang, Lin; Zhang, Dan; Zhong, Mao-Sheng; Jia, Xiao-Yang
2012-11-01
Air sparging (AS) is one of the in situ remedial technologies used to remediate groundwater contaminated with volatile organic compounds (VOCs). At present, the field design of air sparging systems is based mainly on experience due to the lack of field data. In order to obtain rational design parameters, the TMVOC module in the Petrasim software package, combined with field test results from a coking plant in Beijing, was used to optimize the design parameters and simulate the remediation process. The pilot test showed that the optimal injection rate was 23.2 m³/h, while the optimal radius of influence (ROI) was 5 m. The simulation results revealed that the pressure response simulated by the model matched the field test results well, indicating that the simulation was representative. The optimization results indicated that the optimal injection location was at the bottom of the aquifer. Furthermore, when simulated at the optimized injection location, the optimal injection rate was 20 m³/h, in accordance with the field test result. The simulated optimal ROI was 3 m, less than the field test value; the main reason is that the field test reflected flow behavior near the top of the groundwater and in the unsaturated zone, where the width of the flow increases rapidly and appears larger than the actual one. With the above optimized operation parameters, in addition to the hydro-geological parameters measured on site, the model simulation revealed that 90 days were needed to remediate the benzene at the site from 371,000 µg/L to 1 µg/L, and that the operation mode in which injection wells were progressively turned off once the groundwater around them was "clean" was better than keeping all the wells operating throughout the remediation process.
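The simulated cleanup (371,000 µg/L to 1 µg/L of benzene in 90 days) can be summarized as an equivalent first-order decay rate. This is only a summary statistic of the result, not the TMVOC multiphase model itself:

```python
import math

# Equivalent first-order rate implied by the simulated cleanup:
# C(t) = C0 * exp(-k t)  =>  k = ln(C0/C) / t

def first_order_k(c0, c, t_days):
    return math.log(c0 / c) / t_days

k = first_order_k(371_000.0, 1.0, 90.0)   # ug/L start and end, 90 days
half_life = math.log(2) / k
print(f"k = {k:.3f} 1/day, half-life = {half_life:.1f} days")
```

The implied half-life of a few days reflects how aggressive the simulated removal is once the injection location and rate are optimized.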
Centrifugal study of zone of influence during air-sparging.
Hu, Liming; Meegoda, Jay N; Du, Jianting; Gao, Shengyan; Wu, Xiaofeng
2011-09-01
Air sparging (AS) is one of the groundwater remediation techniques for removing volatile organic compounds (VOCs) from saturated soil. However, in spite of the success of air sparging as a remediation technique for the cleanup of contaminated soils, the fundamental mechanisms and physics of air flow through porous media are still not well understood. In this study, centrifugal model tests were performed to investigate air flow rates and the evolution of the zone of influence during air sparging at various g-levels. The test results show that the air mass flow rate increases with sparging pressure, and increases linearly with the effective sparging pressure ratio, defined as the difference between sparging pressure and hydrostatic pressure normalized by the effective overburden pressure at the sparging point. The slope of mass flow rate versus effective sparging pressure ratio, M, increases linearly with g-level, confirming that the air flow through soil at a given effective sparging pressure ratio depends only on the g-level. The test results also show that with increasing sparging pressure, both components of the zone of influence (ZOI), the lateral intrusion (width at the tip of the cone) and the cone angle, increase. With a further increase in air injection pressure, the cone angle reaches a constant value while the lateral intrusion becomes the main contributor to enlargement of the ZOI. However, beyond a certain value of the effective sparging pressure ratio, there is no further enlargement of the ZOI.
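The effective sparging pressure ratio defined in the abstract is straightforward to compute. A sketch, with illustrative (hypothetical) pressure values:

```python
# Effective sparging pressure ratio as defined in the abstract:
# (sparging pressure - hydrostatic pressure) / effective overburden pressure
# at the sparge point. Any consistent pressure units work; Pa used here.

RHO_W = 1000.0   # water density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def effective_sparging_pressure_ratio(p_sparge_pa, depth_m,
                                      effective_overburden_pa):
    p_hydro = RHO_W * G * depth_m          # hydrostatic pressure at depth
    return (p_sparge_pa - p_hydro) / effective_overburden_pa

# Hypothetical example: 80 kPa injection pressure at 5 m depth,
# 60 kPa effective overburden at the sparge point.
r = effective_sparging_pressure_ratio(80e3, 5.0, 60e3)
print(f"ratio = {r:.2f}")
```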
Javadi, Najvan; Ashtiani, Farzin Zokaee; Fouladitajar, Amir; Zenooz, Alireza Moosavi
2014-06-01
Response surface methodology (RSM) and central composite design (CCD) were applied for modeling and optimization of cross-flow microfiltration of a Chlorella sp. suspension. The effects of the operating conditions, namely transmembrane pressure (TMP), feed flow rate (Qf), and optical density of the feed suspension (ODf), on the permeate flux, and their interactions, were determined. Analysis of variance (ANOVA) was performed to test the significance of the response surface model. The effect of the gas sparging technique and of different gas-liquid two-phase flow regimes on the permeate flux was also investigated. Maximum flux enhancement was 61% and 15% for Chlorella sp. with optical densities of 1.0 and 3.0, respectively. These results indicated that gas sparging was more efficient for low-concentration microalgae microfiltration, in which up to 60% enhancement was achieved in the slug flow pattern. Additionally, variations in the transmission of exopolysaccharides (EPS) and their effects on the fouling phenomenon were evaluated.
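A generic second-order response-surface model of the kind fitted in RSM/CCD studies can be estimated by ordinary least squares. The sketch below uses synthetic data, not the paper's measurements; only the factor names (TMP, Qf) follow the abstract:

```python
import numpy as np

# Second-order RSM model in two coded factors:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2

def design_matrix(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1**2, x2**2, x1 * x2])

rng = np.random.default_rng(0)
tmp = rng.uniform(-1, 1, 20)   # coded TMP levels (synthetic)
qf  = rng.uniform(-1, 1, 20)   # coded feed-flow levels (synthetic)
# synthetic flux response with known coefficients plus noise
flux = 50 + 8*tmp + 5*qf - 3*tmp**2 + rng.normal(0, 0.5, 20)

X = design_matrix(tmp, qf)
beta, *_ = np.linalg.lstsq(X, flux, rcond=None)
print("fitted coefficients:", np.round(beta, 2))
```

The fitted coefficients recover the known synthetic values; in a real CCD study the same matrix would be built from the coded design points and the measured permeate flux.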
Single-cell computational analysis of light harvesting in a flat-panel photo-bioreactor.
Loomba, Varun; Huber, Gregor; von Lieres, Eric
2018-01-01
Flat-panel photo-bioreactors (PBRs) are customarily applied for investigating growth of microalgae. Optimal design and operation of such reactors remain a challenge due to complex non-linear combinations of various impact factors, particularly hydrodynamics, light irradiation, and cell metabolism. A detailed analysis of single-cell light reception can lead to novel insights into the complex interactions of light exposure and algae movement in the reactor. The combined impacts of hydrodynamics and light irradiation on algae cultivation in a flat-panel PBR were studied by tracing the light exposure of individual cells over time. Hydrodynamics and turbulent mixing in this air-sparged bioreactor were simulated using the Eulerian approach for the liquid phase and a slip model for the gas-phase velocity profiles. The liquid velocity was then used for tracing single cells and their light exposure, using light intensity profiles obtained by solving the radiative transfer equation at different wavelengths. The residence times of algae cells in defined dark and light zones of the PBR were statistically analyzed for different algal concentrations and sparging rates. The results indicate poor mixing caused by the reactor design, which can be only partially improved by increased sparging rates. The results provide important information for optimizing algal biomass productivity by improving bioreactor design and operation, and can further be utilized for an in-depth analysis of algal growth using advanced models of cell metabolism.
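The study solves the radiative transfer equation; as a much simpler stand-in, Beer-Lambert attenuation illustrates how light decays with depth and biomass concentration in a flat panel, which is why dark zones form at all. All numbers below are hypothetical:

```python
import math

# Beer-Lambert attenuation: I(z) = I0 * exp(-sigma * X * z)
# sigma: biomass absorption cross-section (m^2/g), X: biomass (g/m^3),
# z: depth into the panel (m). A crude stand-in for full radiative transfer.

def beer_lambert(I0, sigma_m2_per_g, biomass_g_per_m3, depth_m):
    return I0 * math.exp(-sigma_m2_per_g * biomass_g_per_m3 * depth_m)

# Hypothetical: 500 umol/(m2 s) incident light, 1 g/L culture, 2 cm depth
I = beer_lambert(500.0, 0.15, 1000.0, 0.02)
print(f"{I:.0f} umol/(m2.s) remaining at 2 cm")
```

Even 2 cm into a dense culture, only a few percent of the incident light remains, which is why cell trajectories between light and dark zones, rather than a single average intensity, determine growth.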
Use of Surfactants to Decrease Air-Water Interfacial Tension During Sparging
Air sparging is a remediation procedure of injecting air into polluted ground water. The primary intention of air sparging is to promote biodegradation of volatile organic compounds (VOCs) in the groundwater passing through the treatment sector. Sparging treatment efficiency dep...
NASA Astrophysics Data System (ADS)
Choi, Jae-Kyeong; Kim, Heonki; Kwon, Hobin; Annable, Michael D.
2018-03-01
The effect of groundwater viscosity control on the performance of surfactant-enhanced air sparging (SEAS) was investigated using 1- and 2-dimensional (1-D and 2-D) bench-scale physical models. The viscosity of groundwater was controlled by a thickener, sodium carboxymethylcellulose (SCMC), while an anionic surfactant, sodium dodecylbenzene sulfonate (SDBS), was used to control the surface tension of groundwater. When resident DI water was displaced with a SCMC solution (500 mg/L), a SDBS solution (200 mg/L), and a solution with both SCMC (500 mg/L) and SDBS (200 mg/L), the air saturation for sand-packed columns achieved by air sparging increased by 9.5%, 128%, and 154%, respectively, (compared to that of the DI water-saturated column). When the resident water contained SCMC, the minimum air pressure necessary for air sparging processes increased, which is considered to be responsible for the increased air saturation. The extent of the sparging influence zone achieved during the air sparging process using the 2-D model was also affected by viscosity control. Larger sparging influence zones (de-saturated zone due to air injection) were observed for the air sparging processes using the 2-D model initially saturated with high-viscosity solutions, than those without a thickener in the aqueous solution. The enhanced air saturations using SCMC for the 1-D air sparging experiment improved the degradative performance of gaseous oxidation agent (ozone) during air sparging, as measured by the disappearance of fluorescence (fluorescein sodium salt). Based on the experimental evidence generated in this study, the addition of a thickener in the aqueous solution prior to air sparging increased the degree of air saturation and the sparging influence zone, and enhanced the remedial potential of SEAS for contaminated aquifers.
Air sparging in low permeability soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marley, M.C.
1996-08-01
Sparging technology is rapidly growing as a preferred, low-cost remediation technique at sites across the United States. The technology is considered commercially available and relatively mature. However, that maturity is based on the number of applications of the technology rather than on the degree of understanding of the mechanisms governing the sparging process. Few well-documented case studies exist on long-term operation of the technology. Sparging has generally been applied using modified monitoring well designs in uniform, coarse-grained soils. The applicability of sparging for the remediation of DNAPLs in low permeability media has not been significantly explored. Models for projecting the performance of sparging systems in either soil condition are generally simplistic but can provide general insight into the effects of significant changes in soil and fluid properties. The most promising sparging approaches for the remediation of DNAPLs in low permeability media are variations or enhancements of the core technology. Recirculatory sparging systems, sparging/biosparging trenches or curtains, and heating or induced fracturing techniques appear to be the most promising variants for this type of soil. 21 refs., 9 figs.
Kraemer, Jeremy T; Bagley, David M
2006-09-01
Dissolved H2 and CO2 were measured by an improved manual headspace-gas chromatographic method during fermentative H2 production with N2 sparging. Sparging increased the yield from 1.3 to 1.8 mol H2/mol glucose converted, although H2 and CO2 were still supersaturated regardless of sparging. The common assumption that sparging increases the H2 yield because of lower dissolved H2 concentrations may be incorrect, because H2 was not lowered into the range necessary to affect the relevant enzymes. More likely, N2 sparging decreased the rate of H2 consumption via lower substrate concentrations.
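The reported yields can be put in context against the 4 mol H2/mol glucose theoretical maximum for the acetate end-product pathway, a standard literature figure not stated in the abstract:

```python
# Yield comparison from the abstract: N2 sparging raised the yield from
# 1.3 to 1.8 mol H2 per mol glucose. The 4 mol/mol theoretical maximum
# (acetate pathway) is a standard literature value, added here for context.

THEORETICAL_MAX = 4.0  # mol H2 / mol glucose

def fraction_of_max(yield_mol_per_mol):
    return yield_mol_per_mol / THEORETICAL_MAX

unsparged, sparged = fraction_of_max(1.3), fraction_of_max(1.8)
print(f"unsparged: {unsparged:.2f} of max, sparged: {sparged:.2f} of max")
```

Even with sparging, the culture reaches under half the theoretical yield, consistent with the abstract's argument that something other than dissolved H2 inhibition limits the process.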
Effect of biogas sparging on the performance of bio-hydrogen reactor over a long-term operation.
Nualsri, Chatchawin; Kongjan, Prawit; Reungsang, Alissara; Imai, Tsuyoshi
2017-01-01
This study aimed to enhance hydrogen production from sugarcane syrup by biogas sparging. A two-stage system of a continuous stirred tank reactor (CSTR) and an upflow anaerobic sludge blanket (UASB) reactor was used to produce hydrogen and methane, respectively. Biogas produced from the UASB was sparged into the CSTR. Results indicated that sparging with biogas increased the hydrogen production rate (HPR) by 35% (from 17.1 to 23.1 L/L·d), resulting from a reduction in the hydrogen partial pressure. Fluctuation of the HPR was observed during long-term monitoring because CO2 in the sparging gas and the carbon source in the feedstock were consumed by Enterobacter sp. to produce succinic acid without hydrogen production. The mixed gas released from the CSTR after sparging can be considered bio-hythane (H2+CH4). In addition, continuous sparging of biogas into the CSTR relieves pressure in the headspace of the methane reactor; consequently, the methane production rate increased.
NASA Astrophysics Data System (ADS)
Barker, J.; Nelson, L.; Doughty, C.; Thomson, N.; Lambert, J.
2009-05-01
In the shallow, rather homogeneous, unconfined Borden sand aquifer, field trials of air sparging (Tomlinson et al., 2003) and pulsed air sparging (Lambert et al., 2009) have been conducted, the latter to remediate a residual gasoline source emplaced below the water table. In addition, a supersaturated (with CO2) water injection (SWI) technology, using the inVentures inFusion system, has been trialed in two phases: 1. in the uncontaminated sand aquifer to evaluate the radius of influence, extent of lateral gas movement, and gas saturation below the water table, and 2. in a sheet pile cell in the Borden aquifer to evaluate the recovery of volatile hydrocarbon components (pentane and hexane) of an LNAPL emplaced below the water table (Nelson et al., 2008). The SWI injects water supersaturated with CO2. The supersaturated injected water moves laterally away from the sparge point, releasing CO2 over a wider area than does gas sparging from a single well screen. This presentation compares these two techniques in terms of their potential for remediating volatile NAPL components occurring below the water table in a rather homogeneous sand aquifer. Air sparging created a significantly greater air saturation in the vicinity of the sparge well than did the CO2 system (60 percent versus 16 percent) in the uncontaminated Borden aquifer. However, SWI pushed water, still supersaturated with CO2, up to about 2.5 m from the injection well. This would seem to provide a considerable advantage over air sparging from a point, in that gas bubbles are generated at a much larger radius from the point of injection with SWI and so should involve additional gas pathways through a residual NAPL. Overall, air sparging created a greater area of influence, defined by measurable air saturation in the aquifer, but air sparging also injected about 12 times more gas than was injected in the SWI trials. The pulsed air sparging at Borden (Lambert et al.) 
removed about 20 percent (4.6 kg) of gasoline hydrocarbons, mainly pentane and hexane, from the residual gasoline via sparging. A similar mass was estimated to have been removed by aerobic biodegradation. The extent of volatile recovery needs to be better defined and so post-sparging coring and analysis of residual LNAPL is underway. Impressively, the second SWI trial recovered more than 60 percent of the pentane-hexane from the NAPL. In both field experiments there was potential for minor additional recovery if the system had been operated longer. Comparison of efficiency of the pulsed air sparging and SWI systems is difficult in that the initial LNAPL residuals have different chemistry, but similar distribution, different volumes of gas were used, and biodegradation accounted for a significant removal of hydrocarbons only in the air sparging system. The SWI trial recovered an impressive portion of the volatile LNAPL, while using considerably less gas than the air sparging system, but the SWI delivery system was both more complex and more expensive than the air sparging system. Additional trials are underway in more complex aquifers to further assess the performance of the SWI technology, including costs and practical limitations.
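The pulsed-sparging figures imply the size of the emplaced residual source: 4.6 kg removed at roughly 20 percent recovery gives the total. A one-line check:

```python
# Implied initial residual mass from the Borden pulsed-sparging figures:
# 4.6 kg of gasoline hydrocarbons removed corresponds to ~20% of the source.

def implied_total_kg(removed_kg, removed_fraction):
    return removed_kg / removed_fraction

print(f"~{implied_total_kg(4.6, 0.20):.0f} kg emplaced")  # ~23 kg
```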
Surfactants reduce platelet-bubble and platelet-platelet binding induced by in vitro air embolism.
Eckmann, David M; Armstead, Stephen C; Mardini, Feras
2005-12-01
The effect of gas bubbles on platelet behavior is poorly characterized. The authors assessed platelet-bubble and platelet-platelet binding in platelet-rich plasma in the presence and absence of bubbles and three surface-active compounds. Platelet-rich plasma was prepared from blood drawn from 16 volunteers. Experimental groups were surfactant alone, sparging (microbubble embolization) alone, sparging with surfactant, and neither sparging nor surfactant. The surfactants were Pluronic F-127 (Molecular Probes, Eugene, OR), Perftoran (OJSC SPC Perftoran, Moscow, Russia), and Dow Corning Antifoam 1510US (Dow Corning, Midland, MI). Videomicroscopy images of specimens drawn through rectangular glass microcapillaries on an inverted microscope and Coulter counter measurements were used to assess platelet-bubble and platelet-platelet binding, respectively, in calcium-free and recalcified samples. Histamine-induced and adenosine diphosphate-induced platelet-platelet binding were measured in unsparged samples. Differences between groups were considered significant for P < 0.05 using analysis of variance and the Bonferroni correction. Sixty to 100 platelets adhered to bubbles in sparged, surfactant-free samples. With sparging and surfactant, few platelets adhered to bubbles. Numbers of platelet singlets and multimers not adherent to bubbles were different (P < 0.05) compared both with unsparged samples and sparged samples without surfactant. No significant platelet-platelet binding occurred in uncalcified, sparged samples, although 20-30 platelets adhered to bubbles. Without sparging, histamine and adenosine diphosphate provoked platelet-platelet binding with and without surfactants present. Sparging causes platelets to bind to air bubbles and each other. Surfactants added before sparging attenuate platelet-bubble and platelet-platelet binding. Surfactants may have a clinical role in attenuating gas embolism-induced platelet-bubble and platelet-platelet binding.
Li, Ziyin; Xu, Xindi; Xu, Xiaochen; Yang, FengLin; Zhang, ShuShen
2015-12-01
A submerged anaerobic ammonium oxidizing (Anammox) membrane bioreactor with recycling biogas sparging for alleviating membrane fouling was successfully operated for 100 d. Based on batch tests, a recycling biogas sparging rate of 0.2 m³/h was fixed as the upper limit for sustainable operation. The mixed liquor volatile suspended solids (VSS) of the inoculum for the long-term operation was around 3000 mg/L. With the recycling biogas sparging rate increasing stepwise from 0 to 0.2 m³/h, the reactor reached an influent total nitrogen (TN) of up to 1.7 g/L, a stable TN removal efficiency of 83%, and a maximum specific Anammox activity (SAA) of 0.56 kg TN/(kg VSS·d). At a recycling biogas sparging rate of 0.2 m³/h (corresponding to an aeration intensity of 118 m³/(m²·h)), the membrane operation cycle was prolonged by around 20 times compared to operation without gas sparging. A mechanism of membrane fouling was also proposed: with recycling biogas sparging, the rates of increase of VSS and EPS content in the cake layer were far lower than without sparging. The TN removal performance and sustainable membrane operation of this system show the appealing potential of the submerged Anammox MBR with recycling biogas sparging for treating high-strength nitrogen-containing wastewaters.
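Dividing the recycle-gas rate (0.2 m³/h) by the stated aeration intensity (118 m³/(m²·h)) recovers the implied sparged cross-sectional area, a quick consistency check on the reported numbers:

```python
# Aeration intensity = gas rate / sparged cross-sectional area, so the
# abstract's two figures together imply the area of the sparged zone.

def implied_area_m2(gas_rate_m3_per_h, intensity_m3_per_m2_h):
    return gas_rate_m3_per_h / intensity_m3_per_m2_h

area = implied_area_m2(0.2, 118.0)
print(f"{area * 1e4:.0f} cm^2")
```

This gives roughly 17 cm², consistent with a small lab-scale submerged membrane module.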
Mohamed, A M I; El-menshawy, Nabil; Saif, Amany M
2007-05-01
Pollutants in the form of non-aqueous phase liquids (NAPLs), such as petroleum products, pose a serious threat to soil and groundwater. A mathematical model was derived to study unsteady pollutant concentrations in water-saturated contaminated soil under air sparging conditions for different NAPLs and soil properties. Comparison between the numerical model results and published experimental results showed acceptable agreement. Furthermore, an experimental study was conducted to remove NAPLs from contaminated soil using the air sparging technique, considering the sparging air velocity, air temperature, soil grain size, and contaminant properties. The study showed that sparging air at ambient temperature through the contaminated soil can remove NAPLs; however, employing hot air sparging can provide higher contaminant removal efficiency, by about 9%. An empirical correlation for the volatilization mass transfer coefficient was developed from the experimental results. The dimensionless numbers used were the Sherwood number (Sh), Peclet number (Pe), and Schmidt number (Sc), together with several physical-chemical properties of the VOCs and porous media. Finally, the estimated volatilization mass transfer coefficient was used to calculate the influence of heated sparging air on the spreading of the NAPL plume through the contaminated soil.
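The correlation's dimensionless groups follow standard definitions; since the fitted constants are not given in the abstract, only the definitions are sketched here, with hypothetical property values:

```python
# Standard dimensionless groups used in the abstract's correlation.
# The correlation's fitted constants are not reproduced (not in the abstract).

def schmidt(nu, D):
    """Sc = nu / D (kinematic viscosity over diffusivity)."""
    return nu / D

def peclet(v, L, D):
    """Pe = v * L / D (advection over diffusion)."""
    return v * L / D

def k_from_sherwood(Sh, D, L):
    """Sh = k * L / D, so the mass transfer coefficient k = Sh * D / L."""
    return Sh * D / L

# Hypothetical values: air kinematic viscosity, benzene diffusivity in air,
# 1 mm characteristic grain size, 0.01 m/s sparge velocity.
nu, D, L, v = 1.5e-5, 8.8e-6, 1e-3, 0.01
print(f"Sc = {schmidt(nu, D):.2f}, Pe = {peclet(v, L, D):.2f}")
```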
Hydrodynamic effects of air sparging on hollow fiber membranes in a bubble column reactor.
Xia, Lijun; Law, Adrian Wing-Keung; Fane, Anthony G
2013-07-01
Air sparging is now a standard approach to reduce concentration polarization and fouling of membrane modules in membrane bioreactors (MBRs). Hydrodynamic shear stresses, bubble-induced turbulence, and cross flows scour the membrane surfaces and help reduce the deposition of foulants onto the membrane surface. However, detailed quantitative knowledge of the effect of air sparging remains lacking in the literature due to the complex hydrodynamics generated by gas-liquid flows; to date, there is no validated model that describes the relationship between membrane fouling performance and flow hydrodynamics. The present study aims to examine quantitatively the impact of air-sparging-induced hydrodynamics on membrane fouling mitigation. A model hollow fiber module was placed in a cylindrical bubble column reactor at different axial heights, with the trans-membrane pressure (TMP) monitored under constant-flux conditions. The configuration of the bubble column without the immersed membrane module was identical to that studied by Gan et al. (2011) using Phase Doppler Anemometry (PDA), ensuring a good quantitative understanding of the turbulent flow conditions along the column height. The experimental results showed that the meandering flow regime at the 0.3 m height, which exhibits high flow instability, is more beneficial to fouling alleviation than the steady circulation regime at the 0.6 m height. The filtration tests also confirmed the existence of an optimal superficial air velocity beyond which a further increase provides no significant benefit for membrane fouling reduction. In addition, alternate aeration provided by two air stones mounted at opposite ends of the bubble column diameter was studied to investigate the associated flow dynamics and their influence on membrane filtration performance. It was found that, with a proper switching interval and membrane module orientation, membrane fouling can be effectively controlled at an even smaller superficial air velocity than the optimal value for a single air stone. Finally, tests with both inorganic and organic feeds showed that solid particle composition and particle size distribution both contribute to cake formation in a membrane filtration system.
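Under constant-flux operation, the TMP record is commonly interpreted with a resistance-in-series model, TMP = µJ(Rm + Rc). This generic form is an assumption for illustration, not the authors' analysis, and all numbers are hypothetical:

```python
# Resistance-in-series model for constant-flux filtration:
# TMP = mu * J * (Rm + Rc), where Rm is the clean-membrane resistance
# and Rc the cake resistance that grows as foulants deposit.

MU = 1.0e-3  # water viscosity, Pa.s

def tmp_pa(flux_m_per_s, r_membrane, r_cake):
    return MU * flux_m_per_s * (r_membrane + r_cake)

# Hypothetical: 20 L/(m2 h) flux, Rm = 1e12 1/m, cake Rc = 5e11 1/m
J = 20.0 / 1000 / 3600   # convert L/(m2 h) to m3/(m2 s)
print(f"TMP = {tmp_pa(J, 1e12, 5e11):.0f} Pa")
```

In this framing, effective air sparging shows up as a slower growth of Rc, and hence a flatter TMP curve, at fixed J.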
FIELD TEST OF AIR SPARGING COUPLED WITH SOIL VAPOR EXTRACTION
A controlled field study was designed and conducted to assess the performance of air sparging for remediation of petroleum fuel and solvent contamination in a shallow (3-m deep) groundwater aquifer. Sparging was performed in an insolation test cell (5 m by 3 m by 8-m deep). A soi...
Pleasant, Saraya; O'Donnell, Amanda; Powell, Jon; Jain, Pradeep; Townsend, Timothy
2014-07-01
High concentrations of iron (Fe(II)) and manganese (Mn(II)) reductively dissolved from soil minerals have been detected in groundwater monitoring wells near many municipal solid waste landfills. Air sparging and vadose zone aeration (VZA) were evaluated as remedial approaches at a closed, unlined municipal solid waste landfill in Florida, USA. The goal of aeration was to oxidize Fe and Mn to their respective immobile forms. VZA and shallow air sparging using a partially submerged well screen were employed with limited success (Phase 1); decreases in dissolved iron were observed in three of nine monitoring wells during shallow air sparging and in two of 17 wells at VZA locations. During Phase 2, where deeper air sparging was employed, dissolved iron levels decreased in a significantly greater number of monitoring wells surrounding injection points; however, no radial pattern was observed. Additionally, in wells affected positively by air sparging (mean total iron (FeTOT) < 4.2 mg/L after commencement of air sparging), rising manganese concentrations were observed, indicating that the redox potential of the groundwater moved from an iron-reducing to a manganese-reducing environment. The mean FeTOT concentration observed in affected monitoring wells throughout the study was 1.40 mg/L compared to a background of 15.38 mg/L, while the mean Mn concentration was 0.60 mg/L compared to a background level of 0.27 mg/L. Reference wells located beyond the influence of air sparging areas showed little variation in FeTOT and Mn, indicating the observed effects were the result of air injection activities at study locations and not a natural phenomenon. Air sparging was found effective in intercepting plumes of dissolved Fe surrounding municipal landfills, but the effect on dissolved Mn was contrary to the desired outcome of decreased Mn groundwater concentrations.
Kim, Juyoung; Kim, Heonki; Annable, Michael D
2015-01-01
Air injected into an aquifer during air sparging normally flows upward according to the pressure gradients and buoyancy, and the direction of air flow depends on the natural hydrogeologic setting. In this study, a new method for controlling air flow paths in the saturated zone during air sparging processes is presented. Two hydrodynamic parameters, viscosity and surface tension of the aqueous phase in the aquifer, were altered using appropriate water-soluble reagents distributed before initiating air sparging. Increased viscosity retarded the travel velocity of the air front during air sparging by modifying the viscosity ratio. In a one-dimensional column packed with water-saturated sand, the velocity of air intrusion into the saturated region under a constant pressure gradient was inversely proportional to the viscosity of the aqueous solution. The air flow direction, and thus the air flux distribution, was measured using gaseous flux meters placed at the sand surface during air sparging experiments in both two- and three-dimensional physical models. Air flow was found to be influenced by the presence of an aqueous patch of high viscosity or suppressed surface tension in the aquifer. Air flow was selective through the low-surface-tension (46.5 dyn/cm) region, whereas an aqueous patch of high viscosity (2.77 cP) acted as an effective air flow barrier. Formation of a low-surface-tension region in the target contaminated zone of the aquifer, before the air sparging process is initiated, may induce air flow through the target zone, maximizing the contaminant removal efficiency of the injected air. In contrast, a region of high viscosity in the air sparging influence zone may minimize air flow through the region, preventing it from de-saturating. Copyright © 2014 Elsevier B.V. All rights reserved.
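The inverse proportionality between aqueous viscosity and air-front velocity reported for the column experiments can be illustrated with a simple Darcy-type sketch. The permeability, pressure difference, and column length below are illustrative assumptions, not values from the study; only the 2.77 cP viscosity comes from the abstract.

```python
# Darcy-type estimate: air-front velocity v ~ k * dP / (mu * L), so the
# velocity scales inversely with the aqueous viscosity mu. Illustrative
# parameter values (assumed, not from the study) except mu_thick.

def air_front_velocity(k_m2, dP_pa, mu_pa_s, L_m):
    """Darcy-type estimate of air-front travel velocity (m/s)."""
    return k_m2 * dP_pa / (mu_pa_s * L_m)

k = 1e-11        # sand permeability, m^2 (illustrative)
dP = 5.0e3       # applied pressure difference, Pa (illustrative)
L = 0.5          # column length, m (illustrative)

mu_water = 1.00e-3    # Pa*s, plain water
mu_thick = 2.77e-3    # Pa*s, viscosity-modified solution from the abstract

v_water = air_front_velocity(k, dP, mu_water, L)
v_thick = air_front_velocity(k, dP, mu_thick, L)

print(v_water / v_thick)   # ≈2.77: the front slows by the viscosity ratio
```

The ratio of the two velocities reproduces the viscosity ratio, which is the mechanism by which a high-viscosity patch acts as an air flow barrier.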
Improving the yield from fermentative hydrogen production.
Kraemer, Jeremy T; Bagley, David M
2007-05-01
Efforts to increase H2 yields from fermentative H2 production include heat treatment of the inoculum, dissolved gas removal, and varying the organic loading rate. Although heat treatment kills methanogens and selects for spore-forming bacteria, the available evidence indicates H2 yields are not maximized compared to bromoethanesulfonate, iodopropane, or perchloric acid pre-treatments, and spore-forming acetogens are not killed. Operational controls (low pH, short solids retention time) can replace heat treatment. Gas sparging increases H2 yields compared to un-sparged reactors, but no relationship exists between the sparging rate and H2 yield. Lower sparging rates may improve the H2 yield with less energy input and product dilution. The reasons why sparging improves H2 yields are unknown, but recent measurements of dissolved H2 concentrations during sparging suggest the assumption of decreased inhibition of the H2-producing enzymes is unlikely. Significant disagreement exists over the effect of organic loading rate (OLR); some studies show relatively higher OLRs improve H2 yield while others show the opposite. Discovering the reasons for higher H2 yields during dissolved gas removal and changes in OLR will help improve H2 yields.
Estimating the change of porosity in the saturated zone during air sparging.
Tsai, Yih-jin; Kuo, Yu-chia; Chen, Tsu-chi; Chou, Feng-chih
2006-01-01
Air sparging is a remedial method for groundwater. The remedial region is similar to the air flow region in the saturated zone. If soil particles are transported during air sparging, the porosity distribution in the saturated zone changes, which may alter the flow path of the air. To better understand the particle movement, this study performed a sandbox test to estimate the change in soil porosity during air sparging. A clear fracture formed, and particle movement was observed, when the air injection was started. The transported sand filled the pores around the fracture and the repacked sand filled the fracture, reducing the porosity around the fracture. The results obtained from the photographs of the sandbox, the current measurements, and the direct sand sample measurements were close to each other and are credible. Therefore, air injection during air sparging causes movement of sand particles, altering the characteristics of the sand matrix and the air distribution.
2003-01-01
Oil/Water Emulsion and Aqueous Film Forming Foam (AFFF) Treatment Using Air-Sparged Hydrocyclone Technology, January 2003.
Optimized inorganic carbon regime for enhanced growth and lipid accumulation in Chlorella vulgaris.
Lohman, Egan J; Gardner, Robert D; Pedersen, Todd; Peyton, Brent M; Cooksey, Keith E; Gerlach, Robin
2015-01-01
Large-scale algal biofuel production has been limited, among other factors, by the availability of inorganic carbon in the culture medium at concentrations higher than achievable with atmospheric CO2. Life cycle analyses have concluded that costs associated with supplying CO2 to algal cultures are significant contributors to the overall energy consumption. A two-phase optimal growth and lipid accumulation scenario is presented, which enhances (1) the growth rate and (2) the triacylglyceride (TAG) accumulation rate in the oleaginous Chlorophyte Chlorella vulgaris strain UTEX 395, by growing the organism in the presence of low concentrations of NaHCO3 (5 mM) and controlling the pH of the system with a periodic gas sparge of 5% CO2 (v/v). Once cultures reached the desired cell densities, which can be "fine-tuned" based on initial nutrient concentrations, cultures were switched to a lipid accumulation metabolism through the addition of 50 mM NaHCO3. This two-phase approach increased the specific growth rate of C. vulgaris by 69% compared to cultures sparged continuously with 5% CO2 (v/v); further, biomass productivity (g L(-1) day(-1)) was increased by 27%. Total biodiesel potential [assessed as total fatty acid methyl ester (FAME) produced] was increased from 53.3 to 61% (FAME biomass(-1)) under the optimized conditions; biodiesel productivity (g FAME L(-1) day(-1)) was increased by 7.7%. A bicarbonate salt screen revealed that American Chemical Society (ACS) and industrial grade NaHCO3 induced the highest TAG accumulation (% w/w), whereas Na2CO3 did not induce significant TAG accumulation. NH4HCO3 had a negative effect on cell health, presumably due to ammonia toxicity. The raw, unrefined form of trona, NaHCO3∙Na2CO3 (sodium sesquicarbonate), induced TAG accumulation, albeit to a slightly lower extent than the more refined forms of sodium bicarbonate.
The strategic addition of sodium bicarbonate was found to enhance growth and lipid accumulation rates in cultures of C. vulgaris, when compared to traditional culturing strategies, which rely on continuously sparging algal cultures with elevated concentrations of CO2(g). This work presents a two-phased, improved photoautotrophic growth and lipid accumulation approach, which may result in an overall increase in algal biofuel productivity.
CO2 Sparging Phase 3 Full Scale Implementation and Monitoring Report
In-situ carbon dioxide (CO2) sparging was designed and implemented to treat a subsurface caustic brine pool (CBP) formed as a result of releases from historical production of industrial chemicals at the LCP Chemicals Site, Brunswick, GA (Site).
CO2 Sparging Work Plan, LCP Chemicals
April 24, 2013 plan prepared by Mutch Associates, LLC for implementation of full-scale CO2 sparging of the subsurface caustic brine pool (CBP) at the LCP Chemicals site in Brunswick, Georgia. Region ID: 04 DocID: 10941341, DocDate: 04-24-2013
CO2 Sparging Proof of Concept Test Report, Revision 1, LCP Chemicals Site, Brunswick, Georgia
April 2013 report to evaluate the feasibility of CO2 sparging to remediate a sub-surface caustic brine pool (CBP) at the LCP Chemicals Superfund Site, GA. Region ID: 04, DocID: 10940639, DocDate: 2013-04-01
PULSED AIR SPARGING IN AQUIFERS CONTAMINATED WITH DENSE NONAQUEOUS PHASE LIQUIDS
Air sparging was evaluated for remediation of tetrachloroethylene (PCE) present as dense nonaqueous phase liquid (DNAPL) in aquifers. A two-dimensional laboratory tank with a transparent front wall allowed for visual observation of DNAPL mobilization. A DNAPL zone 50 cm high was ...
Trzcinski, Antoine P; Stuckey, David C
2016-03-01
This paper focuses on the treatment of leachate from the organic fraction of municipal solid waste (OFMSW) in a submerged anaerobic membrane bioreactor (SAMBR). Operation of the SAMBR for this type of high-strength wastewater was shown to be feasible at a 5-day hydraulic retention time (HRT), a 10 L min(-1) (LPM) biogas sparging rate, and membrane fluxes in the range of 3-7 L m(-2) hr(-1) (LMH). Under these conditions, more than 90% COD removal was achieved during 4 months of operation without chemical cleaning of the membrane. When the sparging rate was reduced to 2 LPM, the transmembrane pressure increased dramatically and the bulk soluble COD concentration increased due to a thicker fouling layer, while permeate soluble COD remained constant. Permeate soluble COD concentration increased by 20% when the sparging rate was increased to 10 LPM. Copyright © 2015 Elsevier Ltd. All rights reserved.
Urum, Kingsley; Pekdemir, Turgay; Ross, David; Grigson, Steve
2005-07-01
This study investigated the removal of crude oil from soil using air-sparging-assisted stirred tank reactors. Two surfactants (rhamnolipid and sodium dodecyl sulfate, SDS) were tested, and the effects of different parameters (i.e., temperature, surfactant concentration, washing time, volume/mass ratio) were investigated under varying washing modes, namely stirring only, air sparging only, and the combination of stirring and air sparging. The results showed that SDS removed more than 80% of crude oil from non-weathered soil samples, whilst rhamnolipid showed similar oil removal at the third and fourth levels of the parameters tested. The oil removal ability of the seawater-prepared solutions was better than that of the distilled-water solutions at the first and second levels of temperature and surfactant solution concentration. This approach to soil washing was noted to be effective in reducing the amount of oil in soil. We therefore suggest that a field-scale test be conducted to assess the efficiency of these surfactants.
Peterson, Eric C; Daugulis, Andrew J
2014-03-01
Production of organic acids in solid-liquid two-phase partitioning bioreactors (TPPBs) is challenging and highly pH-dependent, as cell growth occurs near neutral pH, while acid sorption occurs only at low pH. CO2 sparging was used to achieve acidic pH swings, facilitating undissociated organic acid uptake without generating the osmotic stress inherent in traditional acid/base pH control. A modified cultivation medium was formulated to permit greater pH reduction by CO2 sparging (pH 4.8) compared to typical media (pH 5.3), while still possessing adequate nutrients for extensive cell growth. In situ product recovery (ISPR) of butyric acid (pKa = 4.8) produced by Clostridium tyrobutyricum was achieved through intermittent CO2 sparging while recycling reactor contents through a column packed with the absorptive polymer Hytrel® 3078. This polymer was selected on the basis of its composition as a polyether copolymer and the use of solubility parameters for predicting solute-polymer affinity, and was found to have a partition coefficient for butyric acid of 3. Total polymeric extraction of 3.2 g butyric acid with no CO2-mediated pH swings was increased to 4.5 g via CO2-facilitated pH shifting, despite the buffering capacity of butyric acid, which resists pH shifting. This work shows that CO2-mediated pH swings have an observable positive effect on organic acid extraction, with improvements well over 150% under optimal conditions in early-stage fermentation compared to CO2-free controls, and this technique can be applied to other organic acid fermentations to achieve or improve ISPR. © 2013 Wiley Periodicals, Inc.
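The role of the CO2-driven pH swing can be illustrated with the Henderson-Hasselbalch relation, since only the undissociated acid partitions into the polymer. A minimal sketch, assuming ideal monoprotic behaviour and using only the pKa and pH values quoted in the abstract:

```python
# Fraction of butyric acid in the undissociated (extractable) HA form,
# from the Henderson-Hasselbalch relation. pKa = 4.8 and the pH values
# (typical media pH 5.3, CO2-sparged pH 4.8) are from the abstract.

def undissociated_fraction(pH, pKa):
    """Fraction of a monoprotic acid in its undissociated (HA) form."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

pKa = 4.8
for pH in (7.0, 5.3, 4.8):
    print(pH, undissociated_fraction(pH, pKa))

# Near-neutral growth pH leaves well under 1% of the acid in the HA form;
# sparging down to pH 4.8 (= pKa) brings the undissociated fraction to 50%.
```

This is why shifting from growth pH to pH 4.8 so strongly improves polymeric uptake, even though the absolute pH change is modest.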
GREEN AND SUSTAINABLE REMEDIATION BEST MANAGEMENT PRACTICES
2016-09-07
adoption. The technologies covered include air sparging, biosparging, soil vapor extraction (SVE), enhanced reductive dechlorination (ERD), in situ...
Final Work Plan for CO2 Sparging Proof of Concept Test, LCP Chemical Site
September 11, 2012 plan to address concerns on a pilot test of carbon dioxide sparging to neutralize pH and reduce the density of the Caustic Brine Pool (CBP) at the LCP Chemicals Superfund Site, GA. Region ID: 04 DocID: 10903388, DocDate: 09-11-2012
In situ air sparging (IAS) has been proposed and installed at an increasing number of sites to address contamination in both the saturated and unsaturated zones. Because of the lack of experimental and substantive performance data, however, the actual performance and effectivene...
The efficacy of soil vacuum extraction or air sparging and soil vacuum extraction for remediation of ground water contamination with MTBE was compared to remediation of contamination with benzene. There was no practical difference.
Sites were identified that met the followin...
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-01-01
This decision document presents the selected remedy for surficial groundwater for a portion of Operable Unit (OU) No. 10 (Site 35), Marine Corps Base (MCB), Camp Lejeune, North Carolina. Five Remedial Action Alternatives (RAAs) were evaluated as part of an interim remedial investigation/feasibility study for surficial groundwater at OU No. 10 (Site 35). These RAAs included RAA 1 (No Action), RAA 2 (No Action With Institutional Controls), RAA 3 (Groundwater Collection and On-site Treatment), RAA 4 (In Situ Air Sparging and Off-Gas Carbon Adsorption), and RAA 5 (In Well Aeration and Off-Gas Adsorption). After all five RAAs were compared to established criteria, RAA 5 was selected as the preferred alternative.
Li, Na; Hu, Yi; Lu, Yong-Ze; Zeng, Raymond J; Sheng, Guo-Ping
2016-07-01
In recent years, anaerobic membrane bioreactor (AnMBR) technology has been considered a very attractive alternative for wastewater treatment due to striking advantages such as upgraded effluent quality. However, fouling control is still a problem for the application of AnMBRs. This study investigated the performance of an AnMBR using a mesh filter as support material to treat low-strength wastewater with in-situ biogas sparging. It was found that the mesh AnMBR exhibited high and stable chemical oxygen demand (COD) removal efficiencies of 95 ± 5 % and an average methane yield of 0.24 L CH4/g CODremoved. Variation of transmembrane pressure (TMP) during operation indicated that mesh fouling was mitigated by in-situ biogas sparging and that the fouling rate was comparable to that of aerobic membrane bioreactors with mesh filters reported in previous studies. The fouling layer formed on the mesh exhibited a non-uniform structure; the porosity became larger from the bottom layer to the top layer. Biogas sparging did not change the composition of the cake layer but reduced its thickness, which might be beneficial for reducing the membrane fouling rate. It was also found that ultrasonic cleaning of the fouled mesh was able to remove most foulants on the surface or in the pores. This study demonstrated that in-situ biogas sparging enhanced the performance of AnMBRs with mesh filters in low-strength wastewater treatment. AnMBRs with mesh filters can thus be considered a promising and sustainable technology for wastewater treatment.
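The reported removal efficiency and methane yield fix the expected gas output for a given organic load. A back-of-envelope sketch, where the influent COD and flow are labelled assumptions for a low-strength wastewater, not values from the study:

```python
# Methane production implied by the reported AnMBR figures: 95% COD
# removal and 0.24 L CH4 per g COD removed (both from the abstract).
# Influent strength and flow are illustrative assumptions.

cod_in_g_per_L = 0.5      # influent COD, g/L (illustrative)
flow_L_per_day = 1000.0   # treated flow, L/d (illustrative)
removal = 0.95            # COD removal efficiency (from the abstract)
yield_L_per_g = 0.24      # L CH4 per g COD removed (from the abstract)

cod_removed_g_per_day = cod_in_g_per_L * flow_L_per_day * removal
ch4_L_per_day = cod_removed_g_per_day * yield_L_per_g

print(cod_removed_g_per_day)   # ≈475 g COD removed per day
print(ch4_L_per_day)           # ≈114 L CH4 per day
```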
Groundwater remediation engineering sparging using acetylene--study on the flow distribution of air.
Zheng, Yan-Mei; Zhang, Ying; Huang, Guo-Qiang; Jiang, Bin; Li, Xin-Gang
2005-01-01
Air sparging (AS) is an emerging method to remove VOCs from saturated soils and groundwater. Air sparging performance depends strongly on the resulting air distribution in the aquifer. To study gas flow characterization, a two-dimensional experimental chamber was designed and installed, and a method using acetylene as a tracer to directly image the gas distribution of the AS process was developed. Experiments were performed at different injected gas flow rates. The gas flow patterns were found to depend significantly on the injected gas flow rate, and the characterization of gas flow distributions in porous media differed markedly from that obtained in the acetylene tracing study. Both lower and higher gas flow rates generally yield gas distributions that are more irregular in shape and less effective.
Field-Scale Evaluation of Monitored Natural Attenuation for Dissolved Chlorinated Solvent Plumes
2009-04-01
biological in-situ treatment, an air sparging pilot study, and a phytoremediation study. The innovative technology studies were conducted within the source... phytoremediation (June to September 1997), reductive anaerobic biological in-situ treatment technology (RABITT; 1998), and groundwater recirculation wells... [figure residue: measured concentrations (µg/L) in well 1381MWS09 during the Air Sparge Pilot Test (1996/1997), Phytoremediation Pilot Test (1997), and RABITT Pilot Test (1998)]
Remediation by Natural Attenuation Treatability Study for Operable Unit 5
1997-12-01
remaining as a result of all attenuation processes is equivalent to the fraction of contaminant remaining as a result of non-destructive attenuation... Alternative 1: RNA combined with LTM, institutional controls, air sparging along Main Street, and groundwater extraction and treatment near well pair MW137/MW138. Alternative 2: RNA, LTM, institutional controls, air sparging along Main Street, groundwater
Michaels, J D; Mallik, A K; Papoutsakis, E T
1996-08-20
It has been established that the forces resulting from bubbles rupturing at the free air (gas)/liquid surface injure animal cells in agitated and/or sparged bioreactors. Although it has been suggested that bubble coalescence and breakup within agitated and sparged bioreactors (i.e., away from the free liquid surface) can be a source of cell injury as well, the evidence has been indirect. We have carried out experiments to examine this issue. The free air/liquid surface in a sparged and agitated bioreactor was eliminated by completely filling the 2-L reactor and allowing sparged bubbles to escape through an outlet tube. Two identical bioreactors were run in parallel to make comparisons between cultures that were oxygenated via direct air sparging and the control culture in which silicone tubing was used for bubble-free oxygenation. Thus, cell damage from cell-to-bubble interactions due to processes (bubble coalescence and breakup) occurring in the bulk liquid could be isolated by eliminating damage due to bubbles rupturing at the free air/liquid surface of the bioreactor. We found that Chinese hamster ovary (CHO) cells grown in medium that does not contain shear-protecting additives can be agitated at rates up to 600 rpm without being damaged extensively by cell-to-bubble interactions in the bulk of the bioreactor. We verified this using both batch and high-density perfusion cultures. We tested two impeller designs (pitched blade and Rushton) and found them not to affect cell damage under similar operational conditions. Sparger location (above vs. below the impeller) had no effect on cell damage at higher agitation rates but may affect the injury process at lower agitation intensities (here, below 250 rpm). In the absence of a headspace, we found less cell damage at higher agitation intensities (400 and 600 rpm), and we suggest that this nonintuitive finding derives from the important effect of bubble size and foam stability on the cell damage process.
(c) 1996 John Wiley & Sons, Inc.
Design and use of a sparged platform for energy flux measurements over lakes
NASA Astrophysics Data System (ADS)
Gijsbers, S.; Wenker, K.; van Emmerik, T.; de Jong, S.; Annor, F.; Van De Giesen, N.
2012-12-01
Energy flux measurements over lakes or reservoirs demand relatively stable platforms. Platforms cannot be stabilized by fixing them to the bottom of the lake when the water body is too deep or when water levels show significant fluctuations. We present the design and first operational results of a sparged platform. The structure consists of a long PVC pipe, the sparge, which is closed at the bottom. On the PVC pipe rests an aluminum frame platform that carries the instrumentation and a solar power panel. In turn, the platform rests partially on a large inflated tire. At the bottom of the PVC pipe, lead weights and batteries were placed to ensure a very low center of gravity to minimize wave impact on the platform movement. The tire ensures a large second moment of the water plane. The overall volume of displacement is small in this sparged design. The combination of a large second moment of the water plane and small displacement ensures a high placement of the metacenter. The distance between the center of gravity and the metacenter is relatively long, and the weight is large due to the weights and batteries. This ensures that the eigenfrequency of the platform is very low. The instrumentation load consisted of a WindMaster Pro (sonic anemometer for 3D wind speed and air temperature to perform eddy covariance measurements of sensible heat flux), an NR Lite (net radiometer), and air temperature and relative humidity sensors. The platform had a wind vane, and the sparge could turn freely around its anchor cable to ensure that the anemometer always faced upwind. A compass in the logger completed this setup. The stability was measured with an accelerometer. In addition to the design and its stability, some first energy flux results will be presented.
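The stability argument (mass concentrated low in the spar, high metacenter, low eigenfrequency) can be sketched with the small-angle rocking relation omega = sqrt(m*g*GM / I). All numeric values below are illustrative assumptions, not design figures from the platform.

```python
import math

# Small-angle natural rocking period of a floating platform:
#   omega = sqrt(m * g * GM / I),  T = 2*pi / omega
# A large moment of inertia I (ballast placed low and far from the
# waterline) lengthens the period even when GM is generous.
# All numbers are illustrative assumptions.

def rocking_period(m_kg, GM_m, I_kg_m2, g=9.81):
    """Small-angle natural rocking period (s) of a floating platform."""
    omega = math.sqrt(m_kg * g * GM_m / I_kg_m2)
    return 2.0 * math.pi / omega

m = 120.0    # platform + ballast mass, kg (illustrative)
GM = 0.4     # metacentric height, m (illustrative)
I = 600.0    # moment of inertia about the rotation axis, kg*m^2 (illustrative)

print(rocking_period(m, GM, I))   # ≈7 s, longer than typical lake wave periods
```

A rocking period well above the 1 to 3 s wave periods typical of lakes is what keeps wave forcing from resonating with the platform.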
2002-08-12
treatment zone increases with increasing separation. It is important to ensure a good annular air flow seal between the top of the screened interval and... seals are critical to successful air sparging operation. In their absence, the injected air will flow up along the well bore and the well will be... glass beads and model homogeneous and heterogeneous subsurface hydrogeologic settings were simulated. The goal of the study was to observe how the
Chao, Huan-Ping; Hsieh, Lin-Han Chiang; Tran, Hai Nguyen
2018-02-15
This study developed a novel method to promote the remediation efficiency of air sparging. According to the enhanced-volatilization theory presented in this study, selected alcohols added to groundwater can greatly enhance the volatilized amounts of organic compounds with high Henry's law constants. In this study, the target organic compounds consisted of n-hexane, n-heptane, benzene, toluene, 1,1,2-trichloroethane, and tetrachloroethene. n-Pentanol, n-hexanol, and n-heptanol were used to examine the changes in the volatilization amounts of the organic compounds over the given period. Two types of soils, with high and low organic matter, were used to evaluate the transport of the organic compounds in the soil-water system. The volatilization amounts of the organic compounds increased with increasing alcohol concentration. The enhancement followed a decreasing order: n-heptanol > n-hexanol > n-pentanol. When 10 mg/L n-heptanol was added to the system, the maximum volatilization enhancement was 18-fold higher than that in distilled water. Soil with high organic matter might reduce the volatilization amounts by a factor of 5-10. In the present study, the optimal removal efficiency for aromatic compounds was approximately 98%. Copyright © 2017 Elsevier B.V. All rights reserved.
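The selection criterion behind the compound list above is the Henry's law constant: sparging favors compounds that partition strongly into the gas phase. A quick screen using approximate dimensionless Henry constants (rounded literature figures at roughly 25 °C, included only for illustration, not data from the study):

```python
# Rank candidate compounds by approximate dimensionless Henry constant
# H_cc = Cgas/Caq at ~25 C. Values are rounded literature figures used
# only for illustration; they are not from the study.

henry_cc = {
    "n-hexane": 70.0,            # alkanes volatilize very readily from water
    "tetrachloroethene": 0.72,
    "benzene": 0.22,
    "toluene": 0.26,
}

# Higher H_cc means a larger equilibrium gas-phase concentration per unit
# aqueous concentration, i.e. easier removal by sparging.
ranked = sorted(henry_cc, key=henry_cc.get, reverse=True)
for name in ranked:
    print(f"{name:20s} H_cc = {henry_cc[name]}")
```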
Potential for Biodegradation of the Alkaline Hydrolysis End Products of TNT and RDX
2007-11-01
Bellco Glass, Inc. (Vineland, NJ). The stainless steel, deflected-point needles used in sparging (18 G, 6 in. and 12 in.) were manufactured by Popper and... Figure 4. Gas sparging of anaerobic cultures showing the direction of flow of the CO2-free carrier gas through the sample... determine if any reaction components exhibited unpaired electron spins, which would indicate a free radical. EPR results suggested that a single
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, J.D.; Yi, Y.; Gopalakrishnan, S.
1993-12-31
Previous plant testing had been limited to the processing of minus 100 mesh classifier overflow (Upper Freeport Coal, approximately 20% ash) with the 6-inch air-sparged hydrocyclone (ASH-6C), as reported at Coal Prep 92. The ASH-6C unit was found to provide separation efficiencies equivalent, or superior, to separations with the ASH-2C system. During the summer of 1992, construction of the first 15-inch air-sparged hydrocyclone prototype was completed by Advanced Processing Technologies, Inc. Installation at the Homer City Coal Preparation Plant was accomplished and testing began in October 1992. The ASH-15C unit can operate at a flowrate as high as 1,000 gpm. Experimental results are reported with respect to capacity, combustible recovery, and clean coal quality.
Kabelitz, Nadja; Machackova, Jirina; Imfeld, Gwenaël; Brennerova, Maria; Pieper, Dietmar H; Heipieper, Hermann J; Junca, Howard
2009-03-01
In order to obtain insights into the complexity shifts taking place in natural microbial communities under strong selective pressure, soils from a former air force base in the Czech Republic, highly contaminated with jet fuel and at different stages of a bioremediation air sparging treatment, were analyzed. By tracking phospholipid fatty acids and 16S rRNA genes, detailed monitoring of the changes in the quantities and composition of the microbial communities developed at different stages of the bioventing treatment was performed. Depending on the length of the air sparging treatment, which led to a significant reduction in the contamination level, we observed a clear shift in the soil microbial community: from domination by Pseudomonads under the harsh conditions of high aromatic contamination to a status of low aromatic concentrations, increased biomass content, and a complex composition with diverse bacterial taxonomic branches.
In situ treatment of arsenic-contaminated groundwater by air sparging.
Brunsting, Joseph H; McBean, Edward A
2014-04-01
Arsenic contamination of groundwater is a major problem in some areas of the world, particularly in West Bengal (India) and Bangladesh where it is caused by reducing conditions in the aquifer. In situ treatment, if it can be proven as operationally feasible, has the potential to capture some advantages over other treatment methods by being fairly simple, not using chemicals, and not necessitating disposal of arsenic-rich wastes. In this study, the potential for in situ treatment by injection of compressed air directly into the aquifer (i.e. air sparging) is assessed. An experimental apparatus was constructed to simulate conditions of arsenic-rich groundwater under anaerobic conditions, and in situ treatment by air sparging was employed. Arsenic (up to 200 μg/L) was removed to a maximum of 79% (at a local point in the apparatus) using a solution with dissolved iron and arsenic only. A static "jar" test revealed arsenic removal by co-precipitation with iron at a molar ratio of approximately 2 (iron/arsenic). This is encouraging since groundwater with relatively high amounts of dissolved iron (as compared to arsenic) therefore has a large theoretical treatment capacity for arsenic. Iron oxidation was significantly retarded at pH values below neutral. In terms of operation, analysis of experimental results shows that periodic air sparging may be feasible. Copyright © 2014 Elsevier B.V. All rights reserved.
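The jar-test result, co-precipitation at roughly 2 mol Fe per mol As, implies a theoretical arsenic treatment capacity set by the dissolved iron alone. A minimal sketch; the 5 mg/L iron concentration is an illustrative assumption, while the molar ratio and the 200 µg/L arsenic level come from the abstract.

```python
# Theoretical arsenic co-precipitation capacity from dissolved iron,
# using the ~2:1 Fe:As molar ratio reported in the jar test.

M_FE = 55.85   # molar mass of iron, g/mol
M_AS = 74.92   # molar mass of arsenic, g/mol

def arsenic_capacity_ug_per_L(fe_mg_per_L, fe_as_molar_ratio=2.0):
    """Arsenic (ug/L) that the dissolved iron could co-precipitate."""
    fe_mol_per_L = fe_mg_per_L / 1000.0 / M_FE
    as_mol_per_L = fe_mol_per_L / fe_as_molar_ratio
    return as_mol_per_L * M_AS * 1e6   # g/L -> ug/L

# e.g. 5 mg/L dissolved Fe (illustrative assumption):
print(arsenic_capacity_ug_per_L(5.0))   # ≈3350 ug/L, far above the 200 ug/L tested
```

This is the arithmetic behind the abstract's remark that iron-rich groundwater has a large theoretical treatment capacity for arsenic.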
1997-10-01
and xylene (BTEX) in the shallow groundwater system at the site. Dissolved chlorinated aliphatic hydrocarbons (CAHs) also are present in the shallow... micrograms per liter (µg/L)], RNA with LTM should be used to complement the ROD-mandated bioventing and air sparging systems. When bioventing and... The ROD identifies benzene as the primary contaminant of concern (COC) for FT-1 and specifies the use of air sparging in the remediation system
Comparison of ultrasonic distillation to sparging of liquid mixtures
NASA Astrophysics Data System (ADS)
Park, Han Jung; Jung, Hye Yun; Calo, Joseph; Diebold, Gerald
2011-04-01
The application of intense ultrasound to a liquid-gas interface results in the formation of an ultrasonic fountain and generates both mist and vapor from the liquid. Here, the composition of the vapor and aerosol above an ultrasonic fountain is determined as a function of irradiation time and compared with the results of sparging for five different solutions. The experimental apparatus for determining the efficiency of separation consists of a glass vessel containing a piezoelectric transducer driven at either 1.65 or 2.40 MHz. Dry nitrogen is passed over the ultrasonic fountain to remove the vapor and aerosol. The compositions of the liquid solutions are recorded as a function of irradiation time using gas chromatography, refractive index measurement, nuclear magnetic resonance, or spectrophotometry. Data are presented for ethanol-water and ethyl acetate-ethanol solutions, cobalt chloride in water, colloidal silica, and colloidal gold. The experiments show that ultrasonic distillation produces separations that are somewhat less complete than what is obtained using sparging.
Pulsed-Plasma Disinfection of Water Containing Escherichia coli
NASA Astrophysics Data System (ADS)
Satoh, Kohki; MacGregor, Scott J.; Anderson, John G.; Woolsey, Gerry A.; Fouracre, R. Anthony
2007-03-01
The disinfection of water containing the microorganism, Escherichia coli (E. coli) by exposure to a pulsed-discharge plasma generated above the water using a multineedle electrode (plasma-exposure treatment), and by sparging the off-gas of the pulsed plasma into the water (off-gas-sparging treatment), is performed in the ambient gases of air, oxygen, and nitrogen. For the off-gas-sparging treatment, bactericidal action is observed only when oxygen is used as the ambient gas, and ozone is found to generate the bactericidal action. For the plasma-exposure treatment, the density of E. coli bacteria decreases exponentially with plasma-exposure time for all the ambient gases. It may be concluded that the main contributors to E. coli inactivation are particle species produced by the pulsed plasma. For the ambient gases of air and nitrogen, the influence of acidification of the water in the system, as a result of pulsed-plasma exposure, may also contribute to the decay of E. coli density.
Xiaochao, Gu; Jin, Tian; Xiaoyun, Li; Bin, Zhou; Xujing, Zheng; Jin, Xu
2018-01-01
The three-dimensional electro-Fenton method was used in the folic acid wastewater pretreatment process. In this study, we investigated the degradation of folic acid and the effects of different parameters, such as the air sparging rate, current density, pH, and reaction time, on chemical oxygen demand (COD) removal from folic acid wastewater. A four-level, four-factor orthogonal test was designed, and the optimal reaction conditions for pretreating folic acid wastewater with the three-dimensional electrode were determined: air sparging rate 0.75 L min−1, current density 10.26 mA cm−2, pH 5, and reaction time 90 min. Under these conditions, COD removal reached 94.87%. LC-MS results showed that the electro-Fenton method led to an initial decomposition of folic acid into p-aminobenzoyl-glutamic acid (PGA) and xanthopterin (XA); part of the XA was then oxidized to pterine-6-carboxylic acid (PCA) and the remaining XA was converted to pterin and carbon dioxide. A kinetics analysis of the folic acid degradation process during pretreatment, carried out using simulated folic acid wastewater, showed that the degradation of folic acid by the three-dimensional electro-Fenton method follows second-order kinetics. This study provides a reference for industrial folic acid treatment. PMID: 29410807
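The second-order claim can be checked against the integrated rate law 1/C = 1/C0 + k*t, under which 1/C grows linearly in time. The rate constant and initial concentration below are illustrative assumptions, not fitted values from the study.

```python
# Integrated second-order decay: C(t) = 1 / (1/C0 + k*t).
# A linear plot of 1/C versus t is the signature of second-order kinetics.
# C0 and k are illustrative assumptions.

def second_order_conc(C0, k, t):
    """Concentration at time t for a second-order decay."""
    return 1.0 / (1.0 / C0 + k * t)

C0 = 100.0   # initial concentration, mg/L (illustrative)
k = 0.001    # rate constant, L mg^-1 min^-1 (illustrative)

for t in (0, 30, 60, 90):
    C = second_order_conc(C0, k, t)
    print(t, round(C, 1), round(1.0 / C, 4))
# 1/C increases by the same increment every 30 min, i.e. linearly in t.
```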
Robles, A; Ruano, M V; Ribes, J; Ferrer, J
2013-03-01
A demonstration plant with two commercial HF ultrafiltration membrane modules (PURON®, Koch Membrane Systems, PUR-PSH31) was operated with urban wastewater. The effect of the main operating variables on membrane performance at sub-critical and supra-critical filtration conditions was tested. The physical operating variables that most affected membrane performance were gas sparging intensity and back-flush (BF) frequency. Indeed, low gas sparging intensities (around 0.23 Nm³ h⁻¹ m⁻²) and low BF frequencies (a 30-s back-flush for every 10 basic filtration-relaxation cycles) were enough to enable the membranes to be operated sub-critically even when levels of mixed liquor total solids were high (up to 25 g L⁻¹). On the other hand, significant gas sparging intensities and BF frequencies were required to maintain long-term operation at supra-critical filtration conditions. After operating for more than two years at sub-critical conditions (transmembrane flux between 9 and 13.3 LMH at gas sparging intensities of around 0.23 Nm³ h⁻¹ m⁻² and MLTS levels of around 10-30 g L⁻¹), no significant irreversible/irrecoverable fouling problems were detected (membrane permeability remained above 100 LMH bar⁻¹ and total filtration resistance remained below 10¹³ m⁻¹); therefore no chemical cleaning was conducted. Membrane performance was similar to that of aerobic HF membranes operated in full-scale MBR plants. Copyright © 2012 Elsevier Ltd. All rights reserved.
Operation of passive membrane systems for drinking water treatment.
Oka, P A; Khadem, N; Bérubé, P R
2017-05-15
The widespread adoption of submerged hollow-fibre ultrafiltration (UF) for drinking water treatment is currently hindered by the complexity and cost of these membrane systems, especially in small/remote communities. Most of the complexity is associated with auxiliary fouling control measures, which include backwashing, air sparging and chemical cleaning. Recent studies have demonstrated that sustained operation without fouling control measures is possible, but little is known regarding the conditions under which extended operation can be sustained with minimal to no fouling control measures. The present study investigated the contribution of different auxiliary fouling control measures to the permeability that can be sustained, with the intent of minimizing the mechanical and operational complexity of submerged hollow-fibre UF membrane systems while maximizing their throughput capacity. Sustained conditions could be achieved without backwashing, air sparging or chemical cleaning (i.e. passive operation), indicating that these fouling control measures can be eliminated, substantially simplifying the mechanical and operational complexity of submerged hollow-fibre UF systems. The adoption of hydrostatic pressure (i.e. gravity) to provide the driving force for permeation further reduced the system complexity. Approximately 50% of the organic material in the raw water was removed during treatment. The sustained passive operation and effective removal of organic material was likely due to the microbial community that established itself on the membrane surface. The permeability that could be sustained was, however, only approximately 20% of that which can be maintained with fouling control measures. Retaining a small amount of air sparging (i.e. a few minutes daily) and incorporating a daily 1-h relaxation (i.e. permeate flux interruption) period prior to sparging more than doubled the permeability that could be sustained.
Neither the approach used to interrupt the permeate flux nor that developed to draw air into the system for sparging using gravity add substantial mechanical or operational complexity to the system. The high throughput capacity that can be sustained by eliminating all but a couple of simple fouling control measures make passive membrane systems ideally suited to provide high quality water especially where access to financial resources, technical expertise and/or electrical power is limited. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ben Neriah, Asaf; Paster, Amir
2017-10-01
Application of short-duration pulses of high air pressure to an air sparging system for groundwater remediation was tested in a two-dimensional laboratory setup. It was hypothesized that this injection mode, termed boxcar, can enhance remediation efficiency through the larger zone of influence (ZOI) and the enhanced mixing that result from the pressure pulses. To test this hypothesis, flow and transport experiments were performed. Results confirm that cyclically applying short-duration pressure pulses may enhance contaminant cleanup. Comparing the boxcar mode to conventional continuous air injection shows up to a three-fold increase in the single-well radius of influence, depending on the intensity of the short-duration pressure pulses. The cleanup efficiency of toluene from the water was 95% higher than that achieved under continuous injection with the same average conditions. This improvement was attributed to the larger zone of influence and higher average air permeability achieved in the boxcar mode, relative to continuous sparging. Mixing enhancement resulting from recurring pressure pulses was suggested as one of the mechanisms that enhance contaminant cleanup. Applying the boxcar mode in an existing, multiwell air sparging setup can be relatively straightforward: it requires the installation of an on-off valve in each of the injection wells and a central control system. Turning off some of the wells for a short duration then results in a stepwise increase in injection pressure in the rest of the wells. It is hoped that this work will stimulate the additional required research and ultimately a field-scale application of this new injection mode.
Domestic wastewater treatment by a submerged MBR (membrane bio-reactor) with enhanced air sparging.
Chang, I S; Judd, S J
2003-01-01
The air sparging technique has been recognised as an effective way to control membrane fouling. However, its application to a submerged MBR (membrane bio-reactor) has not yet been reported. This paper deals with the performance of air sparging in a submerged MBR for wastewater treatment. Two kinds of air sparging techniques were used. First, air is injected into the membrane tube channels so that mixed liquor can circulate in the bioreactor (air-lift mode). Second, a periodic air-jet into the membrane tube is introduced (air-jet mode). Their applicability was evaluated with a series of lab-scale experiments using domestic wastewater. The flux increased from 23 to 33 l m⁻² h⁻¹ (43% enhancement) when air was injected in the air-lift module, but no further increase of flux was observed as the gas flow increased. The ratio Rc/(Rc+Rf) of cake resistance (Rc) to the sum of Rc and the internal fouling resistance (Rf) was 23%, indicating that, unlike in other MBR studies, Rc is not the predominant resistance. This showed that the cake layer was removed sufficiently by the air injection; thus an increase in airflow could not improve the flux performance. The air-jet module suffered from a clogging problem, with accumulated sludge inside the lumen. Because the air-jet module has the characteristics of dead-end filtration, a periodic air-jet was not enough to blast all the accumulated sludge out. But flux was greater than in the air-lift module when clogging was prevented by an appropriate cleaning regime such as periodic backwashing.
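The resistance-in-series picture behind the Rc/(Rc+Rf) ratio can be sketched with Darcy's law, J = TMP / (μ·(Rm + Rc + Rf)). A minimal sketch follows; the pressure and resistance magnitudes are illustrative assumptions chosen so the cake share comes out near the reported 23%, not measured values from this study.

```python
# Darcy resistance-in-series flux model for an MBR membrane.
# All magnitudes below are assumed for illustration.
TMP = 1.0e4      # transmembrane pressure, Pa (assumed)
mu = 1.0e-3      # permeate viscosity, Pa*s (water near 20 C)
Rm = 9.00e11     # clean-membrane resistance, 1/m (assumed)
Rf = 2.31e11     # internal fouling resistance, 1/m (assumed)
Rc = 0.69e11     # cake resistance, 1/m (assumed)

J = TMP / (mu * (Rm + Rc + Rf))   # flux, m3 m-2 s-1
J_lmh = J * 1000 * 3600           # convert to l m-2 h-1
cake_share = Rc / (Rc + Rf)       # share of fouling resistance due to the cake

print(round(J_lmh, 1), round(100 * cake_share, 1))  # 30.0 23.0
```

With these numbers the model reproduces a flux in the reported 23-33 l m⁻² h⁻¹ range and a 23% cake share; removing Rc entirely (perfect cake control) would raise the flux only modestly, which is the abstract's point about why extra airflow did not help.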
Dissolver vessel bottom assembly
Kilian, Douglas C.
1976-01-01
An improved bottom assembly is provided for a nuclear reactor fuel reprocessing dissolver vessel wherein fuel elements are dissolved as the initial step in recovering fissile material from spent fuel rods. A shock-absorbing crash plate with a convex upper surface is disposed at the bottom of the dissolver vessel so as to provide an annular space between the crash plate and the dissolver vessel wall. A sparging ring is disposed within the annular space to enable a fluid discharged from the sparging ring to agitate the solids which deposit on the bottom of the dissolver vessel and accumulate in the annular space. An inlet tangential to the annular space permits a fluid pumped into the annular space through the inlet to flush these solids from the dissolver vessel through tangential outlets oppositely facing the inlet. The sparging ring is protected against damage from the impact of fuel elements being charged to the dissolver vessel by making the crash plate of such a diameter that the width of the annular space between the crash plate and the vessel wall is less than the diameter of the fuel elements.
Formation of inorganic nitrogenous byproducts in aqueous solution under ultrasound irradiation.
Yao, Juanjuan; Chen, Longfu; Chen, Xiangyu; Zhou, Lingxi; Liu, Wei; Zhang, Zhi
2018-04-01
The effects of ultrasonic frequency, power intensity, temperature and sparged gas on the generation of the nitrogenous by-products NO₂⁻ and NO₃⁻ have been investigated, and a new kinetic model of NO₂⁻ and NO₃⁻ generation was also explored. The results show that the highest primary generation rates of NO₂⁻ and NO₃⁻ by direct sonolysis in the cavitation bubbles (represented by k₁′ and k₂′, respectively) were obtained at 600 kHz and 200 kHz, respectively, within the applied ultrasonic frequency range of 200 to 800 kHz. The primary generation rate of NO₂⁻ (k₁′) increased with increasing ultrasonic intensity, while that of NO₃⁻ (k₂′) decreased. Lower temperature is beneficial to the primary generation of both NO₂⁻ and NO₃⁻ in the cavitation bubbles. The optimal overall yields of both NO₂⁻ and NO₃⁻ were obtained at an N₂/O₂ volume ratio (in the sparged gas) of 3:1, which is close to the ratio of N₂/O₂ in air. Dissolved O₂, rather than water vapor, is the dominant source of oxygen for both NO and NO₂. Ultrasonic irradiation can significantly enhance the recovery rates of dissolved N₂ and O₂ and thus keep the N₂ fixation reaction going even without aeration. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Disselkamp, Robert S.; Harris, Benjamin D.; Hart, Todd R.
2008-07-20
The production of polyol chemicals is of increasing interest as they are obtained from the catalytic processing of biological feedstock materials, which also is becoming more prevalent. A case in point is glycerol, formed as a byproduct in biodiesel catalytic processing. Here we report the reaction of a simple 1,2-diol, propylene glycol, with hydrogen peroxide and a Pd-black catalyst under reflux conditions at 368 K. The experiments were performed by either co-addition of hydrogen peroxide with air sparging, or addition of hydrogen peroxide alone, each yielding hydroxyacetone (HA) and acetic acid (AA) products, with a lesser amount of lactic acid (LA) formed. Product conversion data at near-neutral pH versus hydrogen peroxide equivalents added relative to substrate are presented. Hydrogen peroxide addition without air sparging at 5 equivalents resulted in 65% conversion with an HA:AA molar ratio of 2:1. Conversely, hydrogen peroxide addition with air sparging at only 0.75 equivalents resulted in 40% conversion with an HA:AA ratio of 3:1. From this it is concluded that although the product distribution in these chemistries is somewhat unchanged by air sparging, it is surprising that the amount of reactive oxygen is greatly enhanced by co-addition of O2/H2O2. Additional studies have revealed that the amount of LA formed can be enhanced under acidic conditions (pH = 1.5 compared to pH = 8.5), such that 26% of total product formation is LA. Since hydrogen peroxide is an environmentally clean reagent and is becoming more cost effective to use, this work may guide future applied investigations into polyol chemical syntheses.
Evaluation of modified boehm titration methods for use with biochars.
Fidel, Rivka B; Laird, David A; Thompson, Michael L
2013-11-01
The Boehm titration, originally developed to quantify organic functional groups of carbon blacks and activated carbons in discrete pK ranges, has received growing attention for analyzing biochar. However, properties that distinguish biochar from carbon black and activated carbon, including greater carbon solubility and higher ash content, may render the original Boehm titration method unreliable for use with biochars. Here we use seven biochars and one reference carbon black to evaluate three Boehm titration methods that use (i) acidification followed by sparging (sparge method), (ii) centrifugation after treatment with BaCl₂ (barium method), and (iii) a solid-phase extraction cartridge followed by acidification and sparging (cartridge method) to remove carbonates and dissolved organic compounds (DOC) from the Boehm extracts before titration. Our results for the various combinations of Boehm reactants and methods indicate that no one method was free of bias for all three Boehm reactants and that the cartridge method showed evidence of bias for all pK ranges. By process of elimination, we found a combination of the sparge method for quantifying functional groups in the lowest pK range (∼5 to 6.4) and the barium method for quantifying functional groups in the higher pK ranges (∼6.4 to 10.3 and ∼10.3 to 13) to be free of evidence of bias. We caution, however, that further testing is needed and that all Boehm titration results for biochars should be considered suspect unless efforts were undertaken to remove ash and prevent interference from DOC. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
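The difference scheme that assigns base uptake to the discrete pK ranges named above can be sketched as follows; the uptake numbers are hypothetical, purely to show the arithmetic, and the group labels follow the classic Boehm convention rather than anything reported for these biochars.

```python
# Boehm difference scheme: progressively stronger bases neutralize
# cumulatively more of the acidic surface groups.
# Uptake values (mmol of base consumed per gram of char) are assumed.
uptake = {"NaHCO3": 0.20, "Na2CO3": 0.35, "NaOH": 0.55}

carboxylic = uptake["NaHCO3"]                   # lowest pK range (~5 to 6.4)
lactonic = uptake["Na2CO3"] - uptake["NaHCO3"]  # ~6.4 to 10.3
phenolic = uptake["NaOH"] - uptake["Na2CO3"]    # ~10.3 to 13

print(round(carboxylic, 2), round(lactonic, 2), round(phenolic, 2))
```

The study's point is that carbonates and DOC inflate these uptake values unless removed, which is why the sparge/barium/cartridge pretreatments matter before the subtraction is done.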
NASA Astrophysics Data System (ADS)
Panday, S.; Wu, Y. S.; Huyakorn, P. S.; Springer, E. P.
1994-06-01
This paper discusses the verification and application of the three-dimensional (3-D) multiphase flow model presented by Huyakorn et al. (Part 1 in this issue) for assessing contamination due to subsurface releases of non-aqueous-phase liquids (NAPLs). Attention is focussed on situations involving one-, two- and three-dimensional flow through porous media. The model formulations and numerical schemes are tested for highly nonlinear field conditions. The utility and accuracy of various simplifications to certain simulation scenarios are assessed. Five simulation examples are included for demonstrative purposes. The first example verifies the model for vertical flow and compares the performance of the fully three-phase and the passive-air-phase formulations. Air-phase boundary conditions are noted to have considerable effects on simulation results. The second example verifies the model for cross-sectional analyses involving LNAPL and DNAPL migration. Finite-difference (5-point) and finite-element (9-point) spatial approximations are compared for different grid aspect ratios. Unless corrected, negative-transmissivity conditions were found to have an undesirable impact on the finite-element solutions. The third example provides a model validation against laboratory experimental data on 5-spot water-flood treatment of oil reservoirs. The sensitivity to grid orientation is noted for the finite-difference schemes. The fourth example demonstrates model utility in characterizing the 3-D migration of LNAPL and DNAPL from surface sources. The final example presents a modeling study of air sparging. Critical parameters affecting the performance of air-sparging systems are examined. In general, the modeling results indicate sparging is more effective in water-retentive soils, and larger sparge influence radii may be achieved under certain anisotropic conditions.
Growth Kinetics for Microalgae Grown in Palm Oil Mill Effluent (POME) medium at various CO2 Levels
NASA Astrophysics Data System (ADS)
Razali, S.; Salihon, J.; Ahmad, M. A.
2018-05-01
This paper sought to find the growth kinetic parameters, maximum specific growth rate (μmax) and substrate saturation constant (KS), for a microalgal reaction system over various dissolved CO2 levels (0.04, 0.1, 0.3, 0.5, 0.8, 1.0, 5.0, 10.0% v/v) at a constant sparging rate of 1.2 vvm, using a logistic model and Monod kinetics. The reaction system consisted of microalgae growing in palm oil mill effluent (POME) medium in a 1 L flask under constant light illumination, sparged with the specified CO2 gas mixture. The experiments yielded μmax = 0.04958 h⁻¹ and KS = 0.03523% (v/v). The results also showed that CO2 levels above 1% (v/v) in the sparging gas mixture would not improve microalgae growth significantly, as expressed in the values of the specific growth rate μ. These data are critically important for bioreactor scale-up, especially for bioreactor systems dedicated to microalgae products and CO2 sequestration.
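A quick Monod evaluation with the two reported constants shows why raising the CO2 level above about 1% (v/v) gains little: at 1% the specific growth rate is already within a few percent of μmax. This is only a sketch of the arithmetic, using nothing beyond the fitted constants from the abstract.

```python
# Monod kinetics, mu(S) = mu_max * S / (K_S + S), with the reported constants.
mu_max = 0.04958   # h^-1 (reported)
K_S = 0.03523      # % v/v (reported)

def mu(S):
    """Specific growth rate at dissolved CO2 level S (% v/v)."""
    return mu_max * S / (K_S + S)

for S in (0.1, 0.5, 1.0, 5.0, 10.0):
    print(S, round(mu(S) / mu_max, 3))   # fraction of mu_max attained
```

Because KS is small (0.035% v/v), the curve saturates quickly: going from 1% to 10% CO2 raises μ by only about 3%, consistent with the study's conclusion.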
Effect of groundwater flow on remediation of dissolved-phase VOC contamination using air sparging.
Reddy, K R; Adams, J A
2000-02-25
This paper presents two-dimensional laboratory experiments performed to study how groundwater flow may affect the injected air zone of influence and remedial performance, and how injected air may alter subsurface groundwater flow and contaminant migration during in situ air sparging. Tests were performed by subjecting uniform sand profiles contaminated with dissolved-phase benzene to a hydraulic gradient and two different air flow rates. The results of the tests were compared to a test subjected to a similar air flow rate but a static groundwater condition. The test results revealed that the size and shape of the zone of influence were negligibly affected by groundwater flow, and as a result, similar rates of contaminant removal were realized within the zone of influence with and without groundwater flow. The air flow, however, reduced the hydraulic conductivity within the zone of influence, reducing groundwater flow and subsequent downgradient contaminant migration. The use of a higher air flow rate further reduced the hydraulic conductivity and decreased groundwater flow and contaminant migration. Overall, this study demonstrated that air sparging may be effectively implemented to intercept and treat a migrating contaminant plume.
Díaz, I; Pérez, C; Alfaro, N; Fdz-Polanco, F
2015-06-01
In this study, the potential of a pilot hollow-fiber membrane bioreactor for the conversion of H2 and CO2 to CH4 was evaluated. The system transformed 95% of the H2 and CO2 fed at a maximum loading rate of 40.2 [Formula: see text] and produced 0.22 m³ of CH4 per m³ of H2 fed at thermophilic conditions. H2 mass transfer to the liquid phase was identified as the limiting step for the conversion, and kLa values of 430 h⁻¹ were reached in the bioreactor by sparging gas through the membrane module. A simulation showed that the bioreactor could upgrade biogas at a rate of 25 m³ per m³ of reactor per day, increasing the CH4 concentration from 60 to 95% v/v. This proof-of-concept study verified that gas sparging through a membrane module can efficiently transfer H2 from the gas to the liquid phase and that the conversion of H2 and CO2 to biomethane is feasible on a pilot scale at noteworthy loading rates. Copyright © 2015 Elsevier Ltd. All rights reserved.
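The rate-limiting transfer step can be sketched as a first-order approach to saturation, dC/dt = kLa·(Csat − C). With the reported kLa of 430 h⁻¹, the liquid reaches 95% of saturation in under half a minute; Csat here is a normalized assumed value, not a measured H2 solubility.

```python
import math

# First-order gas-liquid mass transfer with the reported kLa.
kLa = 430.0    # 1/h (reported for the sparged membrane module)
Csat = 1.0     # saturation concentration, normalized (assumed)

def C(t_h, C0=0.0):
    """Closed-form solution of dC/dt = kLa*(Csat - C), t_h in hours."""
    return Csat - (Csat - C0) * math.exp(-kLa * t_h)

t95 = -math.log(0.05) / kLa     # time to reach 95% of saturation, h
print(round(t95 * 3600, 1), "s")
```

The very short saturation time illustrates why membrane sparging relieved the H2-transfer bottleneck: the biological uptake, not the physical transfer, then sets the pace.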
Instrumentation, control, and automation for submerged anaerobic membrane bioreactors.
Robles, Ángel; Durán, Freddy; Ruano, María Victoria; Ribes, Josep; Rosado, Alfredo; Seco, Aurora; Ferrer, José
2015-01-01
A submerged anaerobic membrane bioreactor (AnMBR) demonstration plant with two commercial hollow-fibre ultrafiltration systems (PURON®, Koch Membrane Systems, PUR-PSH31) was designed and operated for urban wastewater treatment. An instrumentation, control, and automation (ICA) system was designed and implemented for proper process performance. Several single-input-single-output (SISO) feedback control loops based on conventional on-off and PID algorithms were implemented to control the following operating variables: flow-rates (influent, permeate, sludge recycling and wasting, and recycled biogas through both reactor and membrane tanks), sludge wasting volume, temperature, transmembrane pressure, and gas sparging. The proposed ICA for AnMBRs for urban wastewater treatment enables the optimization of this new technology to be achieved with a high level of process robustness towards disturbances.
Development of a sparging technique for volatile emissions from potato (Solanum tuberosum)
NASA Technical Reports Server (NTRS)
Berdis, Elizabeth; Peterson, Barbara Vieux; Yorio, Neil C.; Batten, Jennifer; Wheeler, Raymond M.
1993-01-01
Accumulation of volatile emissions from plants grown in tightly closed growth chambers may have allelopathic or phytotoxic properties. Whole-air analysis of a closed chamber includes both biotic and abiotic volatile emissions. A method for characterization and quantification of biogenic emissions solely from plantlets was developed to investigate this complex mixture of volatile organic compounds. Volatile organic compounds from potato (Solanum tuberosum L. cv. Norland) were isolated, separated and identified using an in-line configuration consisting of a purge-and-trap concentrator with sparging vessels coupled to a GC/MS system. Analyses identified the plant volatile compounds trans-caryophyllene, α-humulene, thiobismethane, hexanal, cis-3-hexen-1-ol, and cis-3-hexenyl acetate.
A super high-rate sulfidogenic system for saline sewage treatment.
Tsui, To-Hung; Chen, Lin; Hao, Tianwei; Chen, Guang-Hao
2016-11-01
This study proposes a novel approach to resolve the challenging issue of sludge bed clogging in a granular sulfate-reducing upflow sludge bed (GSRUSB) reactor by introducing intermittent gas sparging to advance it into a super high-rate anaerobic bioreactor. Over a 196-day lab-scale trial, the GSRUSB system was operated from a nominal hydraulic retention time of 4 h down to 40 min and achieved a highest organic loading rate of 13.31 kg COD/m³·day, substantially greater than the typical loading of 2.0-3.5 kg COD/m³·day in a conventional upflow anaerobic sludge bed reactor treating dilute organic-strength wastewater. The average organic removal efficiency and total dissolved sulfide of this system were 90 ± 4.2% and 158 ± 28 mg S/L, while residual organics in the effluent were 34 ± 14 mg COD/L. The control stage (without gas sparging) revealed, through relevant chemical measurements and confocal laser scanning microscopy analyses, that sludge bed clogging happened concomitantly with a significant drop in the extracellular polymeric substance content of the granular sludge. On the other hand, compared with increasing the effluent recirculation ratio (from 1.4 to 5), three-dimensional computational fluid dynamics modeling combined with energy dissipation analysis demonstrated that gas sparging (at a superficial gas velocity of 0.8 m s⁻¹) can create 23 times higher liquid shear as well as enhanced particle attrition. Overall, this study not only developed a super high-rate anaerobic bioreactor for saline sewage treatment, but also shed light on the role of intermittent gas sparging in controlling sludge bed clogging in anaerobic bioreactors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lee, Hwan; Lee, Yoonjin; Kim, Jaeyoung; Kim, Choltae
2014-01-01
In this study the full-scale operation of soil flushing with air sparging to improve the removal efficiency of petroleum at depths of less than 7 m at a military site in Korea was evaluated. The target area was polluted by multiple gasoline and diesel fuel sources. The soil was composed of heterogeneous layers of granules, sand, silt and clay. The operating factors were systematically assessed using a column test and a pilot study before running the full-scale process at the site. The discharged TPH and BTEX (benzene, toluene, ethylbenzene, and xylenes) concentrations in the water were highest at 20 min and at an air flow rate of 350 L/min, which was selected for the full-scale operation based on the pilot air sparging test. The surfactant-aided condition was 1.4 times more efficient than the non-surfactant condition in the serial operation of modified soil flushing followed by air sparging. The hydraulic conductivity (3.13 × 10−3 cm/s) increased 4.7 times after the serial operation of both processes relative to the existing condition (6.61 × 10−4 cm/s). The removal efficiencies of TPH were 52.8%, 57.4%, and 61.8% for the soil layers at 6 to 7, 7 to 8 and 8 to 9 m, respectively. Therefore, TPH removal was improved at depths of less than 7 m by using this modified remediation system. The removal efficiencies for the areas with TPH and BTEX concentrations of more than 500 and 80 mg/kg were 55.5% and 92.9%, respectively, at a pore volume of 2.9. The total TPH and BTEX mass removed during the full-scale operation was 5109 and 752 kg, respectively. PMID:25166919
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carnahan, T.G.; Kazonich, G.; Raddatz, A.E.
The U.S. Bureau of Mines conducted a bench-scale study to delineate the important parameters in a three-step process to produce commercial-quality tungsten carbide (WC) directly from tungsten minerals. In the process, tungsten concentrates of wolframite or wolframite and scheelite are decomposed at 1,050 °C in a molten mixture of NaCl and Na₂SiO₃ that forms two immiscible phases. Tungsten, as sodium tungstate, reports to the halide phase and is separated from the gangue constituents, which report to the silicate phase. After decanting to separate the two phases, natural gas is sparged into the molten halide phase at 1,070 °C. Submicrometer crystals of WC are initially produced. These crystals grow into thin triangular plates up to 100 μm on a side or into popcorn-shaped conglomerates. Sparged WC was examined for its suitability for use in sintered carbide products. In physical evaluations, sparged WC ground to an average particle size of 1.52 μm and compacted with 10 pct Co binder into standard 6- by 22-mm test bars had a density of 14.35 and a Rockwell A hardness of 89.6. This compared favorably with 14.39 and 89.7, respectively, for test bars made from a standard commercial 1.52-μm WC powder. Test bars made from Bureau of Mines WC had no "C" porosity or eta phase.
In-situ remediation system for volatile organic compounds with deep recharge mechanism
Jackson, Jr., Dennis G.; Looney, Brian B.; Nichols, Ralph L.; Phifer, Mark A.
2001-01-01
A method and apparatus for the treatment and remediation of a contaminated aquifer in the presence of an uncontaminated aquifer at a different hydraulic potential. The apparatus consists of a wellbore inserted through a first aquifer and into a second aquifer; an inner cylinder within the wellbore is supported and sealed to the wellbore to prevent communication between the two aquifers. Air injection is used to sparge the liquid having the higher static water level and to airlift it to a height whereby it spills into the inner cylinder. The second treatment area provides treatment in the form of aeration or treatment with a material. Vapor stripped in sparging is vented to the atmosphere. Treated water is returned to the aquifer having the lower hydraulic potential.
Green Remediation Best Management Practices: Soil Vapor Extraction & Air Sparging
Historically, approximately one-quarter of Superfund source control projects have involved soil vapor extraction (SVE) to remove volatile organic compounds (VOCs) sorbed to soil in the unsaturated (vadose) zone.
A Study of the γ-Radiolysis of N,N-Didodecyl-N',N'-Dioctyldiglycolamide Using UHPLC-ESI-MS Analysis
Roscioli-Johnson, Kristyn M.; Zarzana, Christopher A.; Groenewold, Gary S.; ...
2016-07-12
In this paper, solutions of N,N-didodecyl-N',N'-dioctyldiglycolamide in n-dodecane were subjected to γ-irradiation in the presence and absence of both an aqueous nitric acid phase and air sparging. The solutions were analyzed using ultra-high-performance liquid chromatography-electrospray ionization-mass spectrometry (UHPLC-ESI-MS) to determine the rates of radiolytic decay of the extractant as well as to identify radiolysis products. The DGA concentration decreased exponentially with increasing dose, and the measured degradation rate constants were uninfluenced by the presence or absence of an acidic aqueous phase, or by air sparging. Finally, the identified radiolysis products suggest that the bonds most vulnerable to radiolytic attack are those in the diglycolamide center of these molecules and not in the side chains.
Remediation of chlorinated solvent plumes using in-situ air sparging--a 2-D laboratory study.
Adams, Jeffrey A; Reddy, Krishna R; Tekola, Lue
2011-06-01
In-situ air sparging has evolved as an innovative technique for remediation of soil and groundwater impacted with volatile organic compounds (VOCs), including chlorinated solvents. These may exist as non-aqueous phase liquid (NAPL) or dissolved in groundwater. This study assessed: (1) how air injection rate affects the mass removal of dissolved-phase contamination, (2) the effect of induced groundwater flow on mass removal and air distribution during air injection, and (3) the effect of initial contaminant concentration on mass removal. Dissolved-phase chlorinated solvents can be effectively removed through the use of air sparging; however, rapid initial rates of contaminant removal are followed by a protracted period of lower removal rates, or a tailing effect. As the air flow rate increases, the rate of contaminant removal also increases, especially during the initial stages of air injection. Increased air injection rates will increase the density of air channel formation, resulting in a larger interfacial mass transfer area through which the dissolved contaminant can partition into the vapor phase. In cases of groundwater flow, increased rates of air injection lessened the observed downward contaminant migration effect. The air channel network and increased air saturation reduced the relative hydraulic conductivity, resulting in reduced groundwater flow and subsequent downgradient contaminant migration. Finally, when a higher initial TCE concentration was present, a slightly higher mass removal rate was observed due to higher volatilization-induced concentration gradients and subsequent diffusive flux. Once concentrations are reduced, a similar tailing effect occurs.
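The rapid-removal-then-tailing pattern described above is often represented with a two-compartment first-order model: a fraction of the dissolved mass near air channels strips quickly, while the diffusion-limited remainder is removed slowly. The sketch below uses purely illustrative fractions and rate constants, not values fitted to this study.

```python
import math

# Two-compartment first-order removal: fast stripping near air channels,
# slow diffusion-limited removal elsewhere. All parameters are assumed.
f_fast = 0.8    # mass fraction accessible to air channels
k_fast = 0.5    # 1/h, stripping rate in the accessible zone
k_slow = 0.01   # 1/h, diffusion-limited removal rate

def C(t, C0=1.0):
    """Normalized dissolved concentration after t hours of sparging."""
    return C0 * (f_fast * math.exp(-k_fast * t)
                 + (1 - f_fast) * math.exp(-k_slow * t))

for t in (0, 5, 24, 72):
    print(t, round(C(t), 3))   # fast early drop, then a long tail
```

After the accessible fraction is exhausted, the slow compartment dominates and the removal curve flattens, reproducing the tailing behaviour the experiments observed at all air flow rates.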
FIELD ASSESSMENT OF MULTIPLE DNAPL REMEDIATION TECHNIQUES
Five DNAPL remediation technologies were evaluated in constructed test cells at the Dover National Test Site, Dover AFB, Delaware. The technologies were cosolvent solubilization, cosolvent mobilization, surfactant solubilization, complex sugar flushing and air sparging/soil vapor...
FIELD EVALUATION OF DNAPL EXTRACTION TECHNOLOGIES: PROJECT OVERVIEW
Five DNAPL remediation technologies were evaluated at the Dover National Test Site, Dover AFB, Delaware. The technologies were cosolvent solubilization, cosolvent mobilization, surfactant solubilization, complex sugar flushing and air sparging/soil vapor extraction. The effectiv...
Current Development in Treatment and Hydrogen Energy Conversion of Organic Solid Waste
NASA Astrophysics Data System (ADS)
Shin, Hang-Sik
2008-02-01
This manuscript summarizes current developments in continuous hydrogen production technologies researched at the Korea Advanced Institute of Science and Technology (KAIST). Long-term continuous pilot-scale operation of hydrogen-producing processes fed with non-sterile food waste exhibited successful results. Experimental findings obtained from the optimization of growth environments for hydrogen-producing bacteria, the development of high-rate hydrogen production strategies, and feasibility tests for real field application could contribute to the progress of fermentative hydrogen production technologies. Three major technologies, namely controlling the dilution rate depending on the progress of acidogenesis, maintaining solids retention time independently of hydraulic retention time, and decreasing hydrogen partial pressure by carbon dioxide sparging, could enhance hydrogen production using anaerobic leaching bed reactors and anaerobic sequencing batch reactors. These findings could contribute to stable, reliable, and effective performance of pilot-scale reactors treating organic wastes.
Remediation Technology for Contaminated Groundwater
Bioremediation is the most commonly selected technology for remediation of ground water at Superfund sites in the USA, followed by chemical treatment, air sparging, and permeable reactive barriers. This presentation reviews the...
Optimization of scintillator loading with the tellurium-130 isotope for long-term stability
NASA Astrophysics Data System (ADS)
Duhamel, Lauren; Song, Xiaoya; Goutnik, Michael; Kaptanoglu, Tanner; Klein, Joshua; SNO+ Collaboration
2017-09-01
Tellurium-130 was selected as the isotope for the SNO+ neutrinoless double beta decay search, as 130Te decays to 130Xe via double beta decay. Linear alkylbenzene (LAB) is the liquid scintillator for the SNO+ experiment. To load tellurium into the scintillator, it is combined with 1,2-butanediol to form an organometallic complex, commonly called tellurium butanediol (TeBD). This study focuses on maximizing the percentage of tellurium loaded into the scintillator and evaluates the complex's long-term stability. The effect of nucleation due to imperfections in the detector's surface and external particulates was studied through filtration and induced nucleation. The impact of water on the stability of the TeBD complex was evaluated by liquid-nitrogen sparging, variation in pH, and induced humidity. Alternative loading methods were evaluated, including the addition of stability-inducing organic compounds. Samples of tellurium-loaded scintillator were synthesized, treated, and consistently monitored in a controlled environment. It was found that hydronium ions cause precipitation in the loaded scintillator, demonstrating that water has a detrimental effect on long-term stability. Optimization of loaded scintillator stability can contribute to the SNO+ double beta decay search.
Air-Based Remediation Workshop - Section 4 In Situ Air Sparging
Pursuant to the EPA-AIT Implementing Arrangement 7 for Technical Environmental Collaboration, Activity 11 "Remediation of Contaminated Sites," the USEPA Office of International Affairs Organized a Forced Air Remediation Workshop in Taipei to deliver expert training to the Environ...
Method for converting UF5 to UF4 in a molten fluoride salt
Bennett, Melvin R.; Bamberger, Carlos E.; Kelmers, A. Donald
1977-01-01
The reduction of UF5 to UF4 in a molten fluoride salt by sparging with hydrogen is catalyzed by metallic platinum. The reaction is also catalyzed by platinum alloyed with gold.
FINE PORE DIFFUSER SYSTEM EVALUATION FOR THE GREEN BAY METROPOLITAN SEWERAGE DISTRICT
The Green Bay Metropolitan Sewerage District retrofitted two quadrants of their activated sludge aeration system with ceramic and membrane fine pore diffusers to provide savings in energy usage compared to the sparged turbine aerators originally installed. Because significant di...
In Situ Biodegradation of MTBE and TBA
Ground water at most UST spills sites in Kansas contains both MTBE and benzene, and both contaminants must be effectively treated to close the sites. Soil vacuum extraction, and air sparging are common treatment technologies in Kansas. The technologies supply oxygen to support ...
OXYGEN-18 STUDY OF SO2 OXIDATION IN RAINWATER BY PEROXIDES
A new analytical method was developed for the determination of oxygen isotope ratios in peroxides in rainwater. In the method, rainwater samples were quantitatively degassed of dissolved air by a combined treatment of evacuation, ultrasonic agitation, and helium sparging (VUS), f...
Phosphate interference during in situ treatment for arsenic in groundwater.
Brunsting, Joseph H; McBean, Edward A
2014-01-01
Contamination of groundwater by arsenic is a problem in many areas of the world, particularly in West Bengal (India) and Bangladesh, where reducing conditions in groundwater are the cause. In situ treatment, in which dissolved oxygen (DO) is introduced into the aquifer, is a novel approach offering advantages over other treatment methods: simplicity, no chemical addition, and no disposal of arsenic-rich wastes. A lab-scale test of in situ treatment by air sparging, using a solution with approximately 5.3 mg L(-1) ferrous iron and 200 μg L(-1) arsenate, showed approximately 59% removal of arsenate. A significant obstacle exists, however, due to the interference of phosphate, since phosphate competes for adsorption sites on oxidized iron precipitates. A lab-scale test including 0.5 mg L(-1) phosphate showed negligible removal of arsenate. In situ treatment by air sparging demonstrates considerable promise for removal of arsenic from groundwater where iron is present in considerable quantities and phosphate levels are low.
Xie, K; Lin, H J; Mahendran, B; Bagley, D M; Leung, K T; Liss, S N; Liao, B Q
2010-04-14
Submerged anaerobic membrane bioreactor (SAnMBR) technology was studied for kraft evaporator condensate treatment at 37 +/- 1 degrees C over a period of 9 months. Under tested organic loading rates of 1-24 kg COD/m3/day, a chemical oxygen demand (COD) removal efficiency of 93-99% was achieved with a methane production rate of 0.35 +/- 0.05 L methane/g COD removed and a methane content of 80-90% in produced biogas. Bubbling of recycled biogas was effective for in-situ membrane cleaning, depending on the biogas sparging rate used. The membrane critical flux increased and the membrane fouling rate decreased with an increase in the biogas sparging rate. The scanning electron microscopy images showed membrane pore clogging was not significant and sludge cake formation on the membrane surface was the dominant mechanism of membrane fouling. The results suggest that the SAnMBR is a promising technology for energy recovery from kraft evaporator condensate.
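The reported yields allow a quick energy-recovery estimate: the abstract gives 0.35 L methane per g COD removed at 93-99% COD removal. A sketch of that arithmetic; the reactor volume and organic loading rate used below are hypothetical inputs, not figures from the study:

```python
def methane_production(volume_m3, olr_kg_cod_m3_d, removal_eff, yield_l_per_g=0.35):
    """Estimate daily methane volume (L/day) from a SAnMBR.

    volume_m3 and olr_kg_cod_m3_d are hypothetical inputs; the removal
    efficiency range (0.93-0.99) and methane yield (0.35 L/g COD removed)
    come from the abstract.
    """
    cod_removed_g_per_day = volume_m3 * olr_kg_cod_m3_d * removal_eff * 1000.0
    return cod_removed_g_per_day * yield_l_per_g

# e.g. a 10 m3 reactor loaded at 10 kg COD/m3/day with 96% removal:
ch4_l_per_day = methane_production(10, 10, 0.96)
```

At the 80-90% methane content reported for the biogas, the total biogas volume would be correspondingly larger than the methane volume alone.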
Murphy, Ryan P; Kelley, Elizabeth G; Rogers, Simon A; Sullivan, Millicent O; Epps, Thomas H
2014-11-18
Chain exchange between block polymer micelles in highly selective solvents, such as water, is well-known to be arrested under quiescent conditions, yet this work demonstrates that simple agitation methods can induce rapid chain exchange in these solvents. Aqueous solutions containing either pure poly(butadiene- b -ethylene oxide) or pure poly(butadiene- b -ethylene oxide- d 4 ) micelles were combined and then subjected to agitation by vortex mixing, concentric cylinder Couette flow, or nitrogen gas sparging. Subsequently, the extent of chain exchange between micelles was quantified using small angle neutron scattering. Rapid vortex mixing induced chain exchange within minutes, as evidenced by a monotonic decrease in scattered intensity, whereas Couette flow and sparging did not lead to measurable chain exchange over the examined time scale of hours. The linear kinetics with respect to agitation time suggested a surface-limited exchange process at the air-water interface. These findings demonstrate the strong influence of processing conditions on block polymer solution assemblies.
In situ bioventing at a natural gas dehydrator site: Field demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, A.W.; Miller, D.L.; Miller, J.A.
1995-12-31
This paper describes a bioventing/biosparging field demonstration that was conducted over a 10-month period at a former glycol dehydrator site located near Traverse City, Michigan. The goal of the project was to determine the feasibility of this technology for dehydrator site remediation and to develop engineering design concepts for applying bioventing/biosparging at similar sites. The chemicals of interest are benzene, toluene, ethylbenzene, and xylenes (BTEX) and alkanes. Soil sampling indicated that the capillary fringe and saturated zones were heavily contaminated, but that the unsaturated zone was relatively free of the contaminants. A pump-and-treat system has operated since 1991 to treat the groundwater BTEX plume. Bioventing/biosparging was installed in September 1993 to treat the contaminant source area. Three different air sparging operating modes were tested to determine an optimal process configuration for site remediation. These operational modes were compared through in situ respirometry studies. Respirometry measurements were used to estimate biodegradation rates. Dissolved oxygen and carbon dioxide were monitored in the groundwater.
40 CFR 63.1256 - Standards: Wastewater.
Code of Federal Regulations, 2012 CFR
2012-07-01
... combination of the approaches in paragraphs (a)(1)(i) and (ii) of this section for different affected... tank are heated, treated by means of an exothermic reaction, or sparged, during which time the owner or...) at all times that the wastewater tank contains affected wastewater or residual removed from affected...
40 CFR 63.1256 - Standards: Wastewater.
Code of Federal Regulations, 2014 CFR
2014-07-01
... use a combination of the approaches in paragraphs (a)(1)(i) and (ii) of this section for different... tank are heated, treated by means of an exothermic reaction, or sparged, during which time the owner or...) at all times that the wastewater tank contains affected wastewater or residual removed from affected...
40 CFR 63.1256 - Standards: Wastewater.
Code of Federal Regulations, 2013 CFR
2013-07-01
... use a combination of the approaches in paragraphs (a)(1)(i) and (ii) of this section for different... tank are heated, treated by means of an exothermic reaction, or sparged, during which time the owner or...) at all times that the wastewater tank contains affected wastewater or residual removed from affected...
REACTOR FUEL ELEMENTS TESTING CONTAINER
Whitham, G.K.; Smith, R.R.
1963-01-15
This patent shows a method for detecting leaks in jacketed fuel elements. The element is placed in a sealed tank within a nuclear reactor, and, while the reactor operates, the element is sparged with gas. The gas is then led outside the reactor and monitored for radioactive Xe or Kr. (AEC)
Although Ethylene Dibromide (EDB) was banned in conventional motor fuel in the USA by 1990, EDB continues to contaminate ground water at many old gasoline service station sites. Although EDB contamination is widespread, there is little performance data on technology to remediat...
Advanced Information Technology in Simulation Based Life Cycle Design
NASA Technical Reports Server (NTRS)
Renaud, John E.
2003-01-01
In this research a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision based design framework for non-deterministic optimization. To date CO strategies have been developed for use in application to deterministic systems design problems. In this research the decision based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework as originally proposed provides a single level optimization strategy that combines engineering decisions with business decisions in a single level optimization. By transforming this framework for use in collaborative optimization one can decompose the business and engineering decision making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO) the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.
The contamination of the subsurface environment by dense non-aqueous phase liquids (DNAPL) is a wide-spread problem that poses a significant threat to soil and groundwater quality. Implementing different remediation techniques can lead to the removal of a high fraction of the DNA...
Ground water at most UST spills sites in Kansas contains both MTBE and benzene, and both contaminants must be effectively treated to close the sites. Soil vacuum extraction, air sparging, and excavation are the most common treatment technologies in Kansas. To compare the relati...
40 CFR 63.133 - Process wastewater provisions-wastewater tanks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... exothermic reaction or the contents of the tank is sparged, the owner or operator shall comply with the... specify a schedule of actions that will ensure that the wastewater tank will be emptied as soon as... that alternate storage capacity is unavailable, and shall specify a schedule of actions that will...
Robles, A; Ruano, M V; García-Usach, F; Ferrer, J
2012-06-01
A submerged anaerobic MBR demonstration plant with two commercial hollow-fibre ultrafiltration systems (PURON®, Koch Membrane Systems, PUR-PSH31) was operated using municipal wastewater at high levels of mixed liquor total solids (MLTS) (above 22 g L(-1)). A modified flux-step method was applied to assess the critical flux (J(C)) at different gas sparging intensities. The results showed a linear dependency between J(C) and the specific gas demand per unit of membrane area (SGD(m)). J(C) ranged from 12 to 19 LMH at SGD(m) values of between 0.17 and 0.5 Nm(3) h(-1) m(-2), which are quite low in comparison to aerobic MBR. Long-term trials showed that the membranes operated steadily at fluxes close to the estimated J(C), which validates the J(C) obtained by this method. After operating the membrane for almost 2 years at sub-critical levels, no irreversible fouling problems were detected, and therefore, no chemical cleaning was conducted. Copyright © 2012 Elsevier Ltd. All rights reserved.
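The reported linear dependency of J(C) on SGD(m) can be illustrated by interpolating between the two quoted endpoints (12 LMH at 0.17 Nm3/h/m2 and 19 LMH at 0.5 Nm3/h/m2). This is a sketch built from those two points only; the study's actual fitted coefficients are not given in the abstract:

```python
def critical_flux(sgdm, p1=(0.17, 12.0), p2=(0.5, 19.0)):
    """Linearly interpolate critical flux J_C (LMH) from the specific gas
    demand per membrane area SGD_m (Nm3 h-1 m-2).

    The linear form matches the trend reported in the abstract; using the
    two endpoint values as anchors is an assumption for illustration.
    """
    slope = (p2[1] - p1[1]) / (p2[0] - p1[0])
    return p1[1] + slope * (sgdm - p1[0])

jc_mid = critical_flux(0.335)  # midpoint of the tested SGD_m range
```

The slope (about 21 LMH per Nm3/h/m2) quantifies how much extra sustainable flux each increment of gas sparging buys, which is the trade-off an operator would weigh against the energy cost of sparging.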
A dense cell retention culture system using stirred ceramic membrane reactor.
Suzuki, T; Sato, T; Kominami, M
1994-11-20
A novel reactor design incorporating porous ceramic tubes into a stirred jar fermentor was developed. The stirred ceramic membrane reactor has two ceramic tubular membrane units inside the vessel and maintains a high filtration flux by alternating each unit between filtering and recovery from clogging. Each filter unit was linked for both extraction of culture broth and gas sparging. High permeability was maintained for long periods by periodically alternating between filtering and air sparging during the stirred retention culture of Saccharomyces cerevisiae. The ceramic filter aeration system increased the k(L)a to about five times that of ordinary gas sparging. Using the automatic feeding and filtering system, cell mass concentration reached 207 g/L in a short time, versus 64 g/L in a fed-batch culture. More than 99% of the growing cells were retained in the fermentor by the filtering culture. Both yield and productivity of cells were also increased by controlling the feeding of fresh medium and filtering the supernatant of the dense cell culture. (c) 1994 John Wiley & Sons, Inc.
Wade, W N; Scouten, A J; McWatters, K H; Wick, R L; Demirci, A; Fett, W F; Beuchat, L R
2003-01-01
A study was done to determine the efficacy of aqueous ozone treatment in killing Listeria monocytogenes on inoculated alfalfa seeds and sprouts. Reductions in populations of naturally occurring aerobic microorganisms on sprouts and changes in the sensory quality of sprouts were also determined. The treatment (10 or 20 min) of seeds in water (4 degrees C) containing an initial concentration of 21.8 +/- 0.1 microg/ml of ozone failed to cause a significant (P < or = 0.05) reduction in populations of L. monocytogenes. The continuous sparging of seeds with ozonated water (initial ozone concentration of 21.3 +/- 0.2 microg/ml) for 20 min significantly reduced the population by 1.48 log10 CFU/g. The treatment (2 min) of inoculated alfalfa sprouts with water containing 5.0 +/- 0.5, 9.0 +/- 0.5, or 23.2 +/- 1.6 microg/ml of ozone resulted in significant (P < or = 0.05) reductions of 0.78, 0.81, and 0.91 log10 CFU/g, respectively, compared to populations detected on sprouts treated with water. Treatments (2 min) with up to 23.3 +/- 1.6 microg/ml of ozone did not significantly (P > 0.05) reduce populations of aerobic naturally occurring microorganisms. The continuous sparging of sprouts with ozonated water for 5 to 20 min caused significant reductions in L. monocytogenes and natural microbiota compared to soaking in water (control) but did not enhance the lethality compared to the sprouts not treated with continuous sparging. The treatment of sprouts with ozonated water (20.0 microg/ml) for 5 or 10 min caused a significant deterioration in the sensory quality during subsequent storage at 4 degrees C for 7 to 11 days. Scanning electron microscopy of uninoculated alfalfa seeds and sprouts showed physical damage, fungal and bacterial growth, and biofilm formation that provide evidence of factors contributing to the difficulty of killing microorganisms by treatment with ozone and other sanitizers.
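Reductions such as the "1.48 log10 CFU/g" quoted above are differences of log10 populations. A small helper showing the convention; the absolute population values below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def log_reduction(n_before, n_after):
    """Log10 reduction between two populations (e.g. CFU/g)."""
    return math.log10(n_before) - math.log10(n_after)

# A 1.48-log reduction leaves a surviving fraction of 10**-1.48,
# i.e. roughly 3% of the original population.
surviving_fraction = 10 ** -1.48
reduction = log_reduction(1e6, 1e6 * surviving_fraction)
```

This is why a reduction of under 1 log (as seen for the 2-min sprout treatments, 0.78-0.91 log) still leaves more than 10% of the population viable.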
Both MtBE and Benzene are present at over 86% of the Underground Storage Tank sites in Kansas, USA that require active remediation. In situ remedial technologies, consisting primarily of soil vapor extraction and air sparging, are the preferred choice for treatment for MtBE site...
Use of Cometabolic Air Sparging to Remediate Chloroethene-Contaminated Groundwater Aquifers
2001-07-31
[Report excerpt: during each sampling event, the temperature, dew point, and relative humidity of the soil gas were measured with a Control Company digital hygrometer/thermometer. Table-of-contents fragments reference groundwater and soil-gas multi-level monitoring points, groundwater monitoring wells, and appendices on soil-gas monitoring point data and historical data.]
Optimal policy for value-based decision-making.
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-08-18
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
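The drift diffusion model with collapsing decision boundaries that the paper identifies as optimal for value-based decisions can be simulated in a few lines. This is an illustrative simulation, not the authors' implementation; the drift, noise, bound, and collapse-rate values are assumptions:

```python
import random

def ddm_trial(drift, bound0=1.0, collapse=0.05, dt=0.01, sigma=1.0, seed=None):
    """Simulate one drift-diffusion trial with linearly collapsing bounds.

    Evidence x accumulates with mean rate `drift` plus Gaussian noise; a
    decision is made when x crosses the upper (+1) or lower (-1) bound.
    The bound shrinks over time, reflecting the paper's point that optimal
    value-based decisions require collapsing boundaries. All parameter
    values are illustrative assumptions. Returns (choice, decision_time).
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while True:
        bound = max(bound0 - collapse * t, 0.0)
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        x += drift * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        t += dt

choice, rt = ddm_trial(drift=1.0, seed=42)
```

Because the bound reaches zero at t = bound0/collapse, every trial terminates: a decision is forced even when the accumulated evidence is weak, which is exactly the time-pressure behavior a collapsing-bound policy produces.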
Hawkins, Aaron B.; Lian, Hong; Zeldes, Benjamin M.; Loder, Andrew J.; Lipscomb, Gina L.; Schut, Gerrit J.; Keller, Matthew W.; Adams, Michael W.W.; Kelly, Robert M.
2015-01-01
Metabolically engineered strains of the hyperthermophile Pyrococcus furiosus (Topt 95-100 °C), designed to produce 3-hydroxypropionate (3HP) from maltose and CO2 using enzymes from the Metallosphaera sedula (Topt 73 °C) carbon fixation cycle, were examined with respect to the impact of heterologous gene expression on metabolic activity, fitness at optimal and sub-optimal temperatures, gas-liquid mass transfer in gas-intensive bioreactors, and potential bottlenecks arising from product formation. Transcriptomic comparisons of wild-type P. furiosus, a genetically tractable, naturally competent mutant (COM1), and COM1-based strains engineered for 3HP production revealed numerous differences after the cultures were shifted from 95 °C to 72 °C, where product formation catalyzed by the heterologously produced M. sedula enzymes occurred. At 72 °C, significantly higher levels of metabolic activity and a stress response were evident in 3HP-forming strains compared to the non-producing parent strain (COM1). Gas-liquid mass transfer limitations were apparent, given that 3HP titers and volumetric productivity in stirred bioreactors could be increased over 10-fold by increased agitation and higher CO2 sparging rates, from 18 mg/L to 276 mg/L and from 0.7 mg/L/hr to 11 mg/L/hr, respectively. 3HP formation triggered transcription of genes for protein stabilization and turnover, RNA degradation, and reactive oxygen species detoxification. The results here support the prospects of using thermally diverse sources of pathways and enzymes in metabolically engineered strains designed for product formation at sub-optimal growth temperatures. PMID:25753826
USDA-ARS?s Scientific Manuscript database
Previously it was shown that the gas produced in an ethanol fermentor using either corn or barley as feedstock could be sparged directly into an adjacent fermentor using Escherichia coli AFP184 to provide the CO2 required for succinic acid production. In the present investigation it has been demons...
RECOVERY OF ACTINIDES FROM AQUEOUS NITRIC ACID SOLUTIONS
Ader, M.
1963-11-19
A process of recovering actinides is presented. Tetravalent actinides are extracted from rare earths in an aqueous nitric acid solution with a ketone and back-extracted from the ketone into an aqueous medium. The aqueous actinide solution thus obtained, prior to concentration by boiling, is sparged with steam to reduce its ketone to a maximum content of 3 grams per liter. (AEC)
2005-11-01
[Report excerpt: Task 6, incorporation of the heterogeneity-enhanced mechanisms in the UTCHEM numerical simulator; hydrogen sparging in a bench-scale three-dimensional sand pack model; simulation model for foam in porous media.]
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost, but uncertain demand makes accurate decisions on procurement volume difficult. For biomedicines with time- and season-sensitive demand, fitting the uncertain demand with fuzzy mathematics is clearly better than using general random distribution functions. The objective is to establish a novel medical information management and decision model for uncertain demand optimization. A novel optimal management and decision model under uncertain demand is presented, based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm. The model can effectively reduce medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence reduces the computational complexity of the optimal management and decision model. The new model can therefore be used for accurate decisions on procurement volume under uncertain demand.
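Particle swarm optimization, the workhorse of the model above, is simple to sketch. The abstract does not specify the "comprehensive improvement" or the fuzzy inventory objective, so the sketch below is a generic, unimproved PSO with a stand-in quadratic cost; both are assumptions for illustration only:

```python
import random

def pso(cost, dim=2, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimization minimizing `cost`.

    Each particle tracks its personal best; the swarm shares a global best.
    Velocities blend inertia (w), attraction to the personal best (c1),
    and attraction to the global best (c2). This is generic PSO, not the
    paper's improved variant.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Stand-in "inventory cost" (hypothetical), minimized at procurement (3, 5):
best, best_cost = pso(lambda x: (x[0] - 3) ** 2 + (x[1] - 5) ** 2)
```

In the paper's setting the cost function would instead be the fuzzy-demand inventory cost, which is where the fuzzy-inference improvement the authors describe would enter.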
Uranium (III)-Plutonium (III) co-precipitation in molten chloride
NASA Astrophysics Data System (ADS)
Vigier, Jean-François; Laplace, Annabelle; Renard, Catherine; Miguirditchian, Manuel; Abraham, Francis
2018-02-01
Co-management of the actinides in an integrated closed fuel cycle by a pyrochemical process is studied at the laboratory scale in France in the CEA-ATALANTE facility. In this context, the co-precipitation of U(III) and Pu(III) by wet argon sparging in LiCl-CaCl2 (30-70 mol%) molten salt at 705 °C is studied. Pu(III) is prepared in situ in the molten salt by carbochlorination of PuO2, and U(III) is then introduced as UCl3 after the chlorine is purged with argon to avoid any oxidation of uranium up to U(VI) by Cl2. The oxide conversion yield through wet argon sparging is quantitative. However, the preferential oxidation of U(III) relative to Pu(III) is responsible for a successive conversion of the two actinides, giving a mixture of UO2 and PuO2 oxides. Surprisingly, the conversion of Pu(III) alone under the same conditions leads to a mixture of PuO2 and PuOCl, characteristic of a partial oxidation of Pu(III) to Pu(IV). This contrasts with the co-conversion of U(III)-Pu(III) mixtures but agrees with the conversion of Ce(III).
Heuristic and optimal policy computations in the human brain during sequential decision-making.
Korn, Christoph W; Bach, Dominik R
2018-01-23
Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties
NASA Astrophysics Data System (ADS)
Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.
2017-12-01
Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks in flood control decisions. This paper defines the main steps in optimal flood control decision making, which constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making, together with a risk assessment model quantifying the risk of decision-making errors and the rank uncertainty degree, to characterize the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information at each link of the FODM chain and enable risk-informed decisions with higher reliability.
Lyon, W.L.
1962-04-17
A method of separating uranium oxides from PuO/sub 2/, ThO/sub 2/, and other actinide oxides is described. The oxide mixture is suspended in a fused salt melt and a chlorinating agent such as chlorine gas or phosgene is sparged through the suspension. Uranium oxides are selectively chlorinated and dissolve in the melt, which may then be filtered to remove the unchlorinated oxides of the other actinides. (AEC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFreniere, Lorraine M.
The CCC/USDA is currently implementing a KDHE-approved interim measure (IM) to address the contamination identified on its former property. This source control IM consists of large-diameter boreholes coupled with soil vapor extraction (SVE) and air sparging (AS). The CCC/USDA completed installation of the IM in May 2009. Assessment of the performance and effectiveness of the IM is being reported separately. Pro-Ag is conducting its own site investigation (KDHE 2011c).
1994-06-01
[Report excerpt: technologies were organized into five categories: In Situ Biological Treatment; In Situ Physical/Chemical Treatment; Ex Situ Biological Groundwater Treatment; ... Figure 11-3, Primary Scoring Summary, Ex Situ Biological Groundwater Treatment Technologies, Groundwater Operable Unit RI/FS, McClellan Air Force Base. Fragments also list cometabolic, anaerobic, and anaerobic/aerobic in situ biological treatment, and sparging/soil vapor extraction under in situ physical/chemical treatment.]
Purification of used eutectic (LiCl-KCl) salt electrolyte from pyroprocessing
NASA Astrophysics Data System (ADS)
Cho, Yung-Zun; Lee, Tae-Kyo; Eun, Hee-Chul; Choi, Jung-Hoon; Kim, In-Tae; Park, Geun-Il
2013-06-01
The separation characteristics of surrogate rare-earth fission products in a eutectic (LiCl-KCl) molten salt were investigated. This system is based on the eutectic salt used for the pyroprocessing treatment of used nuclear fuel (UNF). The investigation was performed using an integrated rare-earth separation apparatus comprising a precipitation reactor, a solid detachment device, and a layer separation device. To separate rare-earth fission products, a phosphate precipitation method using both Li3PO4 and K3PO4 as precipitants was employed. The use of a phosphate precipitant composed of K3PO4 and Li3PO4 at molar ratios of 0.408 and 0.592, respectively, preserves the original eutectic composition (0.592 molar ratio LiCl, or 45.2 wt%) while providing a separation efficiency of over 99.5% at 550 °C under Ar sparging when using La, Nd, Ce, and Pr chlorides. The mixture of La, Nd, Ce, and Pr phosphates had a typical monoclinic (monazite) structure, which has been proposed as a reliable host matrix for the permanent disposal of a high-level waste form. To maximize the reusability of the purified eutectic waste salt after rare-earth separation, a successive rare-earth separation process, which uses both phosphate precipitation and an oxygen sparging method, was introduced and tested with eight rare-earth (Y, La, Ce, Pr, Nd, Sm, Eu and Gd) chlorides. In the successive process, the phosphate reaction was terminated within 1 h at 550 °C, and an oxygen sparging time of 4-8 h was required to obtain over 99% separation efficiency at 700-750 °C. The mixture of rare-earth precipitates separated by the successive process was found to consist of phosphate, oxychloride, and oxide. Through the successive rare-earth separation process, the eutectic ratio of the purified salt maintained its original value, and the impurity content of the purified salt, including residual precipitant, was minimized.
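The composition figures quoted above are internally consistent, which a short arithmetic check makes explicit. The reaction stoichiometry assumed in the comment (RECl3 + M3PO4 → REPO4 + 3 MCl) is the standard phosphate precipitation route; the mole quantities in the second half are an arbitrary illustrative basis.

```python
# Check of the quoted compositions: a precipitant mixed at the eutectic Li:K
# ratio returns chloride at that same ratio (RECl3 + M3PO4 -> REPO4 + 3 MCl),
# so the LiCl-KCl eutectic composition is preserved.
M_LICL, M_KCL = 42.39, 74.55   # molar masses, g/mol
x_li = 0.592                   # LiCl mole fraction of the LiCl-KCl eutectic

wt_li = x_li * M_LICL / (x_li * M_LICL + (1 - x_li) * M_KCL)
# consistent with the 45.2 wt% LiCl quoted in the abstract

n_salt = 1.0    # moles of eutectic salt (arbitrary basis)
n_add = 0.3     # moles of chloride released by the precipitant (hypothetical)
x_after = (n_salt * x_li + n_add * 0.592) / (n_salt + n_add)  # unchanged
```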
Optimization and resilience in natural resources management
Williams, Byron K.; Johnson, Fred A.
2015-01-01
We consider the putative tradeoff between optimization and resilience in the management of natural resources, using a framework that incorporates different sources of uncertainty that are common in natural resources management. We address one-time decisions, and then expand the decision context to the more complex problem of iterative decision making. For both cases we focus on two key sources of uncertainty: partial observability of system state and uncertainty as to system dynamics. Optimal management strategies will vary considerably depending on the timeframe being considered and the amount and quality of information that is available to characterize system features and project the consequences of potential decisions. But in all cases an optimal decision making framework, if properly identified and focused, can be useful in recognizing sound decisions. We argue that under the conditions of deep uncertainty that characterize many resource systems, an optimal decision process that focuses on robustness does not automatically induce a loss of resilience.
A control-theory model for human decision-making
NASA Technical Reports Server (NTRS)
Levison, W. H.; Tanner, R. B.
1971-01-01
A model for human decision making is presented as an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain the concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision-making model is intended for situations in which the human bases a decision on an estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single-decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for the model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is poorer for the simultaneous decision-and-control situation.
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization, where decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
Response of a mouse hybridoma cell line to heat shock, agitation, and sparging
NASA Technical Reports Server (NTRS)
Passini, Cheryl A.; Goochee, Charles F.
1989-01-01
A mouse hybridoma cell line is used as a model system for studying the effect of environmental stress on attachment-independent mammalian cells. The full time course of recovery for a mouse hybridoma cell line from both a mild and intermediate heat shock is examined. The pattern of intracellular synthesis is compared for actively growing, log phase cells and nondividing, stationary phase cells.
Growth of plant root cultures in liquid- and gas-dispersed reactor environments.
McKelvey, S A; Gehrig, J A; Hollar, K A; Curtis, W R
1993-01-01
The growth of Agrobacterium-transformed "hairy root" cultures of Hyoscyamus muticus was examined in various liquid- and gas-dispersed bioreactor configurations. Reactor runs were replicated to provide statistical comparisons of the effect of nutrient availability on culture performance. Accumulated tissue mass in submerged air-sparged reactors was 31% of gyratory shake-flask controls. Experiments demonstrated that the poor performance of sparged reactors is not due to bubble shear damage, carbon dioxide stripping, or settling or flotation of roots; impaired oxygen transfer due to channeling and stagnation of the liquid phase is the apparent cause of poor growth. Roots grown on a medium-perfused inclined plane grew at 48% of gyratory controls, demonstrating the ability of cultures to partially compensate for poor liquid distribution through vascular transport of nutrients. A reactor configuration in which the medium is sprayed over the roots and permitted to drain down through the root tissue provided growth rates statistically indistinguishable (95% t-test) from gyratory shake-flask controls. In this type of spray/trickle-bed configuration, distribution of the roots becomes a key factor controlling the rate of growth. Implications of these results for the design and scale-up of bioreactors to produce fine chemicals from root cultures are discussed.
Air sparging: Air-water mass transfer coefficients
NASA Astrophysics Data System (ADS)
Braida, Washington J.; Ong, Say Kee
1998-12-01
Experiments investigating the mass transfer of several dissolved volatile organic compounds (VOCs) across the air-water interface were conducted using a single-air-channel air-sparging system. Three different porous media were used in the study. Air velocities ranged from 0.2 cm s^-1 to 2.5 cm s^-1. The tortuosity factor for each porous medium and the air-water mass transfer coefficients were estimated by fitting experimental data to a one-dimensional diffusion model. The estimated mass transfer coefficients KG ranged from 1.79 × 10^-3 cm min^-1 to 3.85 × 10^-2 cm min^-1. The estimated lumped gas-phase mass transfer coefficients KGa were found to be directly related to the air diffusivity of the VOC, the air velocity, and the particle size, and inversely related to the Henry's law constant of the VOCs. Of the four parameters investigated, the air diffusivity of the VOC had the dominant effect on the lumped gas-phase mass transfer coefficient. Two empirical models were developed by correlating the Damköhler and the modified air-phase Sherwood numbers with the air-phase Peclet number, the Henry's law constant, and the reduced mean particle size of the porous media. The correlations developed in this study may be used to obtain better predictions of mass transfer fluxes for field conditions.
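Empirical correlations of this kind are typically power laws in dimensionless groups, fitted as a linear regression in log space. The sketch below illustrates that procedure only; the functional form, exponents, and synthetic "measurements" are hypothetical, not the paper's fitted correlation.

```python
import numpy as np

# Fit a hypothetical power-law correlation Sh = a * Pe**b * H**c * dp**d
# by ordinary least squares on log-transformed variables.
rng = np.random.default_rng(0)
pe = rng.uniform(1.0, 50.0, 40)    # air-phase Peclet number
h = rng.uniform(0.1, 1.0, 40)      # dimensionless Henry's law constant
dp = rng.uniform(0.2, 2.0, 40)     # reduced mean particle size
sh = 0.05 * pe**0.8 * h**-0.5 * dp**0.3   # synthetic Sherwood numbers (noise-free)

# log Sh = log a + b log Pe + c log H + d log dp  ->  linear least squares
A = np.column_stack([np.ones_like(pe), np.log(pe), np.log(h), np.log(dp)])
coef, *_ = np.linalg.lstsq(A, np.log(sh), rcond=None)
a, b, c, d = np.exp(coef[0]), coef[1], coef[2], coef[3]  # recovers the exponents
```

With noise-free synthetic data the regression recovers the assumed exponents exactly; with real measurements the residuals would quantify the correlation's predictive error.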
Thiol-Disulfide Exchange in Peptides Derived from Human Growth Hormone
Chandrasekhar, Saradha; Epling, Daniel E.; Sophocleous, Andreas M.; Topp, Elizabeth M.
2014-01-01
Disulfide bonds stabilize proteins by crosslinking distant regions into a compact three-dimensional structure. They can also participate in hydrolytic and oxidative pathways to form non-native disulfide bonds and other reactive species. Such covalent modifications can contribute to protein aggregation. Here we present experimental data for the mechanism of thiol-disulfide exchange in tryptic peptides derived from human growth hormone in aqueous solution. Reaction kinetics were monitored to investigate the effect of pH (6.0-10.0), temperature (4-50 °C), oxidation suppressants (EDTA and N2 sparging) and peptide secondary structure (amide cyclized vs. open form). The concentrations of free thiol containing peptides, scrambled disulfides and native disulfide-linked peptides generated via thiol-disulfide exchange and oxidation reactions were determined using RP-HPLC and LC-MS. Concentration vs. time data were fitted to a mathematical model using non-linear least squares regression analysis. At all pH values, the model was able to fit the data with R² ≥ 0.95. Excluding oxidation suppressants (EDTA and N2 sparging) resulted in an increase in the formation of scrambled disulfides via oxidative pathways but did not influence the intrinsic rate of thiol-disulfide exchange. In addition, peptide secondary structure was found to influence the rate of thiol-disulfide exchange. PMID:24549831
Hawkins, Aaron B.; Lian, Hong; Zeldes, Benjamin M.; ...
2015-06-11
In this paper, metabolically engineered strains of the hyperthermophile Pyrococcus furiosus (Topt 95–100°C), designed to produce 3-hydroxypropionate (3HP) from maltose and CO2 using enzymes from the Metallosphaera sedula (Topt 73°C) carbon fixation cycle, were examined with respect to the impact of heterologous gene expression on metabolic activity, fitness at optimal and sub-optimal temperatures, gas-liquid mass transfer in gas-intensive bioreactors, and potential bottlenecks arising from product formation. Transcriptomic comparisons of wild-type P. furiosus, a genetically tractable, naturally competent mutant (COM1), and COM1-based strains engineered for 3HP production revealed numerous differences after a shift from 95°C to 72°C, where product formation catalyzed by the heterologously produced M. sedula enzymes occurred. At 72°C, significantly higher levels of metabolic activity and a stress response were evident in 3HP-forming strains compared to the non-producing parent strain (COM1). Gas-liquid mass transfer limitations were apparent, given that 3HP titers and volumetric productivity in stirred bioreactors could be increased over 10-fold by increased agitation and higher CO2 sparging rates, from 18 mg/L to 276 mg/L and from 0.7 mg/L/h to 11 mg/L/h, respectively. 3HP formation triggered transcription of genes for protein stabilization and turnover, RNA degradation, and reactive oxygen species detoxification. Lastly, the results here support the prospects of using thermally diverse sources of pathways and enzymes in metabolically engineered strains designed for product formation at sub-optimal growth temperatures.
Johnson, Tylor J; Zahler, Jacob D; Baldwin, Emily L; Zhou, Ruanbao; Gibbons, William R
2016-07-01
Cyanobacteria are currently being engineered to photosynthetically produce next-generation biofuels and high-value chemicals. Many of these chemicals are highly toxic to cyanobacteria, so strains with increased tolerance need to be developed. The volatility of these chemicals may necessitate that experiments be conducted in a sealed environment to maintain chemical concentrations. Therefore, carbon sources such as NaHCO3 must be used to support cyanobacterial growth instead of CO2 sparging. The primary goal of this study was to determine the optimal initial concentration of NaHCO3 for use in growth trials, as well as whether daily supplementation of NaHCO3 would allow for increased growth. The secondary goal was to determine the most accurate method to assess growth of Anabaena sp. PCC 7120 in a sealed environment with low biomass titers and small sample volumes. An initial concentration of 0.5 g/L NaHCO3 was found to be optimal for cyanobacterial growth, and fed-batch additions of NaHCO3 marginally improved growth. A separate study determined that a sealed test tube environment is necessary to maintain stable titers of volatile chemicals in solution. This study also showed that a SYTO® 9 fluorescence-based assay for cell viability was superior for monitoring filamentous cyanobacterial growth compared to absorbance, chlorophyll a (chl a) content, and biomass content due to its accuracy, small sampling size (100 μL), and high-throughput capabilities. Therefore, in future chemical inhibition trials, it is recommended that 0.5 g/L NaHCO3 be used as the carbon source and that culture viability be monitored via the SYTO® 9 fluorescence-based assay, which requires a minimal sample size. Copyright © 2016 Elsevier B.V. All rights reserved.
Treatment of Produced Water Using a Surfactant Modified Zeolite/Vapor Phase Bioreactor System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynn E. Katz; Kerry A. Kinney; Robert S. Bowman
2006-01-31
Co-produced water from the oil and gas industry accounts for a significant waste stream in the United States. Produced waters typically contain a high total dissolved solids content, dissolved organic constituents such as benzene and toluene, an oil and grease component, as well as chemicals added during the oil-production process. It has been estimated that a total of 14 billion barrels of produced water were generated in 2002 from onshore operations (Veil, 2004). Although much of this produced water is disposed via reinjection, environmental and cost considerations can make surface discharge of this water a more practical means of disposal. In addition, reinjection is not always a feasible option because of geographic, economic, or regulatory considerations. In these situations, it may be desirable, and often necessary from a regulatory viewpoint, to treat produced water before discharge. It may also be feasible to treat waters that slightly exceed regulatory limits for re-use in arid or drought-prone areas, rather than losing them to reinjection. A previous project conducted under DOE Contract DE-AC26-99BC15221 demonstrated that surfactant modified zeolite (SMZ) represents a potential treatment technology for produced water containing BTEX. Laboratory and field experiments suggest that: (1) sorption of benzene, toluene, ethylbenzene and xylenes (BTEX) to SMZ follows linear isotherms in which sorption increases with increasing solute hydrophobicity; (2) the presence of high salt concentrations substantially increases the capacity of the SMZ for BTEX; (3) competitive sorption among the BTEX compounds is negligible; and (4) complete recovery of the SMZ sorption capacity for BTEX can be achieved by air sparging the SMZ.
This report summarizes research for a follow-on project to optimize the regeneration process for multiple sorption/regeneration cycles, and to develop and incorporate a vapor phase bioreactor (VPB) system for treatment of the off-gas generated during air sparging. To this end, we conducted batch and column laboratory SMZ and VPB experiments with synthetic and actual produced waters. Based on the results of the laboratory testing, a pilot-scale study was designed and conducted to evaluate the combined SMZ/VPB process. An economic and regulatory feasibility analysis was also completed as part of the current study to assess the viability of the process for various water re-use options.
NASA Astrophysics Data System (ADS)
Helbing, Dirk; Schönhof, Martin; Kern, Daniel
2002-06-01
The coordinated and efficient distribution of limited resources by individual decisions is a fundamental, unsolved problem. When individuals compete for road capacities, time, space, money, goods, etc., they normally make decisions based on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed a volatile decision dynamics and far-from-optimal payoff distributions. We have also identified methods of information presentation that can considerably improve the overall performance of the system. In order to determine optimal strategies of decision guidance by means of user-specific recommendations, a stochastic behavioural description is developed. These strategies manage to increase the adaptability to changing conditions and to reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions by decision support systems and for information service providers. One of the promising fields of application is traffic optimization.
Simultaneous Optimization of Decisions Using a Linear Utility Function.
ERIC Educational Resources Information Center
Vos, Hans J.
1990-01-01
An approach is presented to simultaneously optimize decision rules for combinations of elementary decisions through a framework derived from Bayesian decision theory. The developed linear utility model for selection-mastery decisions was applied to a sample of 43 first year medical students to illustrate the procedure. (SLD)
On optimal soft-decision demodulation
NASA Technical Reports Server (NTRS)
Lee, L. N.
1975-01-01
Wozencraft and Kennedy have suggested that the appropriate demodulator criterion of goodness is the cut-off rate of the discrete memoryless channel created by the modulation system; the criterion of goodness adopted in this note is the symmetric cut-off rate, which differs from the former criterion only in that the signals are assumed equally likely. Massey's necessary condition for optimal demodulation of binary signals is generalized to M-ary signals. It is shown that the optimal demodulator decision regions in likelihood space are bounded by hyperplanes. An iterative method is formulated for finding these optimal decision regions from an initial good guess. For additive white Gaussian noise, the corresponding optimal decision regions in signal space are bounded by hypersurfaces with hyperplane asymptotes; these asymptotes themselves bound the decision regions of a demodulator which, in several examples, is shown to be virtually optimal. In many cases, the necessary condition for demodulator optimality is also sufficient, but a counterexample to its general sufficiency is given.
Optimizing model: insemination, replacement, seasonal production, and cash flow.
DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A
1992-03-01
Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of cow states (151,200) were used to determine the optimal policy. States were specified by class of parity (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). Methodology optimized decisions based on net present value of an individual cow and all replacements over a 20-yr decision horizon. Length of decision horizon was chosen to ensure that optimal policies were determined for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine post-optimization steady state herd structure, milk production, replacement, feed inputs and costs, and resulting cash flow on a calendar month and annual basis if optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, values of dietary TDN and CP per kilogram, and discount rate. Stochastic elements included conception (and, thus, subsequent freshening), cow milk production level within herd, and survival. Validation of optimized solutions was by separate simulation model, which implemented policies on a simulated herd and also described herd dynamics during transition to optimized structure.
Organizational Decision Making
1975-08-01
[Fragmentary abstract.] The report discusses the lack of formal techniques typically used by large organizations, digresses on the advantages of formal over informal optimization (for example, one might do a number of optimization calculations, each time using a different measure of effectiveness as the optimized [...] final decision), and notes that the next level of computer application involves the use of computerized optimization techniques.
Method for fixating sludges and soils contaminated with mercury and other heavy metals
Broderick, Thomas E.; Roth, Rachel L.; Carlson, Allan L.
2005-06-28
The invention relates to a method, composition and apparatus for stabilizing mercury and other heavy metals present in a particulate material such that the metals will not leach from the particulate material. The method generally involves the application of a metal reagent, a sulfur-containing compound, and the addition of oxygen to the particulate material, either through agitation, sparging or the addition of an oxygen-containing compound.
Globally optimal trial design for local decision making.
Eckermann, Simon; Willan, Andrew R
2009-02-01
Value of information methods allow decision makers to identify efficient trial designs by maximizing the expected value to decision makers of information from potential trial designs relative to their expected cost. However, in health technology assessment (HTA) the restrictive assumption has been made that, prospectively, there is expected value of sample information only from research commissioned within a jurisdiction. This paper extends the framework for optimal trial design and decision making within a jurisdiction to allow for optimal trial design across jurisdictions. This is illustrated by identifying an optimal trial design for decision making across the US, the UK, and Australia for early versus late external cephalic version for pregnant women presenting in the breech position. The expected net gain from locally optimal trial designs of US$0.72M is shown to increase to US$1.14M with a globally optimal trial design. In general, the proposed method of globally optimal trial design improves on optimal trial design within jurisdictions by: (i) reflecting the global value of non-rival information; (ii) allowing optimal allocation of the trial sample across jurisdictions; and (iii) avoiding the market failure associated with free-rider effects, sub-optimal spreading of fixed costs, and heterogeneity of trial information with multiple trials. Copyright (c) 2008 John Wiley & Sons, Ltd.
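The economic intuition (information is non-rival, so its value sums across jurisdictions while the fixed trial cost is paid once) can be illustrated with toy arithmetic; the EVSI figures and costs below are hypothetical and are not the paper's estimates.

```python
# Why one global trial can beat locally optimal trials (hypothetical US$M figures).
evsi = {"US": 0.9, "UK": 0.5, "AUS": 0.3}   # expected value of sample information
fixed_cost, variable_cost = 0.4, 0.2        # per-trial fixed and sampling costs

# Locally optimal: each jurisdiction funds its own trial and pays the full cost;
# a jurisdiction only runs a trial if its own expected net gain is positive.
local_net = {j: v - (fixed_cost + variable_cost) for j, v in evsi.items()}
local_total = sum(max(v, 0.0) for v in local_net.values())

# Globally optimal: one trial, cost paid once, non-rival value accrues everywhere.
global_net = sum(evsi.values()) - (fixed_cost + variable_cost)
```

In this sketch only the US trial is locally worthwhile (net 0.3), while the single global trial nets 1.1, mirroring the direction of the paper's US$0.72M-to-US$1.14M improvement. A full analysis would also optimize the sample size and its allocation across jurisdictions.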
A framework for designing and analyzing binary decision-making strategies in cellular systems
Porter, Joshua R.; Andrews, Burton W.; Iglesias, Pablo A.
2015-01-01
Cells make many binary (all-or-nothing) decisions based on noisy signals gathered from their environment and processed through noisy decision-making pathways. Reducing the effect of noise to improve the fidelity of decision-making comes at the expense of increased complexity, creating a tradeoff between performance and metabolic cost. We present a framework based on rate distortion theory, a branch of information theory, to quantify this tradeoff and design binary decision-making strategies that balance low cost and accuracy in optimal ways. With this framework, we show that several observed behaviors of binary decision-making systems, including random strategies, hysteresis, and irreversibility, are optimal in an information-theoretic sense for various situations. This framework can also be used to quantify the goals around which a decision-making system is optimized and to evaluate the optimality of cellular decision-making systems by a fundamental information-theoretic criterion. As proof of concept, we use the framework to quantify the goals of the externally triggered apoptosis pathway. PMID:22370552
Rajavel, Rajkumar; Thangarathinam, Mala
2015-01-01
Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to the misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research work focuses on the prerequest context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision-making model for optimizing the negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision, using the stochastic decision tree scenario, can maximize the revenue among the participants available in the cloud service negotiation framework. PMID:26543899
Dispositional optimism, self-framing and medical decision-making.
Zhao, Xu; Huang, Chunlei; Li, Xuesong; Zhao, Xin; Peng, Jiaxi
2015-03-01
Self-framing is an important but underinvestigated area in risk communication and behavioural decision-making, especially in medical settings. The present study investigated the relationships among dispositional optimism, self-frame, and decision-making. Participants (N = 500) responded to the Life Orientation Test-Revised and a self-framing test based on a medical decision-making problem. Participants whose scores were above the median were regarded as highly optimistic individuals; the rest were regarded as less optimistic individuals. The results showed that, compared to the high dispositional optimism group, participants in the low dispositional optimism group showed a greater tendency to use negative vocabulary to construct their self-frame and tended to choose the radiation therapy with a high treatment survival rate but a low 5-year survival rate. Based on the current findings, it can be concluded that the self-framing effect exists in medical situations and that individual differences in dispositional optimism can influence the processing of information in a framed decision task, as well as risky decision-making. © 2014 International Union of Psychological Science.
People adopt optimal policies in simple decision-making, after practice and guidance.
Evans, Nathan J; Brown, Scott D
2017-04-01
Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.
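The "statistically optimal policy" for this tradeoff is often analyzed with a drift-diffusion model, where accuracy and mean decision time have closed forms for symmetric bounds and the reward rate is maximized at an intermediate threshold. The sketch below uses those standard closed forms; the drift, noise, and timing values are hypothetical.

```python
import numpy as np

# Speed-accuracy tradeoff in a drift-diffusion model with symmetric bounds ±z:
# accuracy = 1 / (1 + exp(-2*a*z/sigma^2)), mean decision time = (z/a)*tanh(a*z/sigma^2).
a, sigma = 1.0, 1.0            # drift rate, diffusion noise (hypothetical)
t0, iti = 0.3, 2.0             # non-decision time, inter-trial interval in seconds

z = np.linspace(0.05, 3.0, 500)            # candidate decision thresholds
acc = 1.0 / (1.0 + np.exp(-2 * a * z / sigma**2))
dt = (z / a) * np.tanh(a * z / sigma**2)   # mean decision time
reward_rate = acc / (dt + t0 + iti)        # correct responses per second

z_opt = z[np.argmax(reward_rate)]          # reward-rate-optimal threshold
```

The optimum lies strictly between the most urgent and most cautious extremes; the "excessive caution" observed in human participants corresponds to operating at a threshold above this optimum, and the study's practice-plus-feedback manipulation moves participants toward it.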
Optimal multisensory decision-making in a reaction-time task.
Drugowitsch, Jan; DeAngelis, Gregory C; Klier, Eliana M; Angelaki, Dora E; Pouget, Alexandre
2014-06-14
Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (a reaction-time task), or when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.
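Statistically near-optimal cue integration of the kind tested here is usually modeled as reliability-weighted averaging: each cue is weighted by its inverse variance, which minimizes the variance of the combined estimate. A minimal sketch with made-up visual and vestibular noise levels (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
true_heading = 5.0           # degrees; illustrative value
sd_vis, sd_vest = 2.0, 4.0   # assumed single-cue noise levels
n = 100_000

vis = rng.normal(true_heading, sd_vis, n)
vest = rng.normal(true_heading, sd_vest, n)

# Inverse-variance weighting: w_i proportional to 1 / sd_i^2
w_vis = (1 / sd_vis**2) / (1 / sd_vis**2 + 1 / sd_vest**2)
combined = w_vis * vis + (1 - w_vis) * vest

# Theory predicts combined sd = sqrt(1 / (1/sd_vis^2 + 1/sd_vest^2)),
# which is smaller than either single-cue sd.
print(round(combined.std(), 3))
```

The combined estimate is less variable than either cue alone, which is the signature of optimal integration that fixed-duration studies test for.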
Decision on risk-averse dual-channel supply chain under demand disruption
NASA Astrophysics Data System (ADS)
Yan, Bo; Jin, Zijie; Liu, Yanping; Yang, Jianbo
2018-02-01
We studied dual-channel supply chains using centralized and decentralized decision-making models, and conducted a comparative analysis of decisions before and after demand disruption. The study shows that the change in decision-making is a linear function of the amount of demand disruption and is independent of the risk-aversion coefficient. In the decentralized decision-making model, the optimal sales-volume decision of the disrupted supply chain is related to market share and demand disruption; in the centralized model, the optimal decision is influenced only by demand disruption. The stability of the sales volume in the two models is related to market share and demand disruption. The optimal system production of the two models is robust, but their stability intervals differ.
Zhong, Toni; Bagher, Shaghayegh; Jindal, Kunaal; Zeng, Delong; O'Neill, Anne C; MacAdam, Sheina; Butler, Kate; Hofer, Stefan O P; Pusic, Andrea; Metcalfe, Kelly A
2013-12-01
It is not known if optimism influences regret following major reconstructive breast surgery. We examined the relationship between dispositional optimism, major complications and decision regret in patients undergoing microsurgical breast reconstruction. A consecutive series of 290 patients were surveyed. Independent variables were: (1) dispositional optimism and (2) major complications. The primary outcome was Decision Regret. A multivariate regression analysis determined the relationship between the independent variables, confounders and decision regret. Of the 181 respondents, 63% reported no regret after breast reconstruction, 26% had mild regret, and 11% moderate to severe regret. Major complications did not have a significant effect on decision regret, and the impact of dispositional optimism was not significant in Caucasian women. There was a significant effect in non-Caucasian women with less optimism who had significantly higher levels of mild regret 1.36 (CI 1.02-1.97) and moderate to severe regret 1.64 (CI 1.0-93.87). This is the first paper to identify a subgroup of non-Caucasian patients with low dispositional optimism who may be at risk for developing regret after microsurgical breast reconstruction. Possible strategies to ameliorate regret may involve addressing cultural and language barriers, setting realistic expectations, and providing more support during the pre-operative decision-making phase. © 2013 Wiley Periodicals, Inc.
Comparison of residual NAPL source removal techniques in 3D metric scale experiments
NASA Astrophysics Data System (ADS)
Atteia, O.; Jousse, F.; Cohen, G.; Höhener, P.
2017-07-01
This study compared four treatment techniques for the removal of a toluene/n-decane NAPL (non-aqueous-phase liquid) mixture in identical 1-cubic-meter tanks filled with different kinds of sand. The four techniques were: oxidation with persulfate, surfactant washing with Tween80®, sparging with air followed by ozone, and thermal treatment at 80 °C. The sources were made of three lenses of 26 × 26 × 6.5 cm, one having a hydraulic conductivity similar to that of the whole tank and the other two a value 10 times smaller. The four techniques were studied after conditioning the tanks with tap water for approximately 80 days. The persulfate tests showed average removal of the contaminants but a significant flux decrease when density effects are considered. Surfactant flushing did not significantly increase the flux of toluene but did increase the removal rate, which could lead to almost complete removal with a longer treatment time. Sparging removed a significant amount, but the results suggest that air passed through localized gas channels and that removal stagnated after about half of the contamination had been removed. Thermal treatment reached 100% removal after the target temperature of 80 °C was maintained for more than 10 days. The experiments emphasized the generation of high spatial heterogeneity in NAPL content. For all treatments the overall removal was similar for both n-decane and toluene, suggesting that toluene was removed rapidly and n-decane more slowly in some zones, while no removal occurred in other zones. The oxidation and surfactant results were also analyzed for the relation between contaminant fluxes at the outlet and mass removal. For the first time, this approach clearly allowed differentiation of the treatments.
In conclusion, the experiments showed that the most important differences between the tested techniques were not the global mass removal rates but the time required to reach a 99% decrease in contaminant fluxes, which differed for each technique. This paper presents the first comparison of four remediation techniques at the scale of 1 m³ tanks that include heterogeneities. Sparging, persulfate and surfactant removed only about 50% of the mass, versus more than 99% for thermal treatment. In terms of flux removal, oxidant addition performs better when density effects are exploited.
Robust optimization modelling with applications to industry and environmental problems
NASA Astrophysics Data System (ADS)
Chaerani, Diah; Dewanto, Stanley P.; Lesmana, Eman
2017-10-01
Robust Optimization (RO) modeling is one of the existing methodologies for handling data uncertainty in optimization problems. The main challenge in RO is how and when the robust counterpart of an uncertain problem can be reformulated as a computationally tractable optimization problem, or at least approximated by a tractable problem. By definition, the robust counterpart depends strongly on the choice of the uncertainty set; consequently, the challenge can be met only if this set is chosen in a suitable way. RO has developed rapidly: since 2004, a newer approach called Adjustable Robust Optimization (ARO) has been available to handle uncertain problems in which some decision variables must be decided as "wait and see" variables, unlike classic RO, which models all decision variables as "here and now". In ARO, the uncertain problem can be treated as a multistage decision problem, with the decision variables involved becoming wait-and-see variables. In this paper we present applications of both RO and ARO, summarizing the results briefly to underline the importance of RO and ARO in many real-life problems.
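To make the tractability point concrete, the sketch below builds the robust counterpart of a small uncertain linear program with box uncertainty, where the worst case collapses to a single deterministic constraint. All numbers are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Nominal problem: maximize 3*x1 + 2*x2  s.t.  a^T x <= 10, x >= 0,
# where the constraint coefficients a are uncertain within a box
# [a_bar - delta, a_bar + delta] (made-up numbers).
a_bar = np.array([2.0, 1.0])
delta = np.array([0.5, 0.2])
c = np.array([3.0, 2.0])

# For x >= 0, the worst case of a^T x over the box is (a_bar + delta)^T x,
# so the robust counterpart is itself a tractable linear program.
res_nominal = linprog(-c, A_ub=[a_bar], b_ub=[10.0], bounds=[(0, None)] * 2)
res_robust = linprog(-c, A_ub=[a_bar + delta], b_ub=[10.0], bounds=[(0, None)] * 2)

print(round(-res_nominal.fun, 2))  # nominal optimal value
print(round(-res_robust.fun, 2))   # robust value (more conservative)
```

The robust solution sacrifices some objective value in exchange for feasibility under every realization of the uncertain data, which is exactly the "price of robustness" the RO literature studies.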
The impact of chief executive officer optimism on hospital strategic decision making.
Langabeer, James R; Yao, Emery
2012-01-01
Previous strategic decision-making research has focused mostly on the analytical positioning approach, which broadly emphasizes an alignment between rationality and the external environment. In this study, we propose that hospital chief executive optimism (the general tendency to expect positive future outcomes) moderates the relationship between a comprehensively rational decision-making process and organizational performance. The purpose of this study was to explore the impact that dispositional optimism has on the well-established relationship between rational decision-making processes and organizational performance. Specifically, we hypothesized that optimism moderates the relationship between the level of rationality and the organization's performance, and that this relationship is more negative for executives with high, as opposed to low, optimism. We surveyed 168 hospital CEOs and used moderated hierarchical regression methods to statistically test our hypotheses. We found evidence of a complex interplay of optimism in the rationality-organizational performance relationship: the two-way interactions between optimism and rational decision making were negatively associated with performance, and where optimism was highest, the rationality-performance relationship was most negative. Executive optimism was positively associated with organizational performance. We also found that greater perceived environmental turbulence, when interacting with optimism, did not have a significant interaction effect on the rationality-performance relationship. These findings suggest potential for broader participation in strategic processes and for the use of organizational development techniques that assess executive disposition and traits in recruitment, because CEO optimism influences hospital-level processes.
Research implications include incorporating greater use of behavior and cognition constructs to better depict decision-making processes in complex organizations like hospitals.
C-learning: A new classification framework to estimate optimal dynamic treatment regimes.
Zhang, Baqun; Zhang, Min
2017-12-11
A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem, and we propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point the optimization is equivalent to minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm that learns the optimal dynamic treatment regimes backward sequentially, from the last stage to the first. C-learning directly targets optimizing decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of patients' characteristics and treatment histories to improve performance, thereby enjoying advantages of both the traditional outcome-regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
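The backward, stage-by-stage logic can be illustrated with the simpler regression-based estimator the abstract compares against (Q-learning-style backward induction) rather than C-learning itself. The two-stage simulated example below uses made-up linear outcome models: the true stage-2 treatment contrast is 1 − 2·X2 and the true stage-1 contrast is 0.5 − X1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Stage 1: baseline covariate and randomized treatment
X1 = rng.normal(0, 1, n)
A1 = rng.integers(0, 2, n)
# Stage 2: covariate depends on history; treatment again randomized
X2 = X1 + rng.normal(0, 1, n)
A2 = rng.integers(0, 2, n)
# Final outcome with stage-2 effect (1 - 2*X2) and stage-1 effect (0.5 - X1)
Y = 1 + 0.5 * X1 + A2 * (1 - 2 * X2) + A1 * (0.5 - X1) + rng.normal(0, 0.5, n)

def fit(Z, y):
    return np.linalg.lstsq(Z, y, rcond=None)[0]

# Last stage first: regress Y on [1, X2, A2, A2*X2]
D2 = np.column_stack([np.ones(n), X2, A2, A2 * X2])
b2 = fit(D2, Y)
contrast2 = b2[2] + b2[3] * X2  # estimated benefit of choosing A2 = 1

# Pseudo-outcome: the outcome had the optimal stage-2 rule been followed
Ytilde = Y + (np.maximum(contrast2, 0) - A2 * contrast2)

# Then stage 1: regress the pseudo-outcome on [1, X1, A1, A1*X1]
D1 = np.column_stack([np.ones(n), X1, A1, A1 * X1])
b1 = fit(D1, Ytilde)
```

The fitted contrasts recover the true decision rules (treat at stage 2 when 1 − 2·X2 > 0, at stage 1 when 0.5 − X1 > 0); C-learning replaces each regression step with a weighted classification problem.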
NASA Astrophysics Data System (ADS)
Danilova, Olga; Semenova, Zinaida
2018-04-01
The objective of this study is a detailed analysis of the development of physical protection systems for information resources. The mathematical apparatus of optimization theory and decision-making is used to correctly formulate and create a selection-procedure algorithm for the optimal configuration of security systems, considering the location of the secured object's access points and zones. The result of this study is a software implementation scheme of a decision-making system for optimal placement of the physical access control system's elements.
A plastic corticostriatal circuit model of adaptation in perceptual decision making
Hsiao, Pao-Yueh; Lo, Chung-Chuan
2013-01-01
The ability to optimize decisions and adapt them to changing environments is a crucial brain function that increases survivability. Although much has been learned about the neuronal activity in various brain regions associated with decision making, and about how nervous systems may learn to achieve optimization, the underlying neuronal mechanisms by which they optimize decision strategies with preference given to speed or accuracy, and adapt to changes in the environment, remain unclear. Motivated by extensive empirical observations, we addressed the question by extending a previously described cortico-basal ganglia circuit model of perceptual decisions to include a dynamic dopamine (DA) system that modulates spike-timing dependent plasticity (STDP). We found that, once an optimal model setting that maximized the reward rate was selected, the same setting automatically optimized decisions across different task environments through dynamic balancing between the facilitating and depressing components of the DA dynamics. Interestingly, other model parameters were also optimal if we considered the reward rate weighted by the subject's preference for speed or accuracy. Specifically, the circuit model favored speed if we increased the phasic DA response to the reward prediction error, whereas it favored accuracy if we reduced the tonic DA activity or the phasic DA responses to the estimated reward probability. The proposed model provides insight into the roles of different components of DA responses in decision adaptation and optimization in a changing environment. PMID:24339814
Research on Bidding Decision-making of International Public-Private Partnership Projects
NASA Astrophysics Data System (ADS)
Hu, Zhen Yu; Zhang, Shui Bo; Liu, Xin Yan
2018-06-01
In order to select the optimal quasi-bidding project for an investment enterprise, a bidding decision-making model for international PPP projects was established in this paper. Firstly, the literature frequency statistics method was adopted to screen out the bidding decision-making indexes, and accordingly the bidding decision-making index system for international PPP projects was constructed. Then, the group decision-making characteristic root method, the entropy weight method, and the optimization model based on least square method were used to set the decision-making index weights. The optimal quasi-bidding project was thus determined by calculating the consistent effect measure of each decision-making index value and the comprehensive effect measure of each quasi-bidding project. Finally, the bidding decision-making model for international PPP projects was further illustrated by a hypothetical case. This model can effectively serve as a theoretical foundation and technical support for the bidding decision-making of international PPP projects.
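Of the weighting schemes mentioned above, the entropy weight method is the most mechanical: indexes whose values vary more across candidate projects (lower entropy) receive larger weights. A minimal sketch with a hypothetical 4-project × 3-index decision matrix (benefit-type indexes; all values illustrative, not the paper's data):

```python
import numpy as np

# Rows = candidate quasi-bidding projects, columns = decision indexes
X = np.array([
    [0.8, 60.0, 0.9],
    [0.6, 80.0, 0.7],
    [0.9, 50.0, 0.6],
    [0.7, 70.0, 0.8],
])
m, n = X.shape

P = X / X.sum(axis=0)                 # column-normalized proportions
k = 1.0 / np.log(m)
E = -k * (P * np.log(P)).sum(axis=0)  # entropy of each index (in [0, 1])
w = (1.0 - E) / (1.0 - E).sum()       # entropy weights, summing to 1

print(np.round(w, 3))
```

In a full model these objective weights would be combined with the subjective weights from the group characteristic-root method, e.g. via the least-squares optimization the paper describes.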
Optimal allocation model of construction land based on two-level system optimization theory
NASA Astrophysics Data System (ADS)
Liu, Min; Liu, Yanfang; Xia, Yuping; Lei, Qihong
2007-06-01
The allocation of construction land is an important task in land-use planning, and whether planning decisions are implemented successfully usually depends on a reasonable and scientific distribution method. Given the constitution of the land-use planning system and the planning process in China, the problem is essentially one of multi-level, multi-objective decision-making. In particular, decomposing planning quantities is a two-level system optimization problem: an optimal resource allocation decision between a top-level decision-maker and a number of parallel lower-level decision-makers. Following the characteristics of the decision-making process in a two-level decision system, this paper develops an optimal allocation model of construction land based on two-level linear programming. To verify the rationality and validity of the model, Baoan district of Shenzhen City was taken as a test case, and construction land was allocated to the ten townships of Baoan district with the assistance of the model. The result obtained from the model is compared with that of the traditional method, and the comparison shows that the model is reasonable and usable. Finally, the paper points out the shortcomings of the model and directions for further research.
NASA Astrophysics Data System (ADS)
Bascetin, A.
2007-04-01
The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning; it also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assessing priorities based on the inputs of several specialists from different functional areas within the mining company. The analytic hierarchy process (AHP) can be very useful for involving several decision makers with conflicting objectives in arriving at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region of Turkey. Use of the proposed model indicates that it can improve group decision making in selecting a reclamation method that satisfies optimal specifications; the decision process is systematic, and the proposed model can reduce the time taken to select an optimal method.
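AHP derives priority weights from a pairwise comparison matrix via its principal eigenvector and checks judgment quality with a consistency ratio. The sketch below uses a hypothetical 3 × 3 comparison matrix for three candidate reclamation methods, not the mine's actual expert judgments.

```python
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale (reciprocal matrix; made-up values)
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)              # principal eigenvalue
w = np.abs(eigvecs[:, i].real)
w = w / w.sum()                          # priority weights

lam_max = eigvals[i].real
n = A.shape[0]
CI = (lam_max - n) / (n - 1)             # consistency index
CR = CI / 0.58                           # random index RI = 0.58 for n = 3

print(np.round(w, 3), round(CR, 3))
```

A CR below 0.1 is the usual acceptability threshold; in a group setting, individual judgments are typically aggregated (e.g., by geometric mean) before this computation.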
Simulation-based planning for theater air warfare
NASA Astrophysics Data System (ADS)
Popken, Douglas A.; Cox, Louis A., Jr.
2004-08-01
Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.
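The "fast, Lanchester-type attrition simulations" referred to above can be as simple as integrating the Lanchester square law, in which each side's losses are proportional to the opposing force size. A minimal sketch with illustrative force sizes and attrition coefficients (not the paper's scenario data):

```python
def lanchester_square(red, blue, kr, kb, dt=0.01, t_max=50.0):
    """Euler integration of dR/dt = -kb*B, dB/dt = -kr*R until one side is gone."""
    t = 0.0
    while red > 0 and blue > 0 and t < t_max:
        red, blue = red - kb * blue * dt, blue - kr * red * dt
        t += dt
    return max(red, 0.0), max(blue, 0.0), t

# Equal effectiveness, unequal numbers: the square law predicts the larger
# force wins with roughly sqrt(100^2 - 80^2) = 60 survivors.
r, b, t = lanchester_square(red=100.0, blue=80.0, kr=0.05, kb=0.05)
```

A planner's simulation-optimization loop would call such an attrition model many times per candidate allocation, which is why closed-form or cheaply integrated attrition dynamics are preferred over detailed engagement models at the upper planning levels.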
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del Cul, G.D.; Toth, L.M.; Bond, W.D.
The concern that there might be some physical-chemical process which would lead to a separation of the poisoning actinides ({sup 232}Th, {sup 238}U) from the fissionable ones ({sup 239}Pu, {sup 235}U) in waste storage tanks at Oak Ridge National Laboratory has led to a paper study of potential separations processes involving these elements. At the relatively high pH values (>8), the actinides are normally present as precipitated hydroxides. Mechanisms that might then selectively dissolve and reprecipitate the actinides through thermal processes or additions of reagents were addressed. Although redox reactions, pH changes, and complexation reactions were all considered, only the last type was regarded as having any significant probability. Furthermore, only carbonate accumulation, through continual unmonitored air sparging of the tank contents, could credibly account for gross transport and separation of the actinide components. From the large amount of equilibrium data in the literature, concentration differences in Th, U, and Pu due to carbonate complexation as a function of pH have been presented to demonstrate this phenomenon. While the carbonate effect does represent a potential separations process, control of long-term air sparging and solution pH, accompanied by routine determinations of soluble carbonate concentration, should ensure that this separations process does not occur.
Assessment of bacterial and archaeal community structure in Swine wastewater treatment processes.
Da Silva, Marcio Luis Busi; Cantão, Mauricio Egídio; Mezzari, Melissa Paola; Ma, Jie; Nossa, Carlos Wolfgang
2015-07-01
Microbial communities from two field-scale swine wastewater treatment plants (WWTPs) were assessed by pyrosequencing analyses of bacterial and archaeal 16S ribosomal DNA (rDNA) fragments. Effluent samples from secondary (anaerobic covered lagoons and upflow anaerobic sludge blanket [UASB]) and tertiary treatment systems (an open-pond natural attenuation lagoon, and an air-sparged nitrification-denitrification tank followed by an alkaline phosphorus precipitation process) were analyzed. A total of 56,807 and 48,859 high-quality reads were obtained from the bacterial and archaeal libraries, respectively. Dominant bacterial communities were associated with the phyla Firmicutes, Bacteroidetes, Proteobacteria, and Actinobacteria. Bacterial and archaeal diversity were highest in the UASB effluent sample. Escherichia, Lactobacillus, Bacteroides, and/or Prevotella were used as indicators of putative pathogen reduction throughout the WWTPs. Satisfactory pathogen reduction was observed after the open-pond natural attenuation lagoon but not after the air-sparged nitrification/denitrification followed by alkaline phosphorus precipitation. Among the archaeal communities, 80% of the reads were related to the hydrogenotrophic methanogen Methanospirillum. Enrichment of hydrogenotrophic methanogens in effluent samples from the anaerobic covered lagoons and UASB suggested that CO2 reduction with H2 was the dominant methanogenic pathway in these systems. Overall, the results improve our current understanding of major microbial community changes downgradient from the pen and throughout swine WWTPs as a result of different treatment processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, K.E.; Dickhut, R.M.
1997-03-01
Gas sparging, semipermeable-membrane devices (SPMDs), and filtration with sorption of dissolved polycyclic aromatic hydrocarbons (PAHs) to XAD-2 resin were evaluated for determining the concentrations of freely dissolved PAHs in estuarine waters of southern Chesapeake Bay at sites ranging from rural to urban and highly industrialized. Gas sparging had significant sampling artifacts due to particle scavenging by rising bubbles, and SPMDs were kinetically limited for four-ring and larger PAHs relative to short-term temporal changes in water concentrations. Filtration with sorption of the dissolved contaminant fraction to XAD-2 resin was found to be the most accurate and feasible method for determining concentrations of freely dissolved PAHs in estuarine water. Concentrations and distribution coefficients of dissolved and particulate PAHs were measured using the filtration/XAD-2 method. Concentrations of PAHs in surface waters of southern Chesapeake Bay were higher than those reported for the northern bay; concentrations in the Elizabeth River were elevated relative to all other sites. A gradient for particulate PAHs was observed from urban to remote sites. No seasonal trends were observed in dissolved or particle-bound PAH fractions at any site. Distributions of dissolved and particulate PAHs in surface waters of the Chesapeake Bay are near equilibrium at all locations and during all seasons.
Zu, Xianghuan; Yang, Chuanlei; Wang, Hechun; Wang, Yinyan
2018-01-01
Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, tasks that currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters; the optimal compromise between diesel engine combustion and emission performance is thus transformed into a decision-making target-weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target-weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization.
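The grey relational step can be sketched as follows: each candidate (here, a hypothetical EGR setting) is scored by its closeness to an ideal reference sequence through grey relational coefficients, and the candidate with the highest mean coefficient (relational grade) is preferred. All numbers below are illustrative, not engine data.

```python
import numpy as np

# Rows = candidate EGR settings, columns = normalized benefit scores
# (e.g., NOx reduction, fuel economy, soot); made-up values.
X = np.array([
    [0.90, 0.70, 0.80],
    [0.60, 0.95, 0.75],
    [0.80, 0.85, 0.90],
])
x0 = np.ones(3)   # ideal reference: best score on every objective
rho = 0.5         # distinguishing coefficient (conventional choice)

delta = np.abs(X - x0)
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grades = xi.mean(axis=1)      # grey relational grade per candidate
best = int(np.argmax(grades))
```

In the paper's full method, the columns would additionally be weighted by the entropy-based target weights rather than averaged equally.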
A detailed comparison of optimality and simplicity in perceptual decision-making
Shen, Shan; Ma, Wei Ji
2017-01-01
Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259
A Compensatory Approach to Optimal Selection with Mastery Scores. Research Report 94-2.
ERIC Educational Resources Information Center
van der Linden, Wim J.; Vos, Hans J.
This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious example is the use of individualized instruction in…
Modelling decision-making by pilots
NASA Technical Reports Server (NTRS)
Patrick, Nicholas J. M.
1993-01-01
Our scientific goal is to understand the process of human decision-making. Specifically, we seek a model of human decision-making in piloting modern commercial aircraft that prescribes optimal behavior and against which we can measure human sub-optimality. This model should help us understand such diverse aspects of piloting as strategic decision-making and the implicit decisions involved in attention allocation. Our engineering goal is to provide design specifications for (1) better computer-based decision aids, and (2) better training programs for the human pilot (or human decision-maker, DM).
Research on the decision-making model of land-use spatial optimization
NASA Astrophysics Data System (ADS)
He, Jianhua; Yu, Yan; Liu, Yanfang; Liang, Fei; Cai, Yuqiu
2009-10-01
Using the results of landscape pattern and land-use structure optimization as constraints on cellular automata (CA) simulation, a decision-making model of land-use spatial optimization is established that couples the landscape pattern model with cellular automata to realize quantitative and spatial land-use optimization simultaneously. Huangpi district is taken as a case study to verify the rationality of the model.
Geessink, Noralie H; Schoon, Yvonne; van Herk, Hanneke C P; van Goor, Harry; Olde Rikkert, Marcel G M
2017-03-01
To identify key elements of optimal treatment decision-making for surgeons and older patients with colorectal (CRC) or pancreatic cancer (PC). Six focus groups with different participants were performed: three with older CRC/PC patients and relatives, and three with physicians. Supplementary in-depth interviews were conducted in another seven patients. Framework analysis was used to identify key elements in decision-making. 23 physicians, 22 patients and 14 relatives participated. Three interacting components were revealed: preconditions, content and facilitators of decision-making. To provide optimal information about treatments' impact on an older patient's daily life, physicians should obtain an overall picture and take into account patients' frailty. Depending on patients' preferences and capacities, dividing decision-making into more sessions will be helpful and simultaneously emphasize patients' own responsibility. GPs may have a valuable contribution because of their background knowledge and supportive role. Stakeholders identified several crucial elements in the complex surgical decision-making of older CRC/PC patients. Structured qualitative research may also be of great help in optimizing other treatment directed decision-making processes. Surgeons should be trained in examining preconditions and useful facilitators in decision-making in older CRC/PC patients to optimize its content and to improve the quality of shared care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
On optimal soft-decision demodulation. [in digital communication system
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1976-01-01
A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.
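The symmetric cutoff-rate criterion used as the optimality measure here is straightforward to evaluate numerically for any discrete memoryless channel. The following sketch (my own illustration, not code from the paper) computes R0 under a uniform input distribution and checks it against the closed form for a binary symmetric channel:

```python
import numpy as np

def symmetric_cutoff_rate(P):
    """Symmetric cutoff rate R0 (bits per channel use) of a discrete
    memoryless channel, assuming a uniform distribution over the M inputs.
    P[x, y] = P(y | x)."""
    M = P.shape[0]
    inner = (np.sqrt(P).sum(axis=0) / M) ** 2   # ((1/M) * sum_x sqrt(P(y|x)))**2
    return -np.log2(inner.sum())

# Sanity check on a binary symmetric channel with crossover probability p,
# where R0 has the closed form 1 - log2(1 + 2*sqrt(p*(1 - p))).
p = 0.1
bsc = np.array([[1 - p, p], [p, 1 - p]])
r0 = symmetric_cutoff_rate(bsc)
```

A noiseless channel (identity matrix) gives R0 = 1 bit per use, and R0 falls toward 0 as the crossover probability approaches 1/2.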
Liu, Shan; Brandeau, Margaret L; Goldhaber-Fiebert, Jeremy D
2017-03-01
How long should a patient with a treatable chronic disease wait for more effective treatments before accepting the best available treatment? We develop a framework to guide optimal treatment decisions for a deteriorating chronic disease when treatment technologies are improving over time. We formulate an optimal stopping problem using a discrete-time, finite-horizon Markov decision process. The goal is to maximize a patient's quality-adjusted life expectancy. We derive structural properties of the model and analytically solve a three-period treatment decision problem. We illustrate the model with the example of treatment for chronic hepatitis C virus (HCV). Chronic HCV affects 3-4 million Americans and has been historically difficult to treat, but increasingly effective treatments have been commercialized in the past few years. We show that the optimal treatment decision is more likely to be to accept currently available treatment-despite expectations for future treatment improvement-for patients who have high-risk history, who are older, or who have more comorbidities. Insights from this study can guide HCV treatment decisions for individual patients. More broadly, our model can guide treatment decisions for curable chronic diseases by finding the optimal treatment policy for individual patients in a heterogeneous population.
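The finite-horizon "treat now or wait" structure described in this abstract can be illustrated with a toy backward-induction sketch. All parameters below (arrival probability of an improved therapy, QALE values, waiting cost) are invented for illustration and do not come from the paper:

```python
# Toy backward induction for a three-period "treat now vs. wait" problem.
# All numbers are illustrative assumptions, not values from the paper.
P_IMPROVE = 0.3                  # per-period chance an improved therapy arrives
QALE_NOW = [10.0, 9.0, 8.0]      # QALE from treating in period t (disease deteriorates)
QALE_NEW = [14.0, 12.5, 11.0]    # QALE if the improved therapy is available in period t
WAIT_COST = 0.5                  # QALE lost per untreated period

def solve():
    """Return (optimal expected QALE at t=0, optimal action per period)."""
    T = 3
    V = [0.0] * (T + 1)          # V[t]: value of acting optimally from t on (no new therapy yet)
    policy = [""] * T
    for t in reversed(range(T)):
        treat = QALE_NOW[t]
        if t == T - 1:
            wait = float("-inf")  # must accept the best available treatment at the horizon
        else:
            wait = (-WAIT_COST
                    + P_IMPROVE * QALE_NEW[t + 1]     # improved therapy arrives next period
                    + (1 - P_IMPROVE) * V[t + 1])     # otherwise keep deciding optimally
        V[t] = max(treat, wait)
        policy[t] = "treat" if treat >= wait else "wait"
    return V[0], policy

value, policy = solve()
```

With these deteriorating QALE values the sketch recommends treating immediately in every period, mirroring the paper's finding that patients with faster-deteriorating health should accept currently available treatment despite expected improvements.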
NASA Astrophysics Data System (ADS)
Subagadis, Y. H.; Schütze, N.; Grundmann, J.
2014-09-01
The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated, as they normally incorporate only homogeneous information at a time and aggregate the objectives of different decision-makers while neglecting water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach has been proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.
Marine Steam Condenser Design Optimization.
1983-12-01
to make design decisions to obtain a feasible design. CONMIN, as do most optimizers, requires complete control in determining all iterative design...neutralize all the places where such design decisions are made. By removing the ability for CONDIP to make any design decisions it became totally passive...dependent on CONMIN for design decisions, does not have that capability. Remembering that CONMIN requires a complete once-through analysis in order to
Decision Through Optimism: The North Peruvian Pipeline.
1987-05-01
corporations. Another factor, optimism, is more intangible, but influenced the decision strongly. This paper discusses the need for, construction of, and results of building the Northern Peru Oil Pipeline...decision, the construction effort, and financing to accomplish this endeavor. Finally, it notes Peru's oil situation after completion of the pipeline and...The paper reviews the
Biased and less sensitive: A gamified approach to delay discounting in heroin addiction.
Scherbaum, Stefan; Haber, Paul; Morley, Kirsten; Underhill, Dylan; Moustafa, Ahmed A
2018-03-01
People with addiction will continue to use drugs despite adverse long-term consequences. We hypothesized (a) that this deficit persists during substitution treatment, and (b) that this deficit might be related not only to a desire for immediate gratification, but also to a lower sensitivity for optimal decision making. We investigated how individuals with a history of heroin addiction perform (compared to healthy controls) in a virtual reality delay discounting task. This novel task adds to established measures of delay discounting an assessment of the optimality of decisions, especially in how far decisions are influenced by a general choice bias and/or a reduced sensitivity to the relative value of the two alternative rewards. We used this measure of optimality to apply diffusion model analysis to the behavioral data to analyze the interaction between decision optimality and reaction time. The addiction group consisted of 25 patients with a history of heroin dependency currently participating in a methadone maintenance program; the control group consisted of 25 healthy participants with no history of substance abuse, who were recruited from the Western Sydney community. The patient group demonstrated greater levels of delay discounting compared to the control group, which is broadly in line with previous observations. Diffusion model analysis yielded a reduced sensitivity for the optimality of a decision in the patient group compared to the control group. This reduced sensitivity was reflected in lower rates of information accumulation and higher decision criteria. Increased discounting in individuals with heroin addiction is related not only to a generally increased bias to immediate gratification, but also to reduced sensitivity for the optimality of a decision. This finding is in line with other findings about the sensitivity of addicts in distinguishing optimal from nonoptimal choice options.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Wei; Reddy, T. A.; Gurian, Patrick
2007-01-31
A companion paper to Jiang and Reddy, presenting a general and computationally efficient methodology for dynamic scheduling and optimal control of complex primary HVAC&R plants using a deterministic engineering optimization approach.
Leveraging human decision making through the optimal management of centralized resources
NASA Astrophysics Data System (ADS)
Hyden, Paul; McGrath, Richard G.
2016-05-01
Combining results from mixed integer optimization, stochastic modeling and queuing theory, we will advance the interdisciplinary problem of efficiently and effectively allocating centrally managed resources. Academia currently fails to address this, as the esoteric demands of each of these large research areas limit work across traditional boundaries. The commercial space does not currently address these challenges due to the absence of a profit metric. By constructing algorithms that explicitly use inputs across boundaries, we are able to incorporate the advantages of using human decision makers. Key improvements in the underlying algorithms are made possible by aligning decision maker goals with the feedback loops introduced between the core optimization step and the modeling of the overall stochastic process of supply and demand. A key observation is that human decision-makers must be explicitly included in the analysis for these approaches to be ultimately successful. Transformative access gives warfighters and mission owners greater understanding of global needs and allows for relationships to guide optimal resource allocation decisions. Mastery of demand processes and optimization bottlenecks reveals long-term maximum marginal utility gaps in capabilities.
NASA Astrophysics Data System (ADS)
Kaune, Alexander; López, Patricia; Werner, Micha; de Fraiture, Charlotte
2017-04-01
Hydrological information on water availability and demand is vital for sound water allocation decisions in irrigation districts, particularly in times of water scarcity. However, sub-optimal water allocation decisions are often taken with incomplete hydrological information, which may lead to agricultural production loss. In this study we evaluate the benefit of additional hydrological information from earth observations and reanalysis data in supporting decisions in irrigation districts. Current water allocation decisions were emulated through heuristic operational rules for water scarce and water abundant conditions in the selected irrigation districts. The Dynamic Water Balance Model based on the Budyko framework was forced with precipitation datasets from interpolated ground measurements, remote sensing and reanalysis data, to determine the water availability for irrigation. Irrigation demands were estimated based on estimates of potential evapotranspiration and crop coefficients for the crops grown, adjusted with the interpolated precipitation data. Decisions made using both current and additional hydrological information were evaluated through the rate at which sub-optimal decisions were made. The decisions made using an amended set of decision rules that benefit from additional information on demand in the districts were also evaluated. Results show that sub-optimal decisions can be reduced in the planning phase through improved estimates of water availability. Where there are reliable observations of water availability through gauging stations, the benefit of the improved precipitation data is found in the improved estimates of demand, equally leading to a reduction of sub-optimal decisions.
Remediation of Chlorinated Solvent Plumes Using In-Situ Air Sparging—A 2-D Laboratory Study
Adams, Jeffrey A.; Reddy, Krishna R.; Tekola, Lue
2011-01-01
In-situ air sparging has evolved as an innovative technique for soil and groundwater remediation impacted with volatile organic compounds (VOCs), including chlorinated solvents. These may exist as non-aqueous phase liquid (NAPL) or dissolved in groundwater. This study assessed: (1) how air injection rate affects the mass removal of dissolved phase contamination, (2) the effect of induced groundwater flow on mass removal and air distribution during air injection, and (3) the effect of initial contaminant concentration on mass removal. Dissolved-phase chlorinated solvents can be effectively removed through the use of air sparging; however, rapid initial rates of contaminant removal are followed by a protracted period of lower removal rates, or a tailing effect. As the air flow rate increases, the rate of contaminant removal also increases, especially during the initial stages of air injection. Increased air injection rates will increase the density of air channel formation, resulting in a larger interfacial mass transfer area through which the dissolved contaminant can partition into the vapor phase. In cases of groundwater flow, increased rates of air injection lessened observed downward contaminant migration effect. The air channel network and increased air saturation reduced relative hydraulic conductivity, resulting in reduced groundwater flow and subsequent downgradient contaminant migration. Finally, when a higher initial TCE concentration was present, a slightly higher mass removal rate was observed due to higher volatilization-induced concentration gradients and subsequent diffusive flux. Once concentrations are reduced, a similar tailing effect occurs. PMID:21776228
Hou, Meifang; Chu, Yaofei; Li, Xiang; Wang, Huijiao; Yao, Weikun; Yu, Gang; Murayama, Seiichi; Wang, Yujue
2016-12-05
This study compares the degradation of diethyl phthalate (DEP) by the electro-peroxone (E-peroxone) process with three different carbon-based cathodes, namely, carbon-polytetrafluorethylene (carbon-PTFE), carbon felt, and reticulated vitreous carbon (RVC). Results show that the three cathodes had different electrocatalytic activity for converting sparged O2 to H2O2, which increased in order of carbon felt, RVC, and carbon-PTFE. The in-situ generated H2O2 then reacts with sparged O3 to yield hydroxyl radicals (•OH), which can in turn oxidize ozone-refractory DEP toward complete mineralization. In general, satisfactory total organic carbon removal yields (76.4-91.8%) could be obtained after 60 min of the E-peroxone treatment with the three carbon-based cathodes, and the highest yield was obtained with the carbon-PTFE cathode due to its highest activity for H2O2 generation. In addition, the carbon-PTFE and carbon felt cathodes exhibited excellent stability over six cycles of the E-peroxone treatment of DEP solutions. Based on the intermediates (e.g., monoethyl phthalate, phthalic acid, phenolics, and carboxylic acids) identified by HPLC-UV, plausible reaction pathways were proposed for DEP mineralization by the E-peroxone process. The results of this study indicate that carbon-based cathodes generally have good electrocatalytic activity and stability for application in extended E-peroxone operations to effectively remove phthalates from water. Copyright © 2015 Elsevier B.V. All rights reserved.
A Decision Support System for Solving Multiple Criteria Optimization Problems
ERIC Educational Resources Information Center
Filatovas, Ernestas; Kurasova, Olga
2011-01-01
In this paper, multiple criteria optimization has been investigated. A new decision support system (DSS) has been developed for interactive solving of multiple criteria optimization problems (MOPs). The weighted-sum (WS) approach is implemented to solve the MOPs. The MOPs are solved by selecting different weight coefficient values for the criteria…
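The weighted-sum (WS) scalarization this DSS implements can be sketched in a few lines: each choice of weight coefficients turns the multiple criteria into a single objective whose minimizer is one Pareto-optimal solution. The two quadratic objectives below are illustrative only, not from the paper:

```python
import numpy as np

# Two illustrative convex criteria over one decision variable x.
def f1(x):
    return (x - 1.0) ** 2

def f2(x):
    return (x + 1.0) ** 2

XS = np.linspace(-2.0, 2.0, 4001)   # decision grid

def weighted_sum_argmin(w):
    """Minimize the scalarized objective w*f1 + (1-w)*f2 over the grid."""
    return XS[np.argmin(w * f1(XS) + (1.0 - w) * f2(XS))]

# Sweeping the weight coefficient traces out Pareto-optimal solutions; for
# these quadratics the analytic minimizer is x* = 2w - 1.
pareto = [(w, weighted_sum_argmin(w)) for w in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

Interactively adjusting the weights, as the DSS described above allows, corresponds to walking along this Pareto set.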
Incentives for Optimal Multi-level Allocation of HIV Prevention Resources
Malvankar, Monali M.; Zaric, Gregory S.
2013-01-01
HIV/AIDS prevention funds are often allocated at multiple levels of decision-making. Optimal allocation of HIV prevention funds maximizes the number of HIV infections averted. However, decision makers often allocate using simple heuristics such as proportional allocation. We evaluate the impact of using incentives to encourage optimal allocation in a two-level decision-making process. We model an incentive based decision-making process consisting of an upper-level decision maker allocating funds to a single lower-level decision maker who then distributes funds to local programs. We assume that the lower-level utility function is linear in the amount of the budget received from the upper-level, the fraction of funds reserved for proportional allocation, and the number of infections averted. We assume that the upper level objective is to maximize the number of infections averted. We illustrate with an example using data from California, U.S. PMID:23766551
NASA Astrophysics Data System (ADS)
Song, Yanpo; Peng, Xiaoqi; Tang, Ying; Hu, Zhikun
2013-07-01
To improve the operation level of copper converters, an approach to optimal decision making modeling for the copper matte converting process based on data mining is studied: in view of the characteristics of the process data, such as noise contamination and small sample size, a new robust improved ANN (artificial neural network) modeling method is proposed; taking into account the application purpose of the decision making model, three new evaluation indexes named support, confidence and relative confidence are proposed; using real production data and the methods mentioned above, an optimal decision making model for the blowing time of the S1 period (the 1st slag producing period) is developed. Simulation results show that this model can significantly improve the converting quality of the S1 period, increasing the optimal probability from about 70% to about 85%.
Do the right thing: the assumption of optimality in lay decision theory and causal judgment.
Johnson, Samuel G B; Rips, Lance J
2015-03-01
Human decision-making is often characterized as irrational and suboptimal. Here we ask whether people nonetheless assume optimal choices from other decision-makers: Are people intuitive classical economists? In seven experiments, we show that an agent's perceived optimality in choice affects attributions of responsibility and causation for the outcomes of their actions. We use this paradigm to examine several issues in lay decision theory, including how responsibility judgments depend on the efficacy of the agent's actual and counterfactual choices (Experiments 1-3), individual differences in responsibility assignment strategies (Experiment 4), and how people conceptualize decisions involving trade-offs among multiple goals (Experiments 5-6). We also find similar results using everyday decision problems (Experiment 7). Taken together, these experiments show that attributions of responsibility depend not only on what decision-makers do, but also on the quality of the options they choose not to take. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D.
2009-01-01
The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response…
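The speed-accuracy tradeoff that the DDM's decision threshold controls can be reproduced with a minimal simulation. Parameters here are illustrative, not the authors' fitted values:

```python
import random

def ddm_trial(drift=0.2, threshold=1.0, dt=0.01, noise=1.0, rng=random):
    """One drift-diffusion trial: integrate noisy evidence until the
    accumulator |x| reaches the threshold. Returns (correct, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return x > 0, t

def speed_accuracy(threshold, n=2000, seed=1):
    """Accuracy and mean RT across n simulated trials at a given threshold."""
    rng = random.Random(seed)
    trials = [ddm_trial(threshold=threshold, rng=rng) for _ in range(n)]
    accuracy = sum(correct for correct, _ in trials) / n
    mean_rt = sum(rt for _, rt in trials) / n
    return accuracy, mean_rt

low = speed_accuracy(threshold=0.5)    # fast but error-prone
high = speed_accuracy(threshold=1.5)   # slower but more accurate
```

Raising the threshold buys accuracy at the cost of longer reaction times, which is the ubiquitous SAT the abstract refers to.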
A modeling framework for optimal long-term care insurance purchase decisions in retirement planning.
Gupta, Aparna; Li, Lepeng
2004-05-01
The level of need and costs of obtaining long-term care (LTC) during retired life require that planning for it is an integral part of retirement planning. In this paper, we divide retirement planning into two phases, pre-retirement and post-retirement. On the basis of four interrelated models for health evolution, wealth evolution, LTC insurance premium and coverage, and LTC cost structure, a framework for optimal LTC insurance purchase decisions in the pre-retirement phase is developed. Optimal decisions are obtained by developing a trade-off between post-retirement LTC costs and LTC insurance premiums and coverage. Two-way branching models are used to model stochastic health events and asset returns. The resulting optimization problem is formulated as a dynamic programming problem. We compare the optimal decision under two insurance purchase scenarios: one assumes that insurance is purchased for good, and the other assumes it may be purchased, relinquished and re-purchased. Sensitivity analysis is performed for the retirement age.
Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.
Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E
2015-10-01
Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.
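Under a simple Gaussian sensory model, the Bayes optimal confidence report described above has a closed form: the posterior probability that the sign decision is correct given the sensory sample. The parameters below are illustrative assumptions, not the authors' task values:

```python
import math

MU, SIGMA = 1.0, 1.0   # illustrative signal strength and sensory noise (assumptions)

def bayes_confidence(x):
    """Posterior probability that the decision sign(x) is correct, given
    observation x ~ N(s, SIGMA^2) with s = +MU or -MU and equal priors."""
    return 1.0 / (1.0 + math.exp(-2.0 * MU * abs(x) / SIGMA ** 2))

def heuristic_confidence(x, scale=0.25):
    """A magnitude heuristic: monotone in |x| but not probability-calibrated."""
    return min(1.0, 0.5 + scale * abs(x))
```

The Bayes optimal report is exactly 0.5 at x = 0 (a pure guess) and rises toward 1 with stronger evidence; the magnitude heuristic tracks |x| but is not calibrated to the true probability of being correct, which is the kind of distinction the model comparison in the paper targets.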
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
Hypergol Maintenance Facility Hazardous Waste South Staging Areas, SWMU 070
NASA Technical Reports Server (NTRS)
Wilson, Deborah M.; Miller, Ralinda R.
2015-01-01
The purpose of this CMI Year 9 AGWMR is to present the actions taken and results obtained during the ninth year of implementation of Corrective Measures (CM) at HMF. Groundwater monitoring activities were conducted in accordance with the CMI Work Plan (Tetra Tech, 2005a) and CMI Site-Specific Safety and Health Plan (Tetra Tech, 2005b). Groundwater monitoring activities detailed in this Year 9 report include pre-startup sampling in February 2014 (prior to restarting the air sparging system) and quarterly performance monitoring in March, July, and September 2014.
Can Subjects Be Guided to Optimal Decisions? The Use of a Real-Time Training Intervention Model
2016-06-01
execution of the task and may then be analyzed to determine if there is correlation between designated factors (scores, proportion of time in each...state with their decision performance in real time could allow training systems to be designed to tailor training to the individual decision maker...
Henshall, Chris; Schuller, Tara; Mardhani-Bayne, Logan
2012-07-01
Health systems face rising patient expectations and economic pressures; decision makers seek to enhance efficiency to improve access to appropriate care. There is international interest in the role of HTA to support decisions to optimize use of established technologies, particularly in "disinvesting" from low-benefit uses. This study summarizes main points from an HTAi Policy Forum meeting on this topic, drawing on presentations, discussions among attendees, and an advance background paper. Optimization involves assessment or re-assessment of a technology, a decision on optimal use, and decision implementation. This may occur within a routine process to improve safety and quality and create "headroom" for new technologies, or ad hoc in response to financial constraints. The term "disinvestment" is not always helpful in describing these processes. HTA contributes to optimization, but there is scope to increase its role in many systems. Stakeholders may have strong views on access to technology, and stakeholder involvement is essential. Optimization faces challenges including loss aversion and entitlement, stakeholder inertia and entrenchment, heterogeneity in patient outcomes, and the need to demonstrate convincingly absence of benefit. While basic HTA principles remain applicable, methodological developments are needed better to support optimization. These include mechanisms for candidate technology identification and prioritization, enhanced collection and analysis of routine data, and clinician engagement. To maximize value to decision makers, HTA should consider implementation strategies and barriers. Improving optimization processes calls for a coordinated approach, and actions are identified for system leaders, HTA and other health organizations, and industry.
ERIC Educational Resources Information Center
Vos, Hans J.
An approach to simultaneous optimization of assignments of subjects to treatments followed by an end-of-mastery test is presented using the framework of Bayesian decision theory. Focus is on demonstrating how rules for the simultaneous optimization of sequences of decisions can be found. The main advantages of the simultaneous approach, compared…
Investigation of effective decision criteria for multiobjective optimization in IMRT.
Holdsworth, Clay; Stewart, Robert D; Kim, Minsun; Liao, Jay; Phillips, Mark H
2011-06-01
To investigate how using different sets of decision criteria impacts the quality of intensity modulated radiation therapy (IMRT) plans obtained by multiobjective optimization. A multiobjective optimization evolutionary algorithm (MOEA) was used to produce sets of IMRT plans. The MOEA consisted of two interacting algorithms: (i) a deterministic inverse planning optimization of beamlet intensities that minimizes a weighted sum of quadratic penalty objectives to generate IMRT plans and (ii) an evolutionary algorithm that selects the superior IMRT plans using decision criteria and uses those plans to determine the new weights and penalty objectives of each new plan. Plans resulting from the deterministic algorithm were evaluated by the evolutionary algorithm using a set of decision criteria for both targets and organs at risk (OARs). Decision criteria used included variation in the target dose distribution, mean dose, maximum dose, generalized equivalent uniform dose (gEUD), an equivalent uniform dose formula (EUD(alpha,beta)) derived from the linear-quadratic survival model, and points on dose volume histograms (DVHs). In order to quantitatively compare results from trials using different decision criteria, a neutral set of comparison metrics was used. For each set of decision criteria investigated, IMRT plans were calculated for four different cases: two simple prostate cases, one complex prostate case, and one complex head and neck case. When smaller numbers of decision criteria, more descriptive decision criteria, or less anti-correlated decision criteria were used to characterize plan quality during multiobjective optimization, dose to OARs and target dose variation were reduced in the final population of plans. Mean OAR dose and gEUD (a = 4) decision criteria were comparable. Using maximum dose decision criteria for OARs near targets resulted in inferior populations that focused solely on low target variance at the expense of high OAR dose. 
Target dose range (D(max) - D(min)) decision criteria were found to be most effective for keeping targets uniform. Using target gEUD decision criteria resulted in much lower OAR doses but much higher target dose variation. EUD(alpha,beta)-based decision criteria focused on a region of plan space that was a compromise between target and OAR objectives. None of these target decision criteria dominated plans using other criteria; each focused on approaching a different area of the Pareto front. The choice of decision criteria implemented in the MOEA had a significant impact on the region explored and the rate of convergence toward the Pareto front. When more decision criteria, anti-correlated decision criteria, or decision criteria with insufficient information were implemented, inferior populations resulted. When more informative decision criteria were used, such as gEUD, EUD(alpha,beta), target dose range, and mean dose, MOEA optimizations focused on approaching different regions of the Pareto front, but did not dominate each other. Using simple OAR decision criteria and target EUD(alpha,beta) decision criteria demonstrated the potential to generate IMRT plans that significantly reduce dose to OARs while achieving the same or better tumor control when clinical requirements on target dose variance can be met or relaxed.
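The gEUD criterion used in these optimizations has a standard closed form, gEUD = (mean of d_i^a)^(1/a), which reduces to mean dose at a = 1 and approaches maximum dose as a grows. A minimal sketch with invented voxel doses:

```python
import numpy as np

def gEUD(doses, a):
    """Generalized equivalent uniform dose of an (equal-volume) voxel dose
    array: (mean(d_i**a))**(1/a). a = 1 gives the mean dose; large positive
    a approaches the maximum dose, emphasizing hot spots."""
    d = np.asarray(doses, dtype=float)
    return float(np.mean(d ** a) ** (1.0 / a))

voxel_doses = np.array([10.0, 20.0, 30.0, 60.0])   # illustrative OAR doses (Gy)
mean_like = gEUD(voxel_doses, 1)    # equals the mean dose, 30.0
hot_spot = gEUD(voxel_doses, 20)    # pulled toward the 60.0 Gy hot spot
```

This single-parameter family is why the abstract can compare mean-dose criteria with gEUD (a = 4): low a behaves like a mean-dose objective, while high a behaves like a maximum-dose objective.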
The option value of delay in health technology assessment.
Eckermann, Simon; Willan, Andrew R
2008-01-01
Processes of health technology assessment (HTA) inform decisions under uncertainty about whether to invest in new technologies based on evidence of incremental effects, incremental cost, and incremental net monetary benefit (INMB). An option value to delaying such decisions to wait for further evidence is suggested in the usual case of interest, in which the prior distribution of INMB is positive but uncertain. Methods of estimating the option value of delaying decisions to invest have previously been developed for investments that are irreversible with an uncertain payoff over time, where information is assumed fixed. However, in HTA decision uncertainty relates to information (evidence) on the distribution of INMB. This article demonstrates that the option value of delaying decisions to allow collection of further evidence can be estimated as the expected value of sample information (EVSI). For irreversible decisions, delay and trial (DT) is demonstrated to be preferred to adopt and no trial (AN) when the EVSI exceeds expected costs of information, including expected opportunity costs of not treating patients with the new therapy. For reversible decisions, adopt and trial (AT) becomes a potentially optimal strategy, but costs of reversal are shown to reduce the EVSI of this strategy due to both a lower probability of reversal being optimal and lower payoffs when reversal is optimal. Hence, decision makers are generally shown to face joint research and reimbursement decisions (AN, DT and AT), with the optimal choice dependent on costs of reversal as well as opportunity costs of delay and the distribution of prior INMB.
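The EVSI quantity at the heart of this argument can be sketched with a conjugate normal model and Monte Carlo preposterior analysis. All priors and trial parameters below are hypothetical, chosen only to illustrate the mechanics, not taken from the article:

```python
import random

# Hypothetical priors and trial design (illustrative assumptions only).
M0, S0 = 500.0, 1000.0    # prior mean and sd of the mean INMB per patient
SD_IND = 4000.0           # individual-level sd of net monetary benefit
N_TRIAL = 200             # proposed trial sample size

def evsi(n_sims=200_000, seed=7):
    """Monte Carlo preposterior estimate of the per-patient EVSI."""
    rng = random.Random(seed)
    se2 = SD_IND ** 2 / N_TRIAL                     # variance of the trial mean
    post_var = 1.0 / (1.0 / S0 ** 2 + 1.0 / se2)    # conjugate normal update
    total = 0.0
    for _ in range(n_sims):
        mu = rng.gauss(M0, S0)                      # draw a candidate true mean INMB
        xbar = rng.gauss(mu, se2 ** 0.5)            # simulate the trial estimate
        post_mean = post_var * (M0 / S0 ** 2 + xbar / se2)
        total += max(0.0, post_mean)                # optimal act after seeing the data
    value_with_info = total / n_sims
    value_now = max(0.0, M0)                        # adopt now, since prior INMB > 0
    return value_with_info - value_now
```

In the article's framing, delay and trial is preferred only when this EVSI, scaled over the affected population, exceeds the expected costs of information, including the opportunity cost of not treating patients during the delay.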
ERIC Educational Resources Information Center
Vos, Hans J.
As part of a project formulating optimal rules for decision making in computer assisted instructional systems in which the computer is used as a decision support tool, an approach that simultaneously optimizes classification of students into two treatments, each followed by a mastery decision, is presented using the framework of Bayesian decision…
Decision Aids for Naval Air ASW
1980-03-15
AZOI (Algorithm for Zone Optimization Investigation), developed at the Naval Air Development Center (NADC) for developing sonobuoy patterns for air ASW search; DAISY (Decision Aiding Information System), Wharton, for modeling decision making behavior; an artificial-intelligence sequential pattern recognition algorithm for reconstructing the decision maker's utility functions; a display presenting the uncertainty area of the target.
Issue a Boil-Water Advisory or Wait for Definitive Information? A Decision Analysis
Wagner, Michael M.; Wallstrom, Garrick L.; Onisko, Agnieszka
2005-01-01
Objective: Study the decision to issue a boil-water advisory in response to a spike in sales of diarrhea remedies or wait 72 hours for the results of definitive testing of water and people. Methods: Decision analysis. Results: In the base-case analysis, the optimal decision is test-and-wait. If the cost of issuing a boil-water advisory is less than 13.92 cents per person per day, the optimal decision is to issue the boil-water advisory immediately. Conclusions: Decisions based on surveillance data that are suggestive but not conclusive about the existence of a disease outbreak can be modeled. PMID:16779145
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from Gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two and multiple alternative choice tasks.
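A minimal sketch of the optimal procedure the authors build on: a sequential probability ratio test on Poisson spike counts from two sensory populations. For Poisson counts the log-likelihood ratio per step reduces to (n0 - n1)·log(r_high/r_low), since the rate-dependent normalizing terms cancel. All rates, time steps, and thresholds below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rates (Hz): the population tuned to the true alternative
# fires at r_high, the other at r_low. Values are illustrative only.
r_high, r_low, dt = 40.0, 20.0, 0.01   # firing rates and time step (s)
threshold = 3.0                         # log-likelihood ratio bound

def sprt_poisson(true_choice=0, max_steps=10_000):
    """Sequential probability ratio test on Poisson spike counts.

    H0: population 0 is driven at r_high (alternative 0 is correct);
    H1: population 1 is. Each step adds the log-likelihood ratio of
    the two observed spike counts.
    """
    rates = (r_high, r_low) if true_choice == 0 else (r_low, r_high)
    llr = 0.0
    for step in range(max_steps):
        n0 = rng.poisson(rates[0] * dt)   # spikes from population 0
        n1 = rng.poisson(rates[1] * dt)   # spikes from population 1
        # Poisson LLR: the exp(-r*dt) terms cancel, leaving a count difference.
        llr += (n0 - n1) * np.log(r_high / r_low)
        if llr >= threshold:
            return 0, (step + 1) * dt     # decide alternative 0
        if llr <= -threshold:
            return 1, (step + 1) * dt     # decide alternative 1
    return None, max_steps * dt           # no decision within the deadline

choices = [sprt_poisson(true_choice=0)[0] for _ in range(200)]
accuracy = np.mean([c == 0 for c in choices])
print(f"accuracy on 200 simulated trials: {accuracy:.2f}")
```

With symmetric bounds at ±3, SPRT theory predicts an error rate of roughly 1/(1 + e³) ≈ 5%, which the simulation should approximate.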
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures, which often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The NSGA-II ability to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. An outranking with PROMETHEE II then helps the decision-maker finalize the selection of a best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537
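Non-dominated sorting, the core operation NSGA-II builds on, can be sketched in a few lines. The bi-objective population below is invented toy data (minimization of both objectives assumed); full NSGA-II additionally ranks later fronts and applies crowding-distance selection.

```python
def dominates(a, b):
    """True if solution a dominates b (minimizing every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Return the non-dominated (Pareto) front of a population."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy bi-objective population, e.g. (cost, emissions) - illustrative only.
pop = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5), (9, 1)]
print(first_front(pop))  # → [(1, 9), (2, 7), (4, 4), (6, 3), (9, 1)]
```

The front returned is exactly the set of trade-off solutions NSGA-II hands to the decision-maker; an outranking method such as PROMETHEE II then selects a single compromise from it.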
NASA Astrophysics Data System (ADS)
Clemens, Joshua William
Game theory has application across multiple fields, spanning from economic strategy to optimal control of an aircraft and missile on an intercept trajectory. The idea of game theory is fascinating in that we can actually mathematically model real-world scenarios and determine optimal decision making. It may not always be easy to mathematically model certain real-world scenarios, nonetheless, game theory gives us an appreciation for the complexity involved in decision making. This complexity is especially apparent when the players involved have access to different information upon which to base their decision making (a nonclassical information pattern). Here we will focus on the class of adversarial two-player games (sometimes referred to as pursuit-evasion games) with nonclassical information pattern. We present a two-sided (simultaneous) optimization solution method for the two-player linear quadratic Gaussian (LQG) multistage game. This direct solution method allows for further interpretation of each player's decision making (strategy) as compared to previously used formal solution methods. In addition to the optimal control strategies, we present a saddle point proof and we derive an expression for the optimal performance index value. We provide some numerical results in order to further interpret the optimal control strategies and to highlight real-world application of this game-theoretic optimal solution.
Watershed Management Optimization Support Tool v3
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that facilitates integrated water management at the local or small watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed context that is, accou...
A simulation-optimization-based decision support tool for mitigating traffic congestion.
DOT National Transportation Integrated Search
2009-12-01
"Traffic congestion has grown considerably in the United States over the past twenty years. In this paper, we develop : a robust decision support tool based on simulation optimization to evaluate and recommend congestion-mitigation : strategies to tr...
An optimal repartitioning decision policy
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Reynolds, P. F., Jr.
1986-01-01
A central problem in parallel processing is the determination of an effective partitioning of workload to processors. The effectiveness of any given partition is dependent on the stochastic nature of the workload. The problem of determining when and if the stochastic behavior of the workload has changed enough to warrant the calculation of a new partition is treated. The problem is modeled as a Markov decision process, and an optimal decision policy is derived. Quantification of this policy is usually intractable. A heuristic policy which performs nearly optimally is investigated empirically. The results suggest that the detection of change is the predominant issue in this problem.
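The Markov-decision-process framing can be made concrete with a small value iteration. The two-state workload model and all costs below are hypothetical illustrations, not the paper's quantities, and the paper's near-optimal heuristic is not reproduced here.

```python
import numpy as np

# Toy MDP for the repartitioning decision: state 0 = current partition
# still matches the workload, state 1 = the workload has drifted.
P = np.array([[0.9, 0.1],      # per-step probability of drifting
              [0.0, 1.0]])     # once drifted, the mismatch persists
step_cost = np.array([1.0, 3.0])   # per-step runtime cost in each state
repart_cost = 4.0                  # one-off cost of computing a new partition
gamma = 0.95                       # discount factor

# Value iteration over the actions {keep, repartition}; repartitioning
# pays the one-off cost and resets the system to the matched state 0.
V = np.zeros(2)
for _ in range(1000):
    keep = step_cost + gamma * (P @ V)
    repart = repart_cost + step_cost[0] + gamma * (P[0] @ V)
    V = np.minimum(keep, repart)

keep = step_cost + gamma * (P @ V)
repart = repart_cost + step_cost[0] + gamma * (P[0] @ V)
policy = {s: ("keep" if keep[i] <= repart else "repartition")
          for i, s in enumerate(["matched", "drifted"])}
print(policy)   # repartition only once the workload has drifted
```

The optimal policy in this toy model repartitions exactly when drift is detected, which mirrors the paper's conclusion that detecting the change is the predominant issue.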
Two retailer-supplier supply chain models with default risk under trade credit policy.
Wu, Chengfeng; Zhao, Qiuhong
2016-01-01
The purpose of the paper is to formulate two uncooperative replenishment models in which demand and default risk are functions of the trade credit period, i.e., a Nash equilibrium model and a supplier-Stackelberg model. Firstly, we present the optimal results of decentralized decision and centralized decision without trade credit. Secondly, we derive the existence and uniqueness conditions of the optimal solutions under the two games, respectively. Moreover, we present a set of theorems and a corollary to determine the optimal solutions. Finally, we provide an example and sensitivity analysis to illustrate the proposed strategy and optimal solutions. Sensitivity analysis reveals that the total profits of the supply chain under the two games are both better than the results under the centralized decision only if the optimal trade credit period isn't too short. It also reveals that the size of the trade credit period, demand, retailer's profit, and supplier's profit have a strong relationship with the increasing demand coefficient, wholesale price, default risk coefficient, and production cost. The major contribution of the paper is a comprehensive comparison between the results of decentralized decision and centralized decision without trade credit and the Nash equilibrium and supplier-Stackelberg models with trade credit, yielding some interesting managerial insights and practical implications.
Veneziano, D.; Agarwal, A.; Karaca, E.
2009-01-01
The problem of accounting for epistemic uncertainty in risk management decisions is conceptually straightforward, but is riddled with practical difficulties. Simple approximations are often used whereby future variations in epistemic uncertainty are ignored or worst-case scenarios are postulated. These strategies tend to produce sub-optimal decisions. We develop a general framework based on Bayesian decision theory and exemplify it for the case of seismic design of buildings. When temporal fluctuations of the epistemic uncertainties and regulatory safety constraints are included, the optimal level of seismic protection exceeds the normative level at the time of construction. Optimal Bayesian decisions do not depend on the aleatory or epistemic nature of the uncertainties, but only on the total (epistemic plus aleatory) uncertainty and how that total uncertainty varies randomly during the lifetime of the project. © 2009 Elsevier Ltd. All rights reserved.
Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise
NASA Astrophysics Data System (ADS)
Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej
2010-11-01
Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful application demonstrate the usefulness of weak anticipative information. Job shop scheduling with a makespan criterion is presented for a real case of customized flexible furniture production optimization, and the genetic algorithm for job shop scheduling optimization is described. Simulation-based inventory control describes inventory optimization for products with stochastic lead time and demand, where dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. All three cases are discussed from the optimization, modeling, and learning points of view.
Removal of pharmaceuticals from secondary effluents by an electro-peroxone process.
Yao, Weikun; Wang, Xiaofeng; Yang, Hongwei; Yu, Gang; Deng, Shubo; Huang, Jun; Wang, Bin; Wang, Yujue
2016-01-01
This study compared the removal of pharmaceuticals from secondary effluents of wastewater treatment plants (WWTPs) by conventional ozonation and the electro-peroxone (E-peroxone) process, which involves electrochemically generating H2O2 in situ from O2 in the sparged O2 and O3 gas mixture (i.e., ozone generator effluent) during ozonation. Several pharmaceuticals with kO3 ranging from <0.1 to 6.8 × 10(5) M(-1) s(-1) were spiked into four secondary effluents collected from different WWTPs and then treated by ozonation and the E-peroxone process. Results show that both processes can rapidly remove ozone-reactive pharmaceuticals (diclofenac and gemfibrozil), while the E-peroxone process can considerably accelerate the removal of ozone-refractory pharmaceuticals (e.g., ibuprofen and clofibric acid) via indirect oxidation with OH radicals generated from the reaction of sparged O3 with electro-generated H2O2. Compared with ozonation, the E-peroxone process enhanced the removal kinetics of ozone-refractory pharmaceuticals in the four secondary effluents by ∼40-170%, and the enhancement was more pronounced in secondary effluents that had relatively lower effluent organic matter (EfOM). Due to its higher efficiency for removing ozone-refractory pharmaceuticals, the E-peroxone process reduced the reaction time and electrical energy consumption required to remove ≥90% of all spiked pharmaceuticals from the secondary effluents as compared to ozonation. These results indicate that the E-peroxone process may provide a simple and effective way to improve existing ozonation systems for pharmaceutical removal from secondary effluents. Copyright © 2015 Elsevier Ltd. All rights reserved.
Characterizing the biotransformation of sulfur-containing wastes in simulated landfill reactors.
Sun, Wenjie; Sun, Mei; Barlaz, Morton A
2016-07-01
Landfills that accept municipal solid waste (MSW) in the U.S. may also accept a number of sulfur-containing wastes including residues from coal or MSW combustion, and construction and demolition (C&D) waste. Under anaerobic conditions that dominate landfills, microbially mediated processes can convert sulfate to hydrogen sulfide (H2S). The presence of H2S in landfill gas is problematic for several reasons including its low odor threshold, human toxicity, and corrosive nature. The objective of this study was to develop and demonstrate a laboratory-scale reactor method to measure the H2S production potential of a range of sulfur-containing wastes. The H2S production potential was measured in 8-L reactors that were filled with a mixture of the target waste, newsprint as a source of organic carbon required for microbial sulfate reduction, and leachate from decomposed residential MSW as an inoculum. Reactors were operated with and without N2 sparging through the reactors, which was designed to reduce H2S accumulation and toxicity. Both H2S and CH4 yields were consistently higher in reactors that were sparged with N2, although the magnitude of the effect varied. The laboratory-measured first order decay rate constants for H2S and CH4 production were used to estimate constants that were applicable in landfills. The estimated constants ranged from 0.11 yr(-1) for C&D fines to 0.38 yr(-1) for a mixed fly ash and bottom ash from MSW combustion. Copyright © 2016 Elsevier Ltd. All rights reserved.
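The first-order decay model behind those rate constants can be written out directly. The two rate constants below are the field-scale estimates quoted in the abstract; the ultimate yield L0 is an illustrative placeholder, not a value from the study.

```python
import math

def h2s_produced(L0, k, t_years):
    """Cumulative yield from first-order kinetics: L0 * (1 - exp(-k * t)).

    L0 is the ultimate H2S production potential of the waste and k the
    field-scale first-order decay rate constant (yr^-1).
    """
    return L0 * (1.0 - math.exp(-k * t_years))

# Field-scale constants from the study's estimated range; L0 = 100
# (arbitrary units) is purely illustrative.
for k in (0.11, 0.38):   # C&D fines vs. mixed MSW-combustion ash (yr^-1)
    print(f"k = {k} yr^-1: {h2s_produced(100.0, k, 10.0):.1f} of 100 after 10 yr")
```

The comparison shows why the constant matters operationally: at k = 0.38 yr⁻¹ nearly all of the potential is released within a decade, while at 0.11 yr⁻¹ roughly a third remains.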
Quantify fluid saturation in fractures by light transmission technique and its application
NASA Astrophysics Data System (ADS)
Ye, S.; Zhang, Y.; Wu, J.
2016-12-01
The Dense Non-Aqueous Phase Liquids (DNAPLs) migration in transparent and rough fractures with variable aperture was studied experimentally using a light transmission technique. The migration of trichloroethylene (TCE) in variable-aperture fractures (20 cm wide x 32.5 cm high) showed that a TCE blob moved downward with snap-off events in four packs with apertures from 100 μm to 1000 μm, and that the pattern presented a single, tortuous cluster with many fingers in a pack with two apertures of 100 μm and 500 μm. The variable apertures in the fractures were measured by light transmission. A light intensity-saturation (LIS) model based on light transmission was used to quantify DNAPL saturation in the fracture system. Known volumes of TCE were added to the chamber, and these amounts were compared to the results obtained with the LIS model. A strong correlation existed between the results obtained with the LIS model and the known volumes of TCE. Sensitivity analysis showed that the aperture was more sensitive than parameter C2 of the LIS model. The LIS model was also used to measure dyed TCE saturation in an air sparging experiment. The results showed that the distribution and amount of TCE significantly influenced the efficiency of air sparging. The method developed here gives a way to quantify fluid saturation in a two-phase system in fractured media, and provides a non-destructive, non-intrusive tool to investigate changes in DNAPL architecture and flow characteristics in laboratory experiments. Keywords: light transmission, fluid saturation, fracture, variable aperture. Acknowledgements: Funding for this research was provided by NSFC Project No. 41472212.
Watershed Management Optimization Support Tool (WMOST) v3: User Guide
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that facilitates integrated water management at the local or small watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed context that is, accou...
Watershed Management Optimization Support Tool (WMOST) v3: Theoretical Documentation
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that facilitates integrated water management at the local or small watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed context, accounting fo...
Watershed Management Optimization Support Tool (WMOST) v2: Theoretical Documentation
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that evaluates the relative cost-effectiveness of management practices at the local or watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed c...
Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals
2016-01-01
This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
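The dynamic-programming benchmark used to operationalize optimality can be sketched for a toy two-goal task. The success probability, target, and horizon below are invented for illustration and are not the study's experimental parameters.

```python
from functools import lru_cache

# Toy two-goal pursuit model: each stage the agent prioritizes one goal,
# and the prioritized goal advances one step with probability p. A goal
# counts as achieved if it reaches `target` progress by the deadline.
p, target = 0.6, 3

@lru_cache(maxsize=None)
def value(a, b, t):
    """Expected number of goals achieved from progress (a, b) with t stages left."""
    if t == 0:
        return float(a >= target) + float(b >= target)
    # Expected value of prioritizing goal A vs. goal B this stage.
    v_a = p * value(min(a + 1, target), b, t - 1) + (1 - p) * value(a, b, t - 1)
    v_b = p * value(a, min(b + 1, target), t - 1) + (1 - p) * value(a, b, t - 1)
    return max(v_a, v_b)

# Benchmark expected value with 6 stages left, goal A one step from
# completion and goal B untouched.
print(round(value(2, 0, 6), 3))
```

In the study's paradigm, participants' stage-by-stage prioritization choices are compared against the choice maximizing this expected value; prospect-theoretic risk attitudes predict systematic departures from it.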
A nonlinear bi-level programming approach for product portfolio management.
Ma, Shuang
2016-01-01
Product portfolio management (PPM) is a critical decision-making problem for companies across various industries in today's competitive environment. Traditional studies of the PPM problem have been oriented toward engineering feasibility and marketing, paying relatively less attention to competitors' actions and the competitive relations, especially in the mathematical optimization domain. The key challenge lies in how to construct a mathematical optimization model that describes this Stackelberg game-based leader-follower PPM problem and the competitive relations between the players. The primary work of this paper is the representation of a decision framework and an optimization model for the PPM problem of leader and follower. A nonlinear, integer bi-level programming model is developed based on the decision framework. Furthermore, a bi-level nested genetic algorithm is put forward to solve this nonlinear bi-level programming model for the leader-follower PPM problem. A case study of notebook computer product portfolio optimization is reported. Results and analyses reveal that the leader-follower bi-level optimization model is robust and can empower product portfolio optimization.
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.
Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A
2013-02-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models.
Game Theory and Risk-Based Levee System Design
NASA Astrophysics Data System (ADS)
Hui, R.; Lund, J. R.; Madani, K.
2014-12-01
Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, landowners on each river bank would tend to independently optimize their levees with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the landowners on each river bank develop their design strategies using risk-based economic optimization. For each landowner, the annual expected total cost includes expected annual damage cost and annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system, which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision-making modes, the cost of decision-making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.
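A minimal sketch of the levee game described above, assuming a stylized exponential flood-risk model in which a taller opposite levee transfers risk to your bank. All parameters are invented for illustration; the study's hydraulic and economic models are not reproduced. Best-response iteration finds the non-cooperative Nash equilibrium, which can then be compared with the social planner's joint optimum.

```python
import numpy as np

# Stylized two-landowner levee game (illustrative parameters only).
heights = np.linspace(0.0, 5.0, 101)   # candidate levee heights (m)
c, damage = 1.0, 50.0                  # annualized cost per m; flood damage
a, b = 1.0, 0.3                        # own-levee protection; risk transferred
                                       # to you by the opposite levee

def annual_cost(h_own, h_other):
    """Annualized construction cost plus expected damage for one bank."""
    p_flood = 0.5 * np.exp(-a * h_own + b * h_other)
    return c * h_own + damage * p_flood

def best_response(h_other):
    return heights[np.argmin(annual_cost(heights, h_other))]

# Iterate best responses until the heights stop changing (Nash equilibrium).
h1 = h2 = 0.0
for _ in range(100):
    h1, h2 = best_response(h2), best_response(h1)
print(f"Nash heights: {h1:.2f} m, {h2:.2f} m")

# Social planner: minimize the system-wide total cost jointly.
H1, H2 = np.meshgrid(heights, heights)
total = annual_cost(H1, H2) + annual_cost(H2, H1)
i = np.unravel_index(np.argmin(total), total.shape)
print(f"Planner heights: {H1[i]:.2f} m, {H2[i]:.2f} m")
```

In this toy model the Nash levees come out taller than the planner's, because each owner over-builds in response to the risk the opposite levee transfers to them: a small-scale illustration of the Pareto inefficiency the abstract describes.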
Optimal strategies for electric energy contract decision making
NASA Astrophysics Data System (ADS)
Song, Haili
2000-10-01
The power industry restructuring in various countries in recent years has created an environment where trading of electric energy is conducted in a market environment. In such an environment, electric power companies compete for the market share through spot and bilateral markets. Being profit driven, electric power companies need to make decisions on spot market bidding, contract evaluation, and risk management. New methods and software tools are required to meet these upcoming needs. In this research, bidding strategy and contract pricing are studied from a market participant's viewpoint; new methods are developed to guide a market participant in spot and bilateral market operation. A supplier's spot market bidding decision is studied. Stochastic optimization is formulated to calculate a supplier's optimal bids in a single time period. This decision making problem is also formulated as a Markov Decision Process. All the competitors are represented by their bidding parameters with corresponding probabilities. A systematic method is developed to calculate transition probabilities and rewards. The optimal strategy is calculated to maximize the expected reward over a planning horizon. Besides the spot market, a power producer can also trade in the bilateral markets. Bidding strategies in a bilateral market are studied with game theory techniques. Necessary and sufficient conditions of Nash Equilibrium (NE) bidding strategy are derived based on the generators' cost and the loads' willingness to pay. The study shows that in any NE, market efficiency is achieved. Furthermore, all Nash equilibria are revenue equivalent for the generators. The pricing of "Flexible" contracts, which allow delivery flexibility over a period of time with a fixed total amount of electricity to be delivered, is analyzed based on the no-arbitrage pricing principle. The proposed algorithm calculates the price based on the optimality condition of the stochastic optimization formulation. 
Simulation examples illustrate the tradeoffs between prices and scheduling flexibility. Spot bidding and contract pricing are not independent decision processes. The interaction between spot bidding and contract evaluation is demonstrated with game theory equilibrium model and market simulation results. It leads to the conclusion that a market participant's contract decision making needs to be further investigated as an integrated optimization formulation.
Solar energy conversion through biophotolysis. Third annual report, 1 April 1978-31 March 1979
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benemann, J.R.; Murry, M.A.; Hallenbeck, P.C.
This report covers the progress during the third year of this project. The state-of-the-art of biophotolysis was reviewed and a bioengineering analysis carried out. The conclusions were that practical biophotolysis systems are feasible; however, they will require, in most cases, relatively long-term R and D. The biophotolysis system developed under this project, utilizing heterocystous blue-green algae, was demonstrated both indoors and outdoors with a model converter system using the heterocystous blue-green alga Anabaena cylindrica. Maximal light energy conversion efficiencies were 2.5% indoors and about 0.2% outdoors, averaged for periods of about two weeks. Achievement of such rates required optimization of N/sub 2/ supply and culture density. A small amount of N/sub 2/ in the argon gas phase used to sparge the cultures was beneficial to the stability of long-term hydrogen-production activity. A relatively small amount of the hydrogen produced by these cultures can be ascribed to the activity of the reversible hydrogenase, which was studied by nitrogenase inactivation through poisoning with tungstate. The regulation of nitrogenase activity in these algae was studied through physiological and immunochemical methods. In particular, the oxygen protection mechanism was examined. Thermophilic blue-green algae have potential for biophotolysis; their hydrogen production was studied in the laboratory. Preliminary experiments on the photofermentation of organic substrates to hydrogen were conducted with photosynthetic bacteria.
A Swarm Optimization approach for clinical knowledge mining.
Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A
2015-10-01
Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal ruleset that satisfies the requirements of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA, and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization (PSO). Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule-base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy.
Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Acquisition of decision making criteria: reward rate ultimately beats accuracy.
Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D
2011-02-01
Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
Confronting dynamics and uncertainty in optimal decision making for conservation
Williams, Byron K.; Johnson, Fred A.
2013-01-01
The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. 
We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a critically endangered population through captive breeding, control of invasive species, construction of biodiversity reserves, design of landscapes to increase habitat connectivity, and resource exploitation. Although these decision making problems and their solutions present significant challenges, we suggest that a systematic and effective approach to dynamic decision making in conservation need not be an onerous undertaking. The requirements are shared with any systematic approach to decision making--a careful consideration of values, actions, and outcomes.
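The "choose among decision options at periodic intervals to maximize an objective over the planning horizon" problem is, in its simplest probabilistic form, a Markov decision process solvable by value iteration. The habitat states, transition probabilities, returns, and action costs below are invented for illustration and are not from the paper:

```python
# States: habitat condition; actions: 0 = no action, 1 = restore (costly).
STATES = ["poor", "fair", "good"]
ACTIONS = [0, 1]
# P[a][s] = transition distribution over next states (illustrative numbers)
P = {
    0: {"poor": [0.9, 0.1, 0.0], "fair": [0.3, 0.6, 0.1], "good": [0.1, 0.3, 0.6]},
    1: {"poor": [0.4, 0.5, 0.1], "fair": [0.1, 0.5, 0.4], "good": [0.0, 0.2, 0.8]},
}
R_STATE = {"poor": 0.0, "fair": 1.0, "good": 2.0}   # per-period conservation return
COST = {0: 0.0, 1: 0.5}                             # per-period action cost

def value_iteration(gamma=0.95, tol=1e-8):
    """Maximize expected discounted return with state-specific actions."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new, policy = {}, {}
        for s in STATES:
            q = {}
            for a in ACTIONS:
                exp_next = sum(p * V[s2] for p, s2 in zip(P[a][s], STATES))
                q[a] = R_STATE[s] - COST[a] + gamma * exp_next
            policy[s] = max(q, key=q.get)
            V_new[s] = q[policy[s]]
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            return V_new, policy
        V = V_new
```

With these made-up numbers the optimal state-dependent policy restores degraded habitat, which is the "maximizing expected objective return while taking state-specific actions" mechanism the abstract describes.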
Decision theory, reinforcement learning, and the brain.
Dayan, Peter; Daw, Nathaniel D
2008-12-01
Decision making is a core competence for animals and humans acting and surviving in environments they only partially comprehend, gaining rewards and punishments for their troubles. Decision-theoretic concepts permeate experiments and computational models in ethology, psychology, and neuroscience. Here, we review a well-known, coherent Bayesian approach to decision making, showing how it unifies issues in Markovian decision problems, signal detection psychophysics, sequential sampling, and optimal exploration, and we discuss paradigmatic psychological and neural examples of each problem. We discuss computational issues concerning what subjects know about their task and how ambitious they are in seeking optimal solutions; we address algorithmic topics concerning model-based and model-free methods for making choices; and we highlight key aspects of the neural implementation of decision making.
Design of an Aircrew Scheduling Decision Aid for the 6916th Electronic Security Squadron.
1987-06-01
Kopf, Thomas J.
Because of the great number of possible scheduling alternatives, it is difficult to find an optimal solution to the scheduling problem. Additionally, changes to the original schedule make it even more difficult to find an optimal solution. The emergence of capable microcomputers, decision support ...
History matching through dynamic decision-making
Maschio, Célio; Santos, Antonio Alberto; Schiozer, Denis; Rocha, Anderson
2017-01-01
History matching is the process of modifying the uncertain attributes of a reservoir model to reproduce the real reservoir performance. It is a classical reservoir engineering problem and plays an important role in reservoir management, since the resulting models are used to support decisions in other tasks such as economic analysis and production strategy. This work introduces a dynamic decision-making optimization framework for history matching problems in which new models are generated based on, and guided by, the dynamic analysis of the data of available solutions. The optimization framework follows a ‘learning-from-data’ approach, and includes two optimizer components that use machine learning techniques, such as unsupervised learning and statistical analysis, to uncover patterns of input attributes that lead to good output responses. These patterns are used to support the decision-making process while generating new, and better, history-matched solutions. The proposed framework is applied to a benchmark model (UNISIM-I-H) based on the Namorado field in Brazil. Results show the potential of the dynamic decision-making optimization framework for improving the quality of history matching solutions using a substantially smaller number of simulations when compared with a previous work on the same benchmark. PMID:28582413
Watershed Management Optimization Support Tool (WMOST) v2: User Manual and Case Studies
The Watershed Management Optimization Support Tool (WMOST) is a decision support tool that evaluates the relative cost-effectiveness of management practices at the local or watershed scale. WMOST models the environmental effects and costs of management decisions in a watershed c...
Decision Support for Resilient Communities: EPA’s Watershed Management Optimization Support Tool
The U.S. EPA Atlantic Ecology Division is releasing version 3 of the Watershed Management Optimization Support Tool (WMOST v3) in February 2018. WMOST is a decision-support tool that facilitates integrated water resources management (IWRM) by communities and watershed organizati...
Removing oxygen from a solvent extractant in an uranium recovery process
Hurst, Fred J.; Brown, Gilbert M.; Posey, Franz A.
1984-01-01
An improvement in effecting uranium recovery from phosphoric acid solutions is provided by sparging dissolved oxygen from the solutions and solvents used in a reductive stripping stage with an effective volume of a nonoxidizing gas before the introduction of the solutions and solvents into the stage. Effective volumes of nonoxidizing gases, selected from the group consisting of argon, carbon dioxide, carbon monoxide, helium, hydrogen, nitrogen, sulfur dioxide, and mixtures thereof, displace oxygen from the solutions and solvents, thereby reducing deleterious effects of oxygen such as excessive consumption of elemental or ferrous iron and the accumulation of complex iron phosphates or cruds.
Decision-Aiding and Optimization for Vertical Navigation of Long-Haul Aircraft
NASA Technical Reports Server (NTRS)
Patrick, Nicholas J. M.; Sheridan, Thomas B.
1996-01-01
Most decisions made in the cockpit are related to safety, and have therefore been proceduralized in order to reduce risk. There are very few that are made on the basis of a value metric such as economic cost. One which can be shown to be value based, however, is the selection of a flight profile. Fuel consumption and flight time both have a substantial effect on aircraft operating cost, but they cannot be minimized simultaneously. In addition, winds, turbulence, and performance vary widely with altitude and time. These factors make it important and difficult for pilots to (a) evaluate the outcomes associated with a particular trajectory before it is flown and (b) decide among possible trajectories. The two elements of this problem considered here are: (1) determining what constitutes optimality, and (2) finding optimal trajectories. Pilots and dispatchers from major U.S. airlines were surveyed to determine which attributes of the outcome of a flight they considered the most important. Avoiding turbulence, for passenger comfort, topped the list of items that were not safety related. Pilots' decision making about the selection of flight profile on the basis of flight time, fuel burn, and exposure to turbulence was then observed. Of the several behavioral and prescriptive decision models invoked to explain the pilots' choices, utility maximization is shown to best reproduce the pilots' decisions. After considering more traditional methods for optimizing trajectories, a novel method is developed using a genetic algorithm (GA) operating on a discrete representation of the trajectory search space. The representation is a sequence of command altitudes, and was chosen to be compatible with the constraints imposed by Air Traffic Control and with the training given to pilots. Since trajectory evaluation for the GA is performed holistically, a wide class of objective functions can be optimized easily.
Also, using the GA it is possible to compare the costs associated with different airspace designs and air traffic management policies. A decision aid is proposed which would combine the pilot's notion of optimality with the GA-based optimization, provide the pilot with a number of alternative Pareto-optimal trajectories, and allow him to consider unmodelled attributes and constraints in choosing among them. A solution to the problem of displaying alternatives in a multi-attribute decision space is also presented.
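A minimal sketch of a GA over command-altitude sequences might look like the following. The flight levels, fuel-burn table, and altitude-change penalty are invented stand-ins for the dissertation's holistic trajectory evaluation, and the operators are the plain textbook ones (elitist truncation selection, one-point crossover, point mutation):

```python
import random

# Hypothetical cost model: per-segment fuel burn depends on the flight level,
# plus a penalty for every altitude change; only these ATC levels are allowed.
LEVELS = [290, 310, 330, 350, 370]
SEGMENTS = 8
FUEL = {290: 1.30, 310: 1.15, 330: 1.05, 350: 1.00, 370: 1.02}
CHANGE_PENALTY = 0.2

def cost(traj):
    """Holistic evaluation of a whole trajectory (sequence of command altitudes)."""
    burn = sum(FUEL[a] for a in traj)
    changes = sum(a != b for a, b in zip(traj, traj[1:]))
    return burn + CHANGE_PENALTY * changes

def ga(pop_size=30, gens=100, pmut=0.3, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(LEVELS) for _ in range(SEGMENTS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                        # elitist truncation selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, SEGMENTS)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < pmut:               # point mutation
                child[rng.randrange(SEGMENTS)] = rng.choice(LEVELS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)
```

Because the fitness function sees the whole sequence at once, swapping in a different objective (e.g. one that also penalizes forecast turbulence per segment) requires changing only `cost`, which is the flexibility the abstract highlights.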
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hong; Wang, Shaobu; Fan, Rui
This report summarizes the work performed under the LDRD project on the preliminary study of knowledge automation, with specific focus on the impact of uncertainties in human decision making on the optimization of process operation. First, the statistics of signals from the Brain-Computer Interface (BCI) are analyzed so as to characterize the uncertainties of human operators during the decision making phase using electroencephalogram (EEG) signals. This is followed by discussion of an architecture that reveals the equivalence between optimization and closed-loop feedback control design, where it has been shown that all optimization problems can be transformed into control design problems for closed-loop systems. This has led to a “closed loop” framework, where the structure of the decision making is shown to be subject to both process disturbances and controller uncertainties. The latter can well represent the uncertainties or randomness occurring during the human decision making phase. As a result, a stochastic optimization problem has been formulated and a novel solution has been proposed using probability density function (PDF) shaping for both the cost function and the constraints, drawing on the stochastic distribution control concept. A sufficient condition has been derived that guarantees the convergence of the optimal solution, and discussions have been made for both the total probabilistic solution and chance-constrained optimization, which have been well studied in the optimal power flow (OPF) area. A simple case study has been carried out for the economic dispatch of power for a grid system with distributed energy resources (DERs), and encouraging results have been obtained showing that significant savings in generation cost can be expected.
Modeling uncertainty in producing natural gas from tight sands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chermak, J.M.; Dahl, C.A.; Patrick, R.H
1995-12-31
Since accurate geologic, petroleum engineering, and economic information are essential ingredients in making profitable production decisions for natural gas, we combine these ingredients in a dynamic framework to model natural gas reservoir production decisions. We begin with the certainty case before proceeding to consider how uncertainty might be incorporated in the decision process. Our production model uses dynamic optimal control to combine economic information with geological constraints to develop optimal production decisions. To incorporate uncertainty into the model, we develop probability distributions on geologic properties for the population of tight gas sand wells and perform a Monte Carlo study to select a sample of wells. Geological production factors, completion factors, and financial information are combined into the hybrid economic-petroleum reservoir engineering model to determine the optimal production profile, initial gas stock, and net present value (NPV) for an individual well. To model the probability of the production abandonment decision, the NPV data are converted to a binary dependent variable. A logit model is used to model this decision as a function of the above geological and economic data to give probability relationships. Additional ways to incorporate uncertainty into the decision process include confidence intervals and utility theory.
The anatomy of choice: dopamine and decision-making
Friston, Karl; Schwartenbeck, Philipp; FitzGerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J.
2014-01-01
This paper considers goal-directed decision-making in terms of embodied or active inference. We associate bounded rationality with approximate Bayesian inference that optimizes a free energy bound on model evidence. Several constructs such as expected utility, exploration or novelty bonuses, softmax choice rules and optimism bias emerge as natural consequences of free energy minimization. Previous accounts of active inference have focused on predictive coding. In this paper, we consider variational Bayes as a scheme that the brain might use for approximate Bayesian inference. This scheme provides formal constraints on the computational anatomy of inference and action, which appear to be remarkably consistent with neuroanatomy. Active inference contextualizes optimal decision theory within embodied inference, where goals become prior beliefs. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (associated with softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution. Crucially, this sensitivity corresponds to the precision of beliefs about behaviour. The changes in precision during variational updates are remarkably reminiscent of empirical dopaminergic responses—and they may provide a new perspective on the role of dopamine in assimilating reward prediction errors to optimize decision-making. PMID:25267823
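The softmax choice rule and its inverse temperature, which the abstract identifies with the precision of beliefs about behaviour, can be illustrated generically. This is the standard softmax, not the paper's variational scheme; the utilities and temperatures below are arbitrary:

```python
import math

def softmax_choice_probs(utilities, beta):
    """Standard softmax choice rule; beta (inverse temperature) plays the
    role of the precision the abstract links to confidence about action."""
    m = max(beta * u for u in utilities)            # subtract max for stability
    exps = [math.exp(beta * u - m) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

# Low precision -> near-random choice; high precision -> near-deterministic.
p_low = softmax_choice_probs([1.0, 2.0, 3.0], beta=0.1)
p_high = softmax_choice_probs([1.0, 2.0, 3.0], beta=10.0)
```

Raising `beta` sharpens the distribution around the highest-utility option, which is the behavioural signature of increased precision that the paper relates to dopaminergic signalling.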
NASA Astrophysics Data System (ADS)
Madani, Kaveh
2016-04-01
Water management benefits from a suite of modelling tools and techniques that help simplify and understand the complexities involved in managing water resource systems. Early water management models were mainly concerned with optimizing a single objective related to the design, operations or management of water resource systems (e.g. economic cost, hydroelectricity production, reliability of water deliveries). Significant improvements in methodologies, computational capacity, and data availability over the last decades have resulted in more complex water management models that can now incorporate multiple objectives, various uncertainties, and big data. These models provide an improved understanding of complex water resource systems and provide opportunities for making positive impacts. Nevertheless, there remains an alarming mismatch between the optimal solutions developed by these models and the decisions made by managers and stakeholders of water resource systems. Modelers continue to consider decision makers as irrational agents who fail to implement the optimal solutions developed by sophisticated and mathematically rigorous water management models. On the other hand, decision makers and stakeholders accuse modelers of being idealists, lacking a perfect understanding of reality, and developing 'smart' solutions that are not practical (stable). In this talk I will take a closer look at the mismatch between the optimality and stability of solutions and argue that conventional water resources management models suffer inherently from a full-cooperation assumption. Under this assumption, water resources management decisions are based on group rationality, whereas in practice decisions are often based on individual rationality, making the group's optimal solution unstable for individually rational decision makers.
I discuss how game theory can be used as an appropriate framework for addressing the irrational "rationality assumption" of water resources management models and for better capturing the social aspects of decision making in water management systems with multiple stakeholders.
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach
Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.
2014-01-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
Using ILOG OPL-CPLEX and ILOG Optimization Decision Manager (ODM) to Develop Better Models
NASA Astrophysics Data System (ADS)
2008-10-01
This session will provide an in-depth overview on building state-of-the-art decision support applications and models. You will learn how to harness the full power of the ILOG OPL-CPLEX-ODM Development System (ODMS) to develop optimization models and decision support applications that solve complex problems ranging from near real-time scheduling to long-term strategic planning. We will demonstrate how to use ILOG's Optimization Programming Language (OPL) to quickly model problems solved by ILOG CPLEX, and how to use ILOG ODM to gain further insight into the model. By the end of the session, attendees will understand how to take advantage of the powerful combination of ILOG OPL (to describe an optimization model) and ILOG ODM (to understand the relationships between data, decision variables and constraints).
Antagonistic and Bargaining Games in Optimal Marketing Decisions
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
Game theory approaches to find optimal marketing decisions are considered. Antagonistic games with and without complete information, and non-antagonistic game techniques, are applied to paired comparison, ranking, or rating data for a firm and its competitors in the market. Mixed strategies, equilibria in bi-matrix games, bargaining models with…
Optimal Decision Making in Neural Inhibition Models
ERIC Educational Resources Information Center
van Ravenzwaaij, Don; van der Maas, Han L. J.; Wagenmakers, Eric-Jan
2012-01-01
In their influential "Psychological Review" article, Bogacz, Brown, Moehlis, Holmes, and Cohen (2006) discussed optimal decision making as accomplished by the drift diffusion model (DDM). The authors showed that neural inhibition models, such as the leaky competing accumulator model (LCA) and the feedforward inhibition model (FFI), can mimic the…
State dependent optimization of measurement policy
NASA Astrophysics Data System (ADS)
Konkarikoski, K.
2010-07-01
Measurements are the key to rational decision making. Measurement information generates value when it is applied in decision making. Investment and maintenance costs are associated with each component of the measurement system. Clearly, there is, under a given set of scenarios, a measurement setup that is optimal in expected (discounted) utility. This paper addresses how measurement policy optimization is affected by different system states and how this problem can be tackled.
Adaptive sampling of information in perceptual decision-making.
Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H
2013-01-01
In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.
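The "time in proportion to noise levels" result can be checked numerically in a simplified setting: if sampling source i for time t_i yields an estimate with variance sigma_i^2 / t_i, minimizing the summed variance under a fixed time budget gives t_i proportional to sigma_i. The two-source grid search below illustrates that calculation and is not the paper's derivation; all numbers are arbitrary:

```python
# If sampling source i for time t_i yields an estimate with variance
# sigma_i**2 / t_i, the summed variance under a fixed budget T is minimised
# when t_i is proportional to sigma_i (a standard Lagrange-multiplier result).
def total_variance(t1, T, s1, s2):
    return s1 ** 2 / t1 + s2 ** 2 / (T - t1)

def best_split(T, s1, s2, steps=100000):
    """Brute-force grid search for the variance-minimising time split."""
    return min((T * i / steps for i in range(1, steps)),
               key=lambda t1: total_variance(t1, T, s1, s2))

T, s1, s2 = 2.0, 1.0, 3.0
t1 = best_split(T, s1, s2)          # analytic optimum: T * s1 / (s1 + s2)
```

With sigma ratios of 1:3 the noisier source receives three times the sampling time, matching the proportional-allocation rule the abstract states.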
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui
Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation asset planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.
Optimization Research of Generation Investment Based on Linear Programming Model
NASA Astrophysics Data System (ADS)
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operations research and a mathematical method that helps people carry out scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, optimized generation investment decision-making is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational basis for optimized investment decisions.
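A toy instance of the kind of LP described here can be written down directly. Since a two-variable LP attains its optimum at a vertex of the feasible region, it can be solved by enumerating constraint intersections in pure Python; in practice GAMS/CPLEX or a library solver would handle larger instances, and all plant data below are invented:

```python
from itertools import combinations

# Hypothetical two-plant capacity LP: minimise annualised cost subject to
# peak-capacity and energy constraints (illustrative numbers only).
#   min  800*x1 + 1200*x2
#   s.t. 1.00*x1 + 0.30*x2 >= 500   (peak demand, wind credited at 30%)
#        0.85*x1 + 0.35*x2 >= 400   (energy requirement, average MW)
#        x1 <= 300                  (e.g. an emissions-driven cap)
#        x1, x2 >= 0
COST = (800.0, 1200.0)
# Each constraint as (a1, a2, b) meaning a1*x1 + a2*x2 >= b
CONS = [(1.0, 0.3, 500.0), (0.85, 0.35, 400.0), (-1.0, 0.0, -300.0),
        (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

def intersect(c1, c2):
    """Intersection point of two constraint boundary lines, or None if parallel."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def feasible(pt, tol=1e-9):
    return all(a * pt[0] + b * pt[1] >= rhs - tol for a, b, rhs in CONS)

# An optimal basic solution of a 2-D LP lies at a vertex: intersect every
# pair of constraint boundaries, keep the feasible points, take the cheapest.
verts = [p for c1, c2 in combinations(CONS, 2)
         if (p := intersect(c1, c2)) and feasible(p)]
x1, x2 = min(verts, key=lambda p: COST[0] * p[0] + COST[1] * p[1])
```

For these made-up numbers the cheapest feasible vertex builds the capped 300 MW of the first plant and fills the remaining peak requirement with the second, the same capacity-versus-cost structure a GAMS LP model would resolve at scale.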
Age Effects and Heuristics in Decision Making*
Besedeš, Tibor; Deck, Cary; Sarangi, Sudipta; Shor, Mikhael
2011-01-01
Using controlled experiments, we examine how individuals make choices when faced with multiple options. Choice tasks are designed to mimic the selection of health insurance, prescription drug, or retirement savings plans. In our experiment, available options can be objectively ranked allowing us to examine optimal decision making. First, the probability of a person selecting the optimal option declines as the number of options increases, with the decline being more pronounced for older subjects. Second, heuristics differ by age with older subjects relying more on suboptimal decision rules. In a heuristics validation experiment, older subjects make worse decisions than younger subjects. PMID:22544977
Design and implementation of intelligent electronic warfare decision making algorithm
NASA Astrophysics Data System (ADS)
Peng, Hsin-Hsien; Chen, Chang-Kuo; Hsueh, Chi-Shun
2017-05-01
The density of electromagnetic signals and the requirement for timely response have grown rapidly in modern electronic warfare. Although jammers are limited resources, it is possible to achieve the best electronic warfare efficiency through tactical decisions. This paper proposes an intelligent electronic warfare decision support system. In this work, we develop a novel hybrid algorithm, Digital Pheromone Particle Swarm Optimization, based on Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and the Shuffled Frog Leaping Algorithm (SFLA). We use PSO to solve the problem and incorporate the pheromone concept from ACO to accumulate more useful information in the spatial solving process and speed up finding the optimal solution. The proposed algorithm finds the optimal solution in reasonable computation time by using the matrix conversion method of SFLA. The results indicated that jammer allocation was more effective. The system based on the hybrid algorithm provides electronic warfare commanders with critical information to assist them in effectively managing the complex electromagnetic battlefield.
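As a hedged sketch of the PSO building block only (without the digital-pheromone or SFLA extensions the paper adds), a continuous power-share formulation of jammer allocation can be optimized as follows; the effectiveness matrix and threat weights are invented:

```python
import random

# Toy jammer-to-emitter power allocation solved with plain PSO.
EFF = [[0.9, 0.4, 0.1],     # EFF[j][e]: jamming effectiveness (made up)
       [0.2, 0.8, 0.5]]
THREAT = [1.0, 2.0, 1.5]    # priority weight of each emitter (made up)

def coverage(x):
    """x is a flat power-share vector; each jammer's shares are normalised."""
    J, E = len(EFF), len(THREAT)
    total = 0.0
    for e in range(E):
        jammed = 0.0
        for j in range(J):
            row = [max(x[j * E + k], 1e-9) for k in range(E)]
            jammed += EFF[j][e] * row[e] / sum(row)
        total += THREAT[e] * min(1.0, jammed)   # cap: an emitter is fully jammed
    return total

def pso(dim=6, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in X]
    pval = [coverage(x) for x in X]
    g = max(range(n), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(1.0, max(0.0, X[i][d] + V[i][d]))
            val = coverage(X[i])
            if val > pval[i]:
                pbest[i], pval[i] = list(X[i]), val
                if val > gval:
                    gbest, gval = list(X[i]), val
    return gbest, gval
```

For this toy matrix the best allocation points each jammer at its most effective high-threat emitter (weighted coverage 2.5); plain PSO converges close to that, which is the baseline the paper's hybrid is designed to beat.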
Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J
2017-06-01
In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality outputs provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, providing the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries, together with the associated system risk, by incorporating the concepts of possibility and necessity measures, which are suited to optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results not only facilitate identification of optimal effluent-trading schemes, but also give insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects the decision alternatives on trading scheme as well as the system benefit.
Compared with conventional optimization methods, BESMA proved advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach, and multi-period context; (ii) reflecting the uncertainties in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting both pessimistic and optimistic decision making for effluent trading while promoting diversity of decision alternatives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, R.J.; Bianco, P.; Kirshner, M.
1996-12-31
Jet fuel-contaminated soil and groundwater at the International Arrivals Building (IAB) of JFK International Airport in Jamaica, New York, are being remediated using soil vapor extraction (SVE) and air sparging (AS). The areal extent of the contaminated soil is estimated at 70 acres, and the volume of contaminated groundwater is estimated at 2.3 million gallons. The remediation uses approximately 13,000 feet of horizontal SVE (HSVE) wells and 7,000 feet of horizontal AS (HAS) wells. The design of the HSVE and HAS wells was based on a pilot study followed by a full-scale test. In addition to the horizontal wells, 28 vertical AS wells and 15 vertical SVE wells are used. Three areas are being remediated; thus, three separate treatment systems have been installed. The SVE and AS wells are operated continuously, while groundwater will be intermittently extracted at each HAS well, treated by liquid-phase activated carbon, and discharged into the stormwater collection sewer. Vapors extracted by the SVE wells are treated by vapor-phase activated carbon and discharged to ambient air. The remediation is anticipated to take between two and three years to bring soil and groundwater to the New York State cleanup criteria for the site. Based on the monitoring data for the first two months of operation, approximately 14,600 lbs of vapor-phase VOCs have been extracted. Analyses show that the majority of the VOCs are branched alkanes, branched alkenes, cyclohexane, and methylated cyclohexanes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamberg, L.D.
1998-02-23
This document serves as a notice of construction (NOC), pursuant to the requirements of Washington Administrative Code (WAC) 246-247-060, and as a request for approval to construct, pursuant to 40 Code of Federal Regulations (CFR) 61.07, for the Integrated Water Treatment System (IWTS) Filter Vessel Sparging Vent at the 105-KW Basin. Additionally, the following description and references are provided as the notices of startup, pursuant to 40 CFR 61.09(a)(1) and (2), in accordance with Title 40 CFR Part 61, National Emission Standards for Hazardous Air Pollutants. The 105-K West Reactor and its associated spent nuclear fuel (SNF) storage basin were constructed in the early 1950s and are located on the Hanford Site in the 100-K Area, about 1,400 feet from the Columbia River. The 105-KW Basin contains 964 metric tons of SNF stored under water in approximately 3,800 closed canisters. This SNF has been stored for periods ranging from 8 to 17 years. The 105-KW Basin is constructed of concrete with an epoxy coating and contains approximately 1.3 million gallons of water, with an asphaltic membrane beneath the pool. The IWTS, described in the Radioactive Air Emissions NOC for Fuel Removal for 105-KW Basin (DOE/RL-97-28 and page changes per US Department of Energy, Richland Operations Office letter 97-EAP-814), will be used to remove radionuclides from the basin water during fuel removal operations. The purpose of the modification described herein is to provide operational flexibility for the IWTS at the 105-KW Basin. The proposed modification is scheduled to begin in calendar year 1998.
Hydroxide stabilization as a new tool for ballast disinfection: Efficacy of treatment on zooplankton
Moffitt, Christine M.; Watten, Barnaby J.; Barenburg, Amber; Henquinet, Jeffrey
2015-01-01
Effective and economical tools are needed for treating ship ballast to meet new regulatory requirements designed to reduce the introduction of invasive aquatic species from ship traffic. We tested the efficacy of hydroxide stabilization as a ballast disinfection tool in replicated, sequential field trials on board the M/V Ranger III in waters of Lake Superior. Ballast water was introduced into each of four identical 1,320 L stainless steel tanks during a simulated ballasting operation. Two tanks were treated with NaOH to elevate the pH to 11.7, and the remaining two tanks were held as controls without pH alteration. After retention on board for 14–18 h, CO2-rich gas recovered from one of two diesel propulsion engines was sparged into the NaOH-treated tanks for 2 h to force conversion of the NaOH ultimately to sodium bicarbonate, thereby lowering the pH to about 7.1. Prior to gas sparging, the engine exhaust was treated by a unique catalytic converter/wet scrubber process train to remove unwanted combustion byproducts and to provide cooling. The contents of each tank were then drained and filtered through 35-µm mesh plankton nets to collect all zooplankton. The composition and relative survival of zooplankton in each tank were evaluated by microscopy. Zooplankton populations were dominated by rotifers, but copepods and cladocerans were also observed. Hydroxide stabilization was 100% effective in killing all zooplankton present at the start of the tests. Our results suggest hydroxide stabilization has potential to be an effective and practical tool to disinfect ship ballast. Further, using CO2 recovered from the ship's engine reduces emissions, and the neutralized by-product, sodium bicarbonate, can have beneficial effects on the aquatic environment.
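A back-of-envelope calculation shows the scale of NaOH dose involved in reaching pH 11.7 in a tank of the size used in the trials. This is a sketch under strong simplifying assumptions (25 °C, pure water, no alkalinity or buffering; real ballast water would require a larger dose) and is not a figure reported by the study.

```python
# NaOH dose to raise 1,320 L of (idealized) water to pH 11.7.
# Assumes 25 degC, Kw = 1e-14, and no background alkalinity or buffering.
MW_NAOH = 40.0          # g/mol
TANK_L = 1320           # tank volume used in the trials (L)
TARGET_PH = 11.7

oh_molar = 10 ** (-(14.0 - TARGET_PH))   # [OH-] in mol/L from pOH = 14 - pH
grams_naoh = oh_molar * MW_NAOH * TANK_L
print(f"[OH-] = {oh_molar:.2e} M -> ~{grams_naoh:.0f} g NaOH per tank")

# Neutralization by CO2 sparging: NaOH + CO2 -> NaHCO3, driving pH back toward 7
```

The final comment shows why sparging engine-derived CO2 works as the neutralization step: the hydroxide is consumed stoichiometrically into bicarbonate rather than requiring an added acid.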
Liang, Chenju; Lee, I-Ling
2008-09-10
In situ chemical oxidation (ISCO) is considered a reliable technology for treating groundwater contaminated with high concentrations of organic contaminants. The ISCO oxidant persulfate (S₂O₈²⁻) can be activated by ferrous ion (Fe²⁺) to generate sulfate radicals (E° = 2.6 V), which are capable of destroying trichloroethylene (TCE). Their polarity prevents S₂O₈²⁻ and the sulfate radical (SO₄·⁻) from effectively oxidizing separate-phase TCE, a dense non-aqueous phase liquid (DNAPL); thus, oxidation primarily takes place in the aqueous phase where TCE is dissolved. A bench column study was conducted to demonstrate a conceptual remediation method: flushing either S₂O₈²⁻ or Fe²⁺ through a soil column in which TCE DNAPL was present, then passing the dissolved mixture through a sparging curtain of the complementary fluid (Fe²⁺ or S₂O₈²⁻). The effect of a solubility-enhancing chemical, hydroxypropyl-β-cyclodextrin (HPCD), was also tested to evaluate its ability to increase the aqueous TCE concentration. Both flushing arrangements yielded similar TCE degradation efficiencies of 35% to 42%, estimated as the ratio TCE degraded/(TCE degraded + TCE remaining in effluent), and byproduct chloride generation rates of 4.9 to 7.6 mg Cl⁻ per soil column pore volume. The addition of HPCD greatly increased the aqueous TCE concentration; however, the TCE degradation efficiency decreased, because the mass degraded was a smaller fraction of the much larger amount of TCE dissolved by HPCD. This conceptual treatment may serve as a reference for potential on-site application.
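The efficiency metric and the chloride byproduct measurement can be tied together with simple stoichiometry. The sketch below assumes complete dechlorination (3 Cl⁻ released per TCE molecule, so it is an upper bound on mineralized TCE); the example numbers simply reuse figures quoted above for illustration.

```python
# Stoichiometric link between chloride production and TCE degraded,
# plus the efficiency metric used in the study.
MW_TCE = 131.4     # g/mol, C2HCl3
MW_CL = 35.45      # g/mol
CL_PER_TCE = 3     # chloride ions released per fully dechlorinated TCE

def tce_from_chloride(cl_mg):
    """Upper-bound mass of TCE degraded (mg), assuming complete dechlorination."""
    return cl_mg * MW_TCE / (CL_PER_TCE * MW_CL)

def degradation_efficiency(degraded_mg, effluent_mg):
    """Efficiency as defined in the study: degraded / (degraded + remaining)."""
    return degraded_mg / (degraded_mg + effluent_mg)

# e.g. 7.6 mg Cl- per pore volume corresponds to at most ~9.4 mg TCE degraded
print(tce_from_chloride(7.6))
print(degradation_efficiency(35.0, 65.0))   # 35% efficiency example
```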
Laboratory studies to characterize the efficacy of sand capping a coal tar-contaminated sediment.
Hyun, Seunghun; Jafvert, Chad T; Lee, Linda S; Rao, P Suresh C
2006-06-01
Placement of a microbially active sand cap on a coal tar-contaminated river sediment has been suggested as a cost-effective remediation strategy. This approach assumes that the flux of contaminants from the sediment is sufficiently balanced by oxygen and nutrient fluxes into the sand layer, such that microbial activity will reduce contaminant concentrations within the new benthic zone and reduce the contaminant flux to the water column. The dynamics of such a system were evaluated using batch and column studies with microbial communities from tar-contaminated sediment under different aeration and nutrient inputs. In a 30-d batch degradation study on aqueous extracts of coal tar sediment, oxygen and nutrient concentrations were found to be key parameters controlling the degradation rates of polycyclic aromatic hydrocarbons (PAHs). For the five PAHs monitored (naphthalene, fluorene, phenanthrene, anthracene, and pyrene), degradation rates were inversely proportional to molecular size. For the column studies, in which three columns were packed with a 20-cm sand layer on top of a 5-cm sediment layer, flow was established to the sand layers with (1) aerated water, (2) N₂-sparged water, or (3) HgCl₂-sterilized N₂-sparged water. After steady-state conditions were reached, PAH concentrations in the effluents were lowest in the aerated column, except for pyrene, whose concentration was invariant across all effluents. These laboratory-scale studies support the conclusion that if sufficient aeration can be achieved in the field, through either active or passive means, the resulting microbially active sand layer can improve the water quality of the benthic zone and reduce the flux of many, but not all, PAHs to the water column.
Monitoring TCE Degradation by In-situ Bioremediation in TCE-Contaminated site
NASA Astrophysics Data System (ADS)
Han, K.; Hong, U.; Ahn, G.; Jiang, H.; Yoo, H.; Park, S.; Kim, N.; Ahn, H.; Kwon, S.; Kim, Y.
2012-12-01
Trichloroethylene (TCE) is a common long-term groundwater pollutant because this dense compound is released slowly into groundwater. Physical and chemical remediation processes have been used to clean up the contaminant, but novel remediation technologies are required to overcome the low efficiency of these traditional processes. Many researchers have focused on biological processes using anaerobic TCE-degrading cultures, but it remains to be evaluated whether such processes can be applied at field scale under aerobic conditions. Therefore, in this work we conducted two tests of biological remediation (biostimulation and bioaugmentation) through a well-to-well test (injection well to extraction well) at a TCE-contaminated site. Solutions (electron donor and acceptor, tracer) were injected into the aquifer as liquids coupled with nitrogen gas sparging. The biostimulation test monitored biological remediation in three phases. Phase 1: we injected a formate solution to supply the electron donor hydrogen (hydrogen can be generated from the fermentation of formate), along with bromide as a tracer. Phase 2: we injected a solution of formate, bromide, and sulfate; sulfate was added because, as an electron acceptor, its reduction helps create anaerobic conditions. Phase 3: we injected a mixed solution of formate, sulfate, fumarate, and bromide. Fumarate degradation follows the same mechanism and requires the same conditions as TCE degradation, so fumarate was added to confirm whether anaerobic TCE degradation by indigenous microorganisms had started up (TCE concentrations were low because of the gas sparging). In the bioaugmentation test, we injected the Evanite culture (containing Dehalococcoides spp.) and monitored TCE degradation to cis-DCE, VC, and ethene. We are evaluating the transport of the Evanite culture in the field by measuring TCE and VC reductases.
Frequencies of decision making and monitoring in adaptive resource management
Johnson, Fred A.
2017-01-01
Adaptive management involves learning-oriented decision making in the presence of uncertainty about the responses of a resource system to management. It is implemented through an iterative sequence of decision making, monitoring and assessment of system responses, and incorporating what is learned into future decision making. Decision making at each point is informed by a value or objective function, for example total harvest anticipated over some time frame. The value function expresses the value associated with decisions, and it is influenced by system status as updated through monitoring. Often, decision making follows shortly after a monitoring event. However, it is certainly possible for the cadence of decision making to differ from that of monitoring. In this paper we consider different combinations of annual and biennial decision making, along with annual and biennial monitoring. With biennial decision making decisions are changed only every other year; with biennial monitoring field data are collected only every other year. Different cadences of decision making combine with annual and biennial monitoring to define 4 scenarios. Under each scenario we describe optimal valuations for active and passive adaptive decision making. We highlight patterns in valuation among scenarios, depending on the occurrence of monitoring and decision making events. Differences between years are tied to the fact that every other year a new decision can be made no matter what the scenario, and state information is available to inform that decision. In the subsequent year, however, in 3 of the 4 scenarios either a decision is repeated or monitoring does not occur (or both). There are substantive differences in optimal values among the scenarios, as well as the optimal policies producing those values. Especially noteworthy is the influence of monitoring cadence on valuation in some years. 
We highlight patterns in policy and valuation among the scenarios, and discuss management implications and extensions.
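The intuition that annual decision making weakly dominates biennial decision making can be illustrated with a toy finite-horizon dynamic program. The two-state resource, action set, transition probabilities, and rewards below are hypothetical stand-ins, not the paper's model; the point is only that forcing a decision to be repeated in the off year can never raise the achievable value.

```python
# Toy dynamic program: annual vs biennial decision making for a harvested
# 2-state resource (states 0 = low, 1 = high; actions 0 = rest, 1 = harvest).
P = {  # P[a][s] = (prob next state is 0, prob next state is 1)
    0: [(0.4, 0.6), (0.1, 0.9)],   # resting lets the resource recover
    1: [(0.9, 0.1), (0.7, 0.3)],   # harvesting tends to knock it down
}
R = {0: [0.0, 0.0], 1: [0.5, 2.0]}  # reward of harvesting in low/high state
T = 10                              # planning horizon in years

def step(V, s, a):
    """Immediate reward plus expected continuation value for one year."""
    p0, p1 = P[a][s]
    return R[a][s] + p0 * V[0] + p1 * V[1]

def value_annual():
    V = [0.0, 0.0]
    for _ in range(T):              # a fresh, state-informed decision each year
        V = [max(step(V, s, a) for a in (0, 1)) for s in (0, 1)]
    return V

def value_biennial():
    V = [0.0, 0.0]
    for _ in range(T // 2):         # one decision covers two years
        newV = []
        for s in (0, 1):
            best = float("-inf")
            for a in (0, 1):
                p0, p1 = P[a][s]
                # year-1 reward + expected year-2 reward under the SAME action
                two_year = R[a][s] + p0 * step(V, 0, a) + p1 * step(V, 1, a)
                best = max(best, two_year)
            newV.append(best)
        V = newV
    return V

va, vb = value_annual(), value_biennial()
print(va, vb)   # annual flexibility weakly dominates in every state
```

Because every biennial plan (repeat the same action twice) is also feasible under annual decision making, the annual values are at least as large state by state, mirroring the valuation differences among scenarios discussed above.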
Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie
2018-05-18
Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.
A Framework for Modeling Emerging Diseases to Inform Management
Katz, Rachel A.; Richgels, Katherine L.D.; Walsh, Daniel P.; Grant, Evan H.C.
2017-01-01
The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge.
The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.
Norris, Dennis
2006-04-01
This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers.
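The central computation can be sketched as Bayes' rule over a lexicon: a frequency-based prior combined with the likelihood of noisy letter evidence. The three-word lexicon, frequencies, and uniform-confusion noise model below are illustrative assumptions, not Norris's implementation.

```python
import math

# Minimal sketch of the Bayesian-reader idea: P(word | input) from a
# frequency prior times a noisy-letter likelihood (hypothetical lexicon).
LEXICON = {"cat": 0.70, "cab": 0.25, "car": 0.05}   # assumed word frequencies

def letter_likelihood(observed, true, p_correct=0.8):
    """Noisy-channel likelihood of one observed letter given the true letter
    (errors spread uniformly over the other 25 letters -- a crude assumption)."""
    return p_correct if observed == true else (1 - p_correct) / 25

def posterior(observed_word):
    logp = {}
    for word, prior in LEXICON.items():
        loglik = sum(math.log(letter_likelihood(o, t))
                     for o, t in zip(observed_word, word))
        logp[word] = math.log(prior) + loglik
    z = max(logp.values())                      # stabilize before exponentiating
    unnorm = {w: math.exp(v - z) for w, v in logp.items()}
    total = sum(unnorm.values())
    return {w: v / total for w, v in unnorm.items()}

post = posterior("cat")
print(post)   # the high-frequency word dominates given clean evidence
```

In this framing, frequency effects fall out automatically: a higher-frequency word needs less perceptual evidence for its posterior to cross a response threshold.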
"Utilizing" signal detection theory.
Lynn, Spencer K; Barrett, Lisa Feldman
2014-09-01
What do inferring what a person is thinking or feeling, judging a defendant's guilt, and navigating a dimly lit room have in common? They involve perceptual uncertainty (e.g., a scowling face might indicate anger or concentration, for which different responses are appropriate) and behavioral risk (e.g., a cost to making the wrong response). Signal detection theory describes these types of decisions. In this tutorial, we show how incorporating the economic concept of utility allows signal detection theory to serve as a model of optimal decision making, going beyond its common use as an analytic method. This utility approach to signal detection theory clarifies otherwise enigmatic influences of perceptual uncertainty on measures of decision-making performance (accuracy and optimality) and on behavior (an inverse relationship between bias magnitude and sensitivity optimizes utility). A "utilized" signal detection theory offers the possibility of expanding the phenomena that can be understood within a decision-making framework.
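The utility-based optimal criterion in signal detection theory has a standard closed form for the equal-variance Gaussian model; the payoff numbers below are hypothetical, but the formulas are the textbook ones.

```python
import math

# Utility-based signal detection: the optimal likelihood-ratio threshold
# beta depends on base rates and the utilities of the four outcomes, and
# maps to a criterion location c on the decision axis (noise ~ N(0,1),
# signal ~ N(d',1)).

def optimal_beta(p_signal, u_hit, u_miss, u_cr, u_fa):
    """beta* = [(U_CR - U_FA) / (U_HIT - U_MISS)] * [P(noise) / P(signal)]."""
    return ((u_cr - u_fa) / (u_hit - u_miss)) * ((1 - p_signal) / p_signal)

def criterion_location(beta, d_prime):
    """x where the likelihood ratio equals beta: c = ln(beta)/d' + d'/2."""
    return math.log(beta) / d_prime + d_prime / 2

# symmetric payoffs and equal base rates -> beta = 1, unbiased criterion at d'/2
beta = optimal_beta(0.5, u_hit=1, u_miss=-1, u_cr=1, u_fa=-1)
print(beta, criterion_location(beta, d_prime=2.0))  # 1.0 1.0
```

Raising the cost of false alarms (e.g., u_fa = -3) raises beta above 1 and shifts the criterion conservatively, which is exactly the bias-utility relationship the tutorial describes.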
Probabilistic vs. non-probabilistic approaches to the neurobiology of perceptual decision-making
Drugowitsch, Jan; Pouget, Alexandre
2012-01-01
Optimal binary perceptual decision making requires accumulation of evidence in the form of a probability distribution that specifies the probability of each choice being correct given the evidence so far. Reward rates can then be maximized by stopping the accumulation when the confidence about either option reaches a threshold. Behavioral and neuronal evidence suggests that humans and animals follow such a probabilistic decision strategy, although its neural implementation has yet to be fully characterized. Here we show that diffusion decision models and attractor network models provide an approximation to the optimal strategy only under certain circumstances. In particular, neither model type is sufficiently flexible to encode the reliability of both the momentary and the accumulated evidence, which is a prerequisite to accumulating evidence of time-varying reliability. Probabilistic population codes, in contrast, can encode these quantities and, as a consequence, have the potential to implement the optimal strategy accurately.
Bayesian Phase II optimization for time-to-event data based on historical information.
Bertsche, Anja; Fleischer, Frank; Beyersmann, Jan; Nehmiz, Gerhard
2017-01-01
After exploratory drug development, companies face the decision whether to initiate confirmatory trials based on limited efficacy information. This proof-of-concept decision is typically performed after a Phase II trial studying a novel treatment versus either placebo or an active comparator. The article aims to optimize the design of such a proof-of-concept trial with respect to decision making. We incorporate historical information and develop pre-specified decision criteria accounting for the uncertainty of the observed treatment effect. We optimize these criteria based on sensitivity and specificity, given the historical information. Specifically, time-to-event data are considered in a randomized 2-arm trial with additional prior information on the control treatment. The proof-of-concept criterion uses treatment effect size, rather than significance. Criteria are defined on the posterior distribution of the hazard ratio given the Phase II data and the historical control information. Event times are exponentially modeled within groups, allowing for group-specific conjugate prior-to-posterior calculation. While a non-informative prior is placed on the investigational treatment, the control prior is constructed via the meta-analytic-predictive approach. The design parameters including sample size and allocation ratio are then optimized, maximizing the probability of taking the right decision. The approach is illustrated with an example in lung cancer.
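The conjugate machinery behind such a design can be sketched directly: with exponentially distributed event times, a Gamma(a, b) prior on each group's hazard gives a Gamma(a + events, b + total exposure) posterior, and a go decision can key on the posterior probability that the hazard ratio falls below a threshold. All numbers below (prior strengths, event counts, the 0.8 threshold) are hypothetical illustrations, not values from the article.

```python
import random

# Gamma-exponential conjugate updating and a Monte Carlo proof-of-concept
# criterion on the hazard ratio (illustrative numbers).

def posterior(a, b, n_events, total_time):
    """Gamma posterior (shape, rate) for an exponential hazard."""
    return a + n_events, b + total_time

def prob_hr_below(post_trt, post_ctl, threshold=0.8, draws=20000, seed=1):
    """Monte Carlo estimate of P(lambda_trt / lambda_ctl < threshold)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(draws):
        lam_t = rng.gammavariate(post_trt[0], 1 / post_trt[1])  # scale = 1/rate
        lam_c = rng.gammavariate(post_ctl[0], 1 / post_ctl[1])
        hits += (lam_t / lam_c) < threshold
    return hits / draws

# informative control prior (meta-analytic style), near non-informative
# treatment prior; event counts and exposures are made-up trial data
ctl = posterior(a=30, b=60, n_events=40, total_time=80)
trt = posterior(a=0.1, b=0.1, n_events=25, total_time=90)
go = prob_hr_below(trt, ctl, threshold=0.8)
print(f"P(HR < 0.8) = {go:.2f}")
```

Optimizing the design then amounts to choosing sample size and allocation so that criteria like this one have high probability of firing when the treatment truly works and low probability otherwise.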
Demographics of reintroduced populations: estimation, modeling, and decision analysis
Converse, Sarah J.; Moore, Clinton T.; Armstrong, Doug P.
2013-01-01
Reintroduction can be necessary for recovering populations of threatened species. However, the success of reintroduction efforts has been poorer than many biologists and managers would hope. To increase the benefits gained from reintroduction, management decision making should be couched within formal decision-analytic frameworks. Decision analysis is a structured process for informing decision making that recognizes that all decisions have a set of components—objectives, alternative management actions, predictive models, and optimization methods—that can be decomposed, analyzed, and recomposed to facilitate optimal, transparent decisions. Because the outcome of interest in reintroduction efforts is typically population viability or related metrics, models used in decision analysis efforts for reintroductions will need to include population models. In this special section of the Journal of Wildlife Management, we highlight examples of the construction and use of models for informing management decisions in reintroduced populations. In this introductory contribution, we review concepts in decision analysis, population modeling for analysis of decisions in reintroduction settings, and future directions. Increased use of formal decision analysis, including adaptive management, has great potential to inform reintroduction efforts. Adopting these practices will require close collaboration among managers, decision analysts, population modelers, and field biologists.
Decision-Aiding and Optimization for Vertical Navigation of Long-Haul Aircraft
NASA Technical Reports Server (NTRS)
Patrick, Nicholas J. M.; Sheridan, Thomas B.
1996-01-01
Most decisions made in the cockpit are related to safety and have therefore been proceduralized in order to reduce risk. Very few are made on the basis of a value metric such as economic cost. One that can be shown to be value based, however, is the selection of a flight profile. Fuel consumption and flight time both have a substantial effect on aircraft operating cost, but they cannot be minimized simultaneously. In addition, winds, turbulence, and performance vary widely with altitude and time. These factors make it important, and difficult, for pilots to (a) evaluate the outcomes associated with a particular trajectory before it is flown and (b) decide among possible trajectories. The two elements of this problem considered here are (1) determining what constitutes optimality and (2) finding optimal trajectories. Pilots and dispatchers from major U.S. airlines were surveyed to determine which attributes of the outcome of a flight they considered most important. Avoiding turbulence, for passenger comfort, topped the list of items that were not safety related. Pilots' decision making about the selection of flight profile on the basis of flight time, fuel burn, and exposure to turbulence was then observed. Of the several behavioral and prescriptive decision models invoked to explain the pilots' choices, utility maximization is shown to best reproduce the pilots' decisions. After considering more traditional methods for optimizing trajectories, a novel method is developed using a genetic algorithm (GA) operating on a discrete representation of the trajectory search space. The representation is a sequence of command altitudes, chosen to be compatible with the constraints imposed by Air Traffic Control and with the training given to pilots. Since trajectory evaluation for the GA is performed holistically, a wide class of objective functions can be optimized easily.
The GA also makes it possible to compare the costs associated with different airspace designs and air traffic management policies. A decision aid is proposed that would combine the pilot's notion of optimality with the GA-based optimization, provide the pilot with a number of alternative Pareto-optimal trajectories, and allow the pilot to consider unmodeled attributes and constraints in choosing among them. A solution to the problem of displaying alternatives in a multi-attribute decision space is also presented.
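A GA over a sequence of command altitudes can be sketched compactly. The cost model below (linear fuel term, a turbulent altitude band, a climb/descent penalty) and all parameters are invented stand-ins for the thesis's objective; the point is the representation: each chromosome is a list of discrete flight levels, evaluated holistically.

```python
import random

# Toy GA over a sequence of command altitudes (illustrative cost model).
random.seed(7)
LEVELS = list(range(28, 42, 2))    # flight levels FL280..FL400 in 2,000-ft steps
SEGMENTS = 8                       # command altitudes along the route

def cost(profile):
    fuel = sum((40 - fl) * 0.5 for fl in profile)        # higher cruise is cheaper
    turb = sum(1.5 for fl in profile if 32 <= fl <= 34)  # assumed turbulent band
    climb = sum(abs(a - b) for a, b in zip(profile, profile[1:]))
    return fuel + turb + 0.8 * climb                     # weighted sum of terms

def evolve(pop_size=40, gens=60, mut=0.1):
    pop = [[random.choice(LEVELS) for _ in range(SEGMENTS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]                     # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, SEGMENTS)
            child = a[:cut] + b[cut:]                    # one-point crossover
            if random.random() < mut:                    # altitude mutation
                child[random.randrange(SEGMENTS)] = random.choice(LEVELS)
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```

Because the chromosome is just a list of command altitudes, swapping in a different objective (e.g., one reflecting a different airspace policy) only requires replacing `cost`, which is the holistic-evaluation property noted above.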
Shared decision-making and decision support: their role in obstetrics and gynecology.
Tucker Edmonds, Brownsyne
2014-12-01
To discuss the role for shared decision-making in obstetrics/gynecology and to review evidence on the impact of decision aids on reproductive health decision-making. Among the 155 studies included in a 2014 Cochrane review of decision aids, 31 (29%) addressed reproductive health decisions. Although the majority did not show evidence of an effect on treatment choice, there was a greater uptake of mammography in selected groups of women exposed to decision aids compared with usual care; and a statistically significant reduction in the uptake of hormone replacement therapy among detailed decision aid users compared with simple decision aid users. Studies also found an effect on patient-centered outcomes of care, such as medication adherence, quality-of-life measures, and anxiety scores. In maternity care, only decision analysis tools affected final treatment choice, and patient-directed aids yielded no difference in planned mode of birth after cesarean. There is untapped potential for obstetricians/gynecologists to optimize decision support for reproductive health decisions. Given the limited evidence-base guiding practice, the preference-sensitive nature of reproductive health decisions, and the increase in policy efforts and financial incentives to optimize patients' satisfaction, it is increasingly important for obstetricians/gynecologists to appreciate the role of shared decision-making and decision support in providing patient-centered reproductive healthcare.
Prahl, Andrew; Dexter, Franklin; Braun, Michael T; Van Swol, Lyn
2013-11-01
Because operating room (OR) management decisions with optimal choices are made with ubiquitous biases, decisions are improved with decision-support systems. We reviewed experimental social-psychology studies to explore what an OR leader can do when working with stakeholders lacking interest in learning the OR management science but expressing opinions about decisions, nonetheless. We considered shared information to include the rules-of-thumb (heuristics) that make intuitive sense and often seem "close enough" (e.g., staffing is planned based on the average workload). We considered unshared information to include the relevant mathematics (e.g., staffing calculations). Multiple studies have shown that group discussions focus more on shared than unshared information. Quality decisions are more likely when all group participants share knowledge (e.g., have taken a course in OR management science). Several biases in OR management are caused by humans' limited abilities to estimate tails of probability distributions in their heads. Groups are more susceptible to analogous biases than are educated individuals. Since optimal solutions are not demonstrable without groups sharing common language, only with education of most group members can a knowledgeable individual influence the group. The appropriate model of decision-making is autocratic, with information obtained from stakeholders. Although such decisions are good quality, the leaders often are disliked and the decisions considered unjust. In conclusion, leaders will find the most success if they do not bring OR management operational decisions to groups, but instead act autocratically while obtaining necessary information in 1:1 conversations. The only known route for the leader making such decisions to be considered likable and for the decisions to be considered fair is through colleagues and subordinates learning the management science.
NASA Astrophysics Data System (ADS)
Wu, Zhihui; Chen, Dongyan; Yu, Hui
2016-07-01
In this paper, the problem of the coordination policy is investigated for a vendor-managed consignment inventory supply chain subject to consumer returns. Here, the market demand is assumed to be affected by promotional effort and the consumer return policy. The optimal consignment inventory and the optimal promotional effort level are derived under both decentralized and centralized decisions. Based on the optimal decision conditions, a markdown allowance-promotional cost-sharing contract is investigated to coordinate the supply chain. Subsequently, a comparison between the two extreme policies shows that the full-refund policy dominates the no-return policy when the returning cost and the positive effect of the return policy satisfy certain conditions. Finally, a numerical example is provided to illustrate the impacts of the consumer return policy on the coordination contract and optimal profit, as well as the effectiveness of the proposed supply chain decision.
Uncertainty quantification and optimal decisions
2017-01-01
A mathematical model can be analysed to construct policies for action that are close to optimal for the model. If the model is accurate, such policies will be close to optimal when implemented in the real world. In this paper, the different aspects of an ideal workflow are reviewed: modelling, forecasting, evaluating forecasts, data assimilation and constructing control policies for decision-making. The example of the oil industry is used to motivate the discussion, and other examples, such as weather forecasting and precision agriculture, are used to argue that the same mathematical ideas apply in different contexts. Particular emphasis is placed on (i) uncertainty quantification in forecasting and (ii) how decisions are optimized and made robust to uncertainty in models and judgements. This necessitates full use of the relevant data and, by balancing costs and benefits into the long term, may suggest policies quite different from those relevant to the short term. PMID:28484343
Optimization of Water Resources and Agricultural Activities for Economic Benefit in Colorado
NASA Astrophysics Data System (ADS)
LIM, J.; Lall, U.
2017-12-01
The limited water resources available for irrigation are a key constraint for the important agricultural sector of Colorado's economy. As climate change and groundwater depletion reshape these resources, it is essential to understand the economic potential of water resources under different agricultural production practices. This study uses a linear programming optimization at the county spatial scale and the annual temporal scale to study the optimal allocation of water withdrawals and crop choices. The model, AWASH, reflects streamflow constraints between different extraction points, six field crops, and a distinct irrigation decision for maize and wheat. The optimized decision variables, under different environmental, social, economic, and physical constraints, provide long-term solutions for ground and surface water distribution and for land use decisions so that the state can generate the maximum net revenue. Colorado, one of the largest agricultural producers, is used as a case study, and the sensitivity to water price and to climate variability is explored.
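With a single binding water constraint and per-crop land limits, the core of such a county-scale allocation reduces to a fractional-knapsack LP that can be solved greedily. The sketch below illustrates this with hypothetical crop figures; the actual AWASH model adds streamflow and other constraints that require a full LP solver.

```python
def allocate_water(crops, water_supply):
    """Greedy solution to a one-constraint allocation LP (fractional
    knapsack): maximize sum(revenue_i * acres_i) subject to
    sum(water_i * acres_i) <= water_supply and 0 <= acres_i <= land_i.
    Ordering by revenue per unit of water is optimal here because water
    is the only shared resource."""
    plan, remaining = {}, water_supply
    for name, rev_per_acre, water_per_acre, max_acres in sorted(
            crops, key=lambda c: c[1] / c[2], reverse=True):
        acres = min(max_acres, remaining / water_per_acre)
        plan[name] = acres
        remaining -= acres * water_per_acre
    return plan

# (name, $/acre, acre-ft of water per acre, available acres) -- hypothetical
crops = [("maize", 600, 2.0, 100), ("wheat", 250, 1.0, 200)]
plan = allocate_water(crops, water_supply=250.0)  # maize first: $300/acre-ft
```

With 250 acre-ft available, all maize land is planted first and the remaining water goes to wheat, which is exactly the corner solution an LP would return.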
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
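The ENBS calculation can be sketched for the simplest case: a Normal prior on incremental net benefit, a single adopt/reject decision, and a conjugate Normal update, for which the expected value of sample information has a closed form. All numbers below are illustrative and are not the zanamivir model's parameters.

```python
import math

def norm_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def enbs(n, mu0, sd0, sd_obs, pop, cost_per_obs):
    """ENBS of n observations of an uncertain incremental net benefit with
    prior N(mu0, sd0^2) and per-observation noise sd_obs.  The decision is
    adopt (payoff = posterior mean) versus reject (payoff = 0)."""
    if n == 0:
        return 0.0
    post_var = 1 / (1 / sd0**2 + n / sd_obs**2)
    s = math.sqrt(sd0**2 - post_var)       # sd of the preposterior mean
    # E[max(0, X)] for X ~ N(mu0, s^2), minus the no-information payoff:
    evsi = s * norm_pdf(mu0 / s) + mu0 * norm_cdf(mu0 / s) - max(0.0, mu0)
    return pop * evsi - cost_per_obs * n

# Search the (here one-dimensional) design space for the ENBS maximum.
best_n = max(range(501), key=lambda n: enbs(n, mu0=-50.0, sd0=200.0,
                                            sd_obs=1000.0, pop=10_000,
                                            cost_per_obs=500.0))
```

The optimum balances the diminishing value of extra precision against the linear sampling cost, which is the trade-off the ENBS surface encodes in every dimension of the portfolio problem.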
Jefford, Elaine; Jomeen, Julie; Martin, Colin R
2016-04-28
The ability to act on and justify clinical decisions as autonomous accountable midwifery practitioners is encompassed within many international regulatory frameworks, yet decision-making within midwifery is poorly defined. Decision-making theories from medicine and nursing may have something to offer, but fail to take into consideration midwifery context and philosophy and the decisional autonomy of women. Using an underpinning qualitative methodology, a decision-making framework was developed, which identified Good Clinical Reasoning and Good Midwifery Practice as two conditions necessary to facilitate optimal midwifery decision-making during 2nd stage labour. This study aims to confirm the robustness of the framework and describe the development of Enhancing Decision-making Assessment in Midwifery (EDAM) as a measurement tool through testing of its factor structure, validity and reliability. A cross-sectional design for instrument development and a 2 (country; Australia/UK) x 2 (decision-making; optimal/sub-optimal) between-subjects design for instrument evaluation using exploratory and confirmatory factor analysis, internal consistency and known-groups validity. Two 'expert' maternity panels, based in Australia and the UK, comprising 42 participants, assessed 16 real midwifery care episode vignettes using the empirically derived 26-item framework. Each item was answered on a 5-point Likert scale based on the level of agreement to which the participant felt each item was present in each of the vignettes. Participants were then asked to rate the overall decision-making (optimal/sub-optimal). After factor analysis, the framework was reduced to a 19-item EDAM measure and confirmed as two distinct scales of 'Clinical Reasoning' (CR) and 'Midwifery Practice' (MP). The CR scale comprised two subscales: 'the clinical reasoning process' and 'integration and intervention'.
The MP scale also comprised two subscales: 'women's relationship with the midwife' and 'general midwifery practice'. EDAM would generally appear to be a robust, valid and reliable psychometric instrument for measuring midwifery decision-making, which performs consistently across differing international contexts. The 'women's relationship with the midwife' subscale marginally failed to meet the threshold for determining good instrument reliability, which may be due to its brevity. Further research using larger samples and in a wider international context to confirm the veracity of the instrument's measurement properties and its wider global utility would be advantageous.
Optimal inventories for overhaul of repairable redundant systems - A Markov decision model
NASA Technical Reports Server (NTRS)
Schaefer, M. K.
1984-01-01
A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.
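For an MDP this small, the linear-programming optimum coincides with the best stationary policy, which can be found by direct enumeration. The toy two-state repair model below uses hypothetical costs and transition probabilities, not the avionics data, and is only meant to show the structure a linear-programming solution would optimize over.

```python
import itertools

# Toy two-state spare-parts MDP (hypothetical numbers): state 0 = all parts
# working, state 1 = shortage.  For a problem this small, enumerating the
# stationary policies and evaluating each exactly yields the same optimum
# as the linear-programming formulation would.
P = {0: {"hold": [0.90, 0.10], "repair": [0.99, 0.01]},
     1: {"hold": [0.00, 1.00], "repair": [0.60, 0.40]}}
R = {0: {"hold": 0.0, "repair": -1.0},      # rewards = negative costs
     1: {"hold": -10.0, "repair": -4.0}}    # shortages are expensive
GAMMA = 0.95

def evaluate(policy):
    """Solve v = r + GAMMA * P_pi v for a 2-state chain in closed form."""
    p00, p01 = P[0][policy[0]]
    p10, p11 = P[1][policy[1]]
    r0, r1 = R[0][policy[0]], R[1][policy[1]]
    a, b = 1 - GAMMA * p00, -GAMMA * p01    # entries of (I - GAMMA * P_pi)
    c, d = -GAMMA * p10, 1 - GAMMA * p11
    det = a * d - b * c
    return ((d * r0 - b * r1) / det, (a * r1 - c * r0) / det)

best = max(itertools.product(["hold", "repair"], repeat=2),
           key=lambda pi: sum(evaluate(pi)))
```

With these numbers the optimal policy holds while everything works and repairs only once a shortage occurs, mirroring the kind of state-dependent repair strategy the linear program selects.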
Optimal dynamic control of resources in a distributed system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Krishna, C. M.; Lee, Yann-Hang
1989-01-01
The authors quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function and derive optimal control strategies using Markov decision theory. The control variables treated are quite general; they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of the approach is provided.
Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu; Jablonowski, Christopher; Lake, Larry
Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.
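The contrast between the two methods can be sketched with a toy capacity-design problem under demand uncertainty (all numbers hypothetical): the stochastic program commits to one here-and-now design that maximizes expected profit across scenarios, while per-scenario optimization, as in a Monte Carlo study, shows how the best design shifts with the uncertain input.

```python
# (probability, demand) scenarios and candidate capacities -- hypothetical.
scenarios = [(0.3, 10), (0.5, 20), (0.2, 30)]
capacities = [10, 20, 30]

def profit(capacity, demand):
    # Revenue is earned only on capacity that demand actually uses;
    # capital cost is paid on the full installed capacity.
    return 100.0 * min(capacity, demand) - 60.0 * capacity

# Stochastic programming view: one design, best in expectation.
stochastic_opt = max(capacities,
                     key=lambda c: sum(p * profit(c, d) for p, d in scenarios))

# Monte Carlo-style view: the optimal design in each scenario separately,
# which reveals how sensitive the design is to the uncertain demand.
per_scenario = {d: max(capacities, key=lambda c: profit(c, d))
                for _, d in scenarios}
```

Here the per-scenario optima track demand one-for-one, while the stochastic optimum hedges at the middle capacity, which is the extra information a scenario-by-scenario study provides beyond the single optimal design.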
Doing our best: optimization and the management of risk.
Ben-Haim, Yakov
2012-08-01
Tools and concepts of optimization are widespread in decision-making, design, and planning. There is a moral imperative to "do our best." Optimization underlies theories in physics and biology, and economic theories often presume that economic agents are optimizers. We argue that in decisions under uncertainty, what should be optimized is robustness rather than performance. We discuss the equity premium puzzle from financial economics, and explain that the puzzle can be resolved by using the strategy of satisficing rather than optimizing. We discuss design of critical technological infrastructure, showing that satisficing of performance requirements--rather than optimizing them--is a preferable design concept. We explore the need for disaster recovery capability and its methodological dilemma. The disparate domains--economics and engineering--illuminate different aspects of the challenge of uncertainty and of the significance of robust-satisficing. © 2012 Society for Risk Analysis.
How to deal with climate change uncertainty in the planning of engineering systems
NASA Astrophysics Data System (ADS)
Spackova, Olga; Dittes, Beatrice; Straub, Daniel
2016-04-01
The effect of extreme events such as floods on the infrastructure and built environment is associated with significant uncertainties: these include the uncertain effect of climate change, uncertainty in extreme event frequency estimation due to limited historic data and imperfect models, and, not least, uncertainty in future socio-economic developments, which determine the damage potential. One option for dealing with these uncertainties is the use of adaptable (flexible) infrastructure that can easily be adjusted in the future without excessive costs. The challenge is in quantifying the value of adaptability and in finding the optimal sequence of decisions. Is it worth building a (potentially more expensive) adaptable system that can be adjusted in the future depending on the future conditions? Or is it more cost-effective to make a conservative design without accounting for possible future changes to the system? What is the optimal timing of the decision to build/adjust the system? We develop a quantitative decision-support framework for evaluating alternative infrastructure designs under uncertainties, which: • probabilistically models the uncertain future (through a Bayesian approach) • includes the adaptability of the systems (the costs of future changes) • takes into account the fact that future decisions will be made under uncertainty as well (using pre-posterior decision analysis) • allows identification of the optimal capacity and optimal timing to build/adjust the infrastructure. Application of the decision framework will be demonstrated on an example of flood mitigation planning in Bavaria.
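The value of adaptability can be illustrated with a minimal pre-posterior comparison (hypothetical costs): a flexible design is worth its premium when the expected cost of a later upgrade, weighted by the probability of severe climate change, stays below the cost of building conservatively up front.

```python
# Hypothetical costs in million EUR for three flood-protection designs: an
# adaptable system pays a modest upgrade cost only if climate change turns
# out severe, while a rigid cheap system faces an expensive retrofit instead.
p_severe = 0.3                       # prior probability of severe change

designs = {
    "conservative": {"build": 120.0, "upgrade": 0.0},
    "adaptable":    {"build": 80.0,  "upgrade": 25.0},
    "rigid_cheap":  {"build": 80.0,  "upgrade": 90.0},
}

def expected_cost(d):
    # Pre-posterior view: the upgrade decision is made later, once the
    # climate outcome is known, so its cost is weighted by p_severe.
    return d["build"] + p_severe * d["upgrade"]

best = min(designs, key=lambda k: expected_cost(designs[k]))
```

With these numbers the adaptable design wins (expected cost 87.5 vs 120 and 107); raising `p_severe` or the upgrade cost would eventually tip the choice back to the conservative design, which is exactly the trade-off the framework quantifies.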
Optimization as a Reasoning Strategy for Dealing with Socioscientific Decision-Making Situations
ERIC Educational Resources Information Center
Papadouris, Nicos
2012-01-01
This paper reports on an attempt to help 12-year-old students develop a specific optimization strategy for selecting among possible solutions in socioscientific decision-making situations. We have developed teaching and learning materials for elaborating this strategy, and we have implemented them in two intact classes (N = 48). Prior to and after…
Postoptimality analysis in the selection of technology portfolios
NASA Technical Reports Server (NTRS)
Adumitroaie, Virgil; Shelton, Kacie; Elfes, Alberto; Weisbin, Charles R.
2006-01-01
This paper describes an approach for qualifying optimal technology portfolios obtained with a multi-attribute decision support system. The goal is twofold: to gauge the degree of confidence in the optimal solution and to provide the decision-maker with an array of viable selection alternatives, which take into account input uncertainties and possibly satisfy non-technical constraints.
Postoptimality Analysis in the Selection of Technology Portfolios
NASA Technical Reports Server (NTRS)
Adumitroaie, Virgil; Shelton, Kacie; Elfes, Alberto; Weisbin, Charles R.
2006-01-01
This slide presentation reviews a process of postoptimality analysis in the selection of technology portfolios. The rationale for the analysis stems from the need for consistent, transparent and auditable decision-making processes and tools. The methodology is used to assure that project investments are selected through an optimization of net mission value. The main intent of the analysis is to gauge the degree of confidence in the optimal solution and to provide the decision maker with an array of viable selection alternatives which take into account input uncertainties and possibly satisfy non-technical constraints. A few examples of the analysis are reviewed. The goal of the postoptimality study is to enhance and improve the decision-making process by providing additional qualifications and substitutes to the optimal solution.
Neural signatures of experience-based improvements in deterministic decision-making.
Tremel, Joshua J; Laurent, Patryk A; Wolk, David A; Wheeler, Mark E; Fiez, Julie A
2016-12-15
Feedback about our choices is a crucial part of how we gather information and learn from our environment. It provides key information about decision experiences that can be used to optimize future choices. However, our understanding of the processes through which feedback translates into improved decision-making is lacking. Using neuroimaging (fMRI) and cognitive models of decision-making and learning, we examined the influence of feedback on multiple aspects of decision processes across learning. Subjects learned correct choices to a set of 50 word pairs across eight repetitions of a concurrent discrimination task. Behavioral measures were then analyzed with both a drift-diffusion model and a reinforcement learning model. Parameter values from each were then used as fMRI regressors to identify regions whose activity fluctuates with specific cognitive processes described by the models. The patterns of intersecting neural effects across models support two main inferences about the influence of feedback on decision-making. First, frontal, anterior insular, fusiform, and caudate nucleus regions behave like performance monitors, reflecting errors in performance predictions that signal the need for changes in control over decision-making. Second, temporoparietal, supplementary motor, and putamen regions behave like mnemonic storage sites, reflecting differences in learned item values that inform optimal decision choices. As information about optimal choices is accrued, these neural systems dynamically adjust, likely shifting the burden of decision processing from controlled performance monitoring to bottom-up, stimulus-driven choice selection. Collectively, the results provide a detailed perspective on the fundamental ability to use past experiences to improve future decisions. Copyright © 2016 Elsevier B.V. All rights reserved.
Optimal Limited Contingency Planning
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Smith, David E.
2003-01-01
For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.
Expected value information improves financial risk taking across the adult life span.
Samanez-Larkin, Gregory R; Wagner, Anthony D; Knutson, Brian
2011-04-01
When making decisions, individuals must often compensate for cognitive limitations, particularly in the face of advanced age. Recent findings suggest that age-related variability in striatal activity may increase financial risk-taking mistakes in older adults. In two studies, we sought to further characterize neural contributions to optimal financial risk taking and to determine whether decision aids could improve financial risk taking. In Study 1, neuroimaging analyses revealed that individuals whose mesolimbic activation correlated with the expected value estimates of a rational actor made more optimal financial decisions. In Study 2, presentation of expected value information improved decision making in both younger and older adults, but the addition of a distracting secondary task had little impact on decision quality. Remarkably, provision of expected value information improved the performance of older adults to match that of younger adults at baseline. These findings are consistent with the notion that mesolimbic circuits play a critical role in optimal choice, and imply that providing simplified information about expected value may improve financial risk taking across the adult life span.
The Optimal Observation Problem applied to a rating curve estimation including the "cost-to-wait"
NASA Astrophysics Data System (ADS)
Raso, Luciano; Werner, Micha; Weijs, Steven
2013-04-01
In order to manage a system, a decision maker (DM) tries to make the best decision under uncertainty, having only partial knowledge of the effects of his/her decisions. Observations reduce uncertainty, but are costly. Deciding what to observe and when to stop observing is a complementary problem that the DM has to face. The Optimal Observation Problem (OOP) offers a solution to the questions: (1) which observation is most effective? and (2) is the next observation worth its cost? We show an application of the OOP to a rating curve estimation in the White Carter River (Scotland). The cost of extra gauging is compensated by the value of better decisions, which reduce the costs due to floods. The observational decision is then whether to gauge, and when. In the application, we include the "cost-to-wait" in the cost structure. The algorithm thus finds an optimal trade-off between obtaining less informative data now and waiting for more informative data later. The OOP can be used to plan a measurement campaign, also taking into account that the rating curve can change.
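The "is the next observation worth its cost?" test can be sketched with a discrete toy problem: the expected value of perfect information (an upper bound on what any gauging campaign could be worth) is compared against the cost of gauging now versus gauging later plus the cost-to-wait. All numbers are hypothetical, and the paper's actual method uses imperfect rather than perfect information.

```python
# Two hidden states of the rating curve and two flood-management actions,
# with losses in arbitrary cost units (hypothetical).
prior = {"low": 0.5, "high": 0.5}
loss = {("protect", "low"): 10, ("protect", "high"): 10,
        ("ignore",  "low"):  0, ("ignore",  "high"): 40}

def bayes_cost(belief):
    """Expected loss of the best action under a belief over states."""
    return min(sum(belief[s] * loss[(a, s)] for s in belief)
               for a in ("protect", "ignore"))

no_obs = bayes_cost(prior)                        # decide on the prior alone
perfect = sum(prior[s] * min(loss[(a, s)] for a in ("protect", "ignore"))
              for s in prior)                     # decide knowing the state
evpi = no_obs - perfect                           # value of (perfect) info

gauge_now_cost, gauge_later_cost, cost_to_wait = 3.0, 1.0, 4.0
worth_now = evpi > gauge_now_cost
worth_wait = evpi > gauge_later_cost + cost_to_wait
```

Here gauging now is worthwhile (value 5 exceeds cost 3), but waiting is not once the cost-to-wait is added, which is the kind of trade-off the OOP resolves observation by observation.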
NASA Astrophysics Data System (ADS)
Holmes, Philip; Eckhoff, Philip; Wong-Lin, K. F.; Bogacz, Rafal; Zacksenhouse, Miriam; Cohen, Jonathan D.
2010-03-01
We describe how drift-diffusion (DD) processes - systems familiar in physics - can be used to model evidence accumulation and decision-making in two-alternative, forced choice tasks. We sketch the derivation of these stochastic differential equations from biophysically-detailed models of spiking neurons. DD processes are also continuum limits of the sequential probability ratio test and are therefore optimal in the sense that they deliver decisions of specified accuracy in the shortest possible time. This leaves open the critical balance of accuracy and speed. Using the DD model, we derive a speed-accuracy tradeoff that optimizes reward rate for a simple perceptual decision task, compare human performance with this benchmark, and discuss possible reasons for prevalent sub-optimality, focussing on the question of uncertain estimates of key parameters. We present an alternative theory of robust decisions that allows for uncertainty, and show that its predictions provide better fits to experimental data than a more prevalent account that emphasises a commitment to accuracy. The article illustrates how mathematical models can illuminate the neural basis of cognitive processes.
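For the pure DD process with drift a, symmetric thresholds ±z and noise c, the error rate and mean decision time have standard closed forms, and the reward-rate-optimal threshold can be found by a simple search. Parameter values below are illustrative, not fitted to any experiment.

```python
import math

def error_rate(a, z, c):
    # Probability of hitting the wrong boundary for the pure DD process.
    return 1.0 / (1.0 + math.exp(2 * a * z / c**2))

def decision_time(a, z, c):
    # Mean first-passage time to either boundary.
    return (z / a) * math.tanh(a * z / c**2)

def reward_rate(a, z, c, t0, d):
    # Fraction correct per unit of total trial time: decision time plus
    # non-decision time t0 and inter-trial delay d.
    return (1 - error_rate(a, z, c)) / (decision_time(a, z, c) + t0 + d)

a, c, t0, d = 1.0, 1.0, 0.3, 1.0
thresholds = [i / 200 for i in range(1, 601)]     # z from 0.005 to 3.0
z_opt = max(thresholds, key=lambda z: reward_rate(a, z, c, t0, d))
```

The optimal threshold is intermediate: very low thresholds answer fast but near chance, very high ones are accurate but slow, and the reward-rate maximum sits at the speed-accuracy balance the article analyzes.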
Development of transportation asset management decision support tools : final report.
DOT National Transportation Integrated Search
2017-08-09
This study developed a web-based prototype decision support platform to demonstrate the benefits of transportation asset management in monitoring asset performance, supporting asset funding decisions, planning budget tradeoffs, and optimizing resourc...
Optimal condition sampling for a network of infrastructure facilities.
DOT National Transportation Integrated Search
2011-12-31
In response to the developments in inspection technologies, infrastructure decision-making methods evolved whereby the optimum combination of inspection decisions on the one hand and maintenance and rehabilitation decisions on the other are determine...
Method for oxygen reduction in a uranium-recovery process. [US DOE patent application
Hurst, F.J.; Brown, G.M.; Posey, F.A.
1981-11-04
An improvement in effecting uranium recovery from phosphoric acid solutions is provided by sparging dissolved oxygen contained in solutions and solvents used in a reductive stripping stage with an effective volume of a nonoxidizing gas before the introduction of the solutions and solvents into the stage. Effective volumes of nonoxidizing gases, selected from the group consisting of argon, carbon dioxide, carbon monoxide, helium, hydrogen, nitrogen, sulfur dioxide, and mixtures thereof, displace oxygen from the solutions and solvents, thereby reducing deleterious effects of oxygen such as excessive consumption of elemental or ferrous iron and accumulation of complex iron phosphates or cruds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFreniere, Lorraine M.
In 2008-2009, to address the carbon tetrachloride contamination detected on its former property, the CCC/USDA implemented a source area cleanup in accord with the document Interim Measure Work Plan/Design for Agra, Kansas (IMWP/D; Argonne 2008). The cleanup involves five large-diameter boreholes (LDBs) coupled with soil vapor extraction (SVE) and air sparge (AS) systems. The work plan was approved by the Kansas Department of Health and Environment (KDHE) in November 2008 (KDHE 2008b), and operation began in May 2009.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFreniere, Lorraine
In 2008-2009, to address the carbon tetrachloride contamination detected on its former property, the CCC/USDA implemented a source area cleanup in accord with the document Interim Measure Work Plan/Design for Agra, Kansas (IMWP/D; Argonne 2008). The cleanup involves five large-diameter boreholes (LDBs) coupled with soil vapor extraction (SVE) and air sparging (AS). The work plan was approved by the Kansas Department of Health and Environment (KDHE) in November 2008 (KDHE 2008b), and operation began in May 2009.
II. Electrodeposition/removal of nickel in a spouted electrochemical reactor.
Grimshaw, Pengpeng; Calo, Joseph M; Shirvanian, Pezhman A; Hradil, George
2011-08-17
An investigation is presented of nickel electrodeposition from acidic solutions in a cylindrical spouted electrochemical reactor. The effects of solution pH, temperature, and applied current on nickel removal/recovery rate, current efficiency, and corrosion rate of deposited nickel on the cathodic particles were explored under galvanostatic operation. Nitrogen sparging was used to decrease the dissolved oxygen concentration in the electrolyte in order to reduce the nickel corrosion rate, thereby increasing the nickel electrowinning rate and current efficiency. A numerical model of electrodeposition, including corrosion and mass transfer in the particulate cathode moving bed, is presented that describes the behavior of the experimental net nickel electrodeposition data quite well.
Decision-Making Strategies for College Students
ERIC Educational Resources Information Center
Morey, Janis T.; Dansereau, Donald F.
2010-01-01
College students' decision making is often less than optimal and sometimes leads to negative consequences. The effectiveness of two strategies for improving student decision making--node-link mapping and social perspective taking (SPT)--are examined. Participants using SPT were significantly better able to evaluate decision options and develop…
Liang, Jie; Zhong, Minzhou; Zeng, Guangming; Chen, Gaojie; Hua, Shanshan; Li, Xiaodong; Yuan, Yujie; Wu, Haipeng; Gao, Xiang
2017-02-01
Land-use change has a direct impact on ecosystem services and alters ecosystem service values (ESVs). Ecosystem services analysis is beneficial for land management and decisions. However, the application of ESVs in land use decision-making is scarce. In this paper, a method integrating ESVs to balance future ecosystem-service benefit and risk is developed to optimize investment in land for ecological conservation in land use planning. Using ecological conservation in land use planning in Changsha as an example, ESVs are regarded as the expected ecosystem-service benefit, and the uncertainty of land use change is regarded as risk. This method can optimize the allocation of investment in land to improve ecological benefit. The result shows that investment should be weighted toward Liuyang City to obtain a higher benefit, but should also be shifted from Liuyang City to other regions to reduce risk. In practice, lower and upper limits for weight distribution, which affect the optimal outcome and the selection of investment allocation, should be set for investment. This method can reveal the optimal spatial allocation of investment to maximize the expected ecosystem-service benefit at a given level of risk or to minimize risk at a given level of expected ecosystem-service benefit. Our optimal analyses highlight tradeoffs between future ecosystem-service benefit and the uncertainty of land use change in land use decisions. Copyright © 2016 Elsevier B.V. All rights reserved.
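With linear benefit and risk terms, the optimal weight sits at one of the imposed bounds, which is why the paper's lower and upper limits on the weight distribution matter. The sketch below uses invented coefficients to show the allocation flipping between the bounds as risk aversion grows.

```python
# Illustrative per-unit benefit and risk coefficients for two regions; the
# decision variable is the share of the conservation budget sent to Liuyang.
regions = {"Liuyang": {"benefit": 1.4, "risk": 0.9},
           "Other":   {"benefit": 1.0, "risk": 0.4}}

def objective(w_liuyang, risk_aversion):
    # Expected ecosystem-service benefit minus a risk penalty.
    w = {"Liuyang": w_liuyang, "Other": 1.0 - w_liuyang}
    benefit = sum(w[r] * regions[r]["benefit"] for r in regions)
    risk = sum(w[r] * regions[r]["risk"] for r in regions)
    return benefit - risk_aversion * risk

def best_weight(risk_aversion, lower=0.2, upper=0.8, step=0.01):
    # Grid search within the imposed bounds; with linear terms the optimum
    # always lands on one of the bounds.
    n = int(round((upper - lower) / step))
    grid = [lower + step * i for i in range(n + 1)]
    return max(grid, key=lambda w: objective(w, risk_aversion))
```

A risk-neutral planner puts the maximum allowed share in the high-benefit region; a sufficiently risk-averse one shifts to the minimum, matching the qualitative result for Liuyang City.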
Why, When, and How to Take into Account the Uncertainty Involved in Career Decisions.
ERIC Educational Resources Information Center
Gati, Itamar
1990-01-01
Theoretically analyzes career decisions under uncertainty, when career decision maker ranks options rather than choosing best one. Found that how decisions were framed affected ranking of options and quality of decisions. Analysis showed that the rank order of options in optimal ranking always coincided with the rank order of the options by their…
Modeling human decision making behavior in supervisory control
NASA Technical Reports Server (NTRS)
Tulga, M. K.; Sheridan, T. B.
1977-01-01
An optimal decision control model was developed, based primarily on a dynamic programming algorithm that looks at all the available task possibilities, charts an optimal trajectory, commits itself to the first step (i.e., follows the optimal trajectory during the next time period), and then iterates the calculation. A Bayesian estimator was included which estimates the tasks that might occur in the immediate future and provides this information to the dynamic programming routine. Preliminary trials comparing the human subject's performance to that of the optimal model show a great similarity, but indicate that the human skips certain movements which require a quick change in strategy.
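The plan-commit-replan loop described above can be sketched in a few lines. This is a minimal receding-horizon illustration with hypothetical tasks given as (duration, deadline, value) tuples; the Bayesian task estimator is omitted, and exhaustive permutation search stands in for the paper's dynamic programming algorithm:

```python
from itertools import permutations

def plan(tasks, t0=0.0):
    """Chart an optimal trajectory: the ordering that maximizes total
    reward from tasks finished before their deadlines (brute force)."""
    best_reward, best_order = -1.0, ()
    for order in permutations(tasks):
        t, reward = t0, 0.0
        for dur, deadline, value in order:
            t += dur
            if t <= deadline:
                reward += value
        if reward > best_reward:
            best_reward, best_order = reward, order
    return best_order, best_reward

def supervise(tasks):
    """Receding-horizon loop: plan, commit to the first step only, re-plan."""
    t, done = 0.0, []
    remaining = list(tasks)
    while remaining:
        order, _ = plan(remaining, t)
        first = order[0]
        t += first[0]                # execute only the first task...
        done.append(first)
        remaining.remove(first)      # ...then iterate the calculation
    return done

# Hypothetical (duration, deadline, value) tasks
tasks = [(2.0, 3.0, 5.0), (1.0, 2.0, 3.0), (3.0, 9.0, 4.0)]
print(supervise(tasks))
```

In a live supervisory setting, new tasks predicted by the estimator would be appended to `remaining` between iterations, which is what makes committing to only the first step worthwhile.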
Chen, Xudong; Xu, Zhongwen; Yao, Liming; Ma, Ning
2018-03-05
This study considers the two factors of environmental protection and economic benefits to address municipal sewage treatment. Based on considerations regarding the sewage treatment plant construction site, processing technology, capital investment, operation costs, water pollutant emissions, water quality and other indicators, we establish a general multi-objective decision model for optimizing municipal sewage treatment plant construction. Using the construction of a sewage treatment plant in a suburb of Chengdu as an example, this paper tests the general model of multi-objective decision-making for the sewage treatment plant construction by implementing a genetic algorithm. The results show the applicability and effectiveness of the multi-objective decision model for the sewage treatment plant. This paper provides decision and technical support for the optimization of municipal sewage treatment.
NASA Astrophysics Data System (ADS)
Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.
2015-08-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
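The screening idea can be illustrated with a standard Saltelli-style estimator of first-order Sobol indices. The three-variable toy model below is hypothetical, not the reservoir system; variables whose index falls below a threshold are screened out of the optimization:

```python
import numpy as np

def first_order_sobol(f, d, n=10000, seed=0):
    """Saltelli-style estimator of first-order Sobol indices S_i for a
    model f evaluated on the unit hypercube [0, 1]^d."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # resample only coordinate i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Hypothetical 3-variable model: x0 dominates, x1 barely matters, x2 is inert.
model = lambda X: 4.0 * X[:, 0] + 0.1 * X[:, 1]
S = first_order_sobol(model, d=3)
sensitive = [i for i in range(3) if S[i] > 0.05]   # screening step
print(np.round(S, 2), sensitive)
```

Only the surviving indices in `sensitive` would be kept as decision variables in the reduced optimization problem, mirroring the sensitivity-informed dimension reduction described above.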
Time optimized path-choice in the termite hunting ant Megaponera analis.
Frank, Erik T; Hönle, Philipp O; Linsenmair, K Eduard
2018-05-10
Trail network systems among ants have received a lot of scientific attention due to their various applications in problem solving of networks. Recent studies have shown that ants select the fastest available path when facing different velocities on different substrates, rather than the shortest distance. The progress of decision-making by these ants is determined by pheromone-based maintenance of paths, which is a collective decision. However, path optimization through individual decision-making remains mostly unexplored. Here we present the first study of time-optimized path selection via individual decision-making by scout ants. Megaponera analis scouts search for termite foraging sites and lead highly organized raid columns to them. The path of the scout determines the path of the column. Through installation of artificial roads around M. analis nests we were able to influence the pathway choice of the raids. After road installation 59% of all recorded raids took place completely or partly on the road, instead of the direct, i.e. distance-optimized, path through grass from the nest to the termites. The raid velocity on the road was more than double the grass velocity, the detour thus saved 34.77±23.01% of the travel time compared to a hypothetical direct path. The pathway choice of the ants was similar to a mathematical model of least time allowing us to hypothesize the underlying mechanisms regulating the behavior. Our results highlight the importance of individual decision-making in the foraging behavior of ants and show a new procedure of pathway optimization. © 2018. Published by The Company of Biologists Ltd.
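The least-time choice reduces to comparing summed per-leg travel times. The geometry and velocities below are hypothetical, with the road velocity set to double the grass velocity as observed in the study:

```python
def travel_time(legs):
    """Total travel time over a path given as (distance, velocity) legs."""
    return sum(d / v for d, v in legs)

v_grass = 1.0           # assumed scout velocity in grass (arbitrary units)
v_road = 2.0 * v_grass  # raid velocity roughly doubled on roads in the study

# Hypothetical geometry: termites reachable via 10 m of grass directly,
# or via a detour of 4 m grass + 9 m road + 1 m grass.
direct = [(10.0, v_grass)]
detour = [(4.0, v_grass), (9.0, v_road), (1.0, v_grass)]

t_direct, t_detour = travel_time(direct), travel_time(detour)
best = "detour" if t_detour < t_direct else "direct"
saving = 1.0 - t_detour / t_direct
print(best, round(saving, 3))
```

A time-minimizing scout picks whichever path has the smaller summed time, even when that path is longer in distance.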
Hybrid algorithms for fuzzy reverse supply chain network design.
Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution in a multi-phase, multi-product reverse supply chain that handles defects returned to original manufacturers. It also develops hybrid algorithms, namely Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. A case study of a multi-phase, multi-product reverse supply chain network demonstrates the suitability of the decision model and the applicability of the algorithms. The hybrid algorithms showed excellent solving capability compared with the original GA and PSO methods.
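A toy hybrid in the spirit of PSO-GA can be sketched as follows: PSO velocity updates explore, and one GA-style crossover/mutation step per generation recombines personal bests. The hybridization scheme, the sphere test function, and all parameters here are assumptions for illustration, not the paper's actual algorithm:

```python
import random

def hybrid_pso_ga(f, dim=2, pop=30, iters=60, seed=3):
    """Minimize f on [-5, 5]^dim with a PSO loop plus a GA-style
    crossover/mutation step each generation (toy hybrid)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    P = [x[:] for x in X]                          # personal bests
    g = min(P, key=f)[:]                           # global best
    for _ in range(iters):
        for i in range(pop):
            for d in range(dim):                   # PSO velocity update
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
        # GA step: crossover two personal bests, mutate, replace the worst
        a, b = rng.sample(P, 2)
        child = [c + rng.gauss(0, 0.1) for c in
                 (rng.choice(pair) for pair in zip(a, b))]
        worst = max(range(pop), key=lambda i: f(X[i]))
        if f(child) < f(X[worst]):
            X[worst] = child
        g = min(P + [g], key=f)[:]
    return g, f(g)

sphere = lambda x: sum(c * c for c in x)
best, val = hybrid_pso_ga(sphere)
print([round(c, 3) for c in best], round(val, 6))
```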
ERIC Educational Resources Information Center
Snowden, Jessica A.; Leon, Scott C.; Bryant, Fred B.; Lyons, John S.
2007-01-01
This study explored clinical and nonclinical predictors of inpatient hospital admission decisions across a sample of children in foster care over 4 years (N = 13,245). Forty-eight percent of participants were female and the mean age was 13.4 (SD = 3.5 years). Optimal data analysis (Yarnold & Soltysik, 2005) was used to construct a nonlinear…
Li, Hui; Wang, Chuanxu; Shang, Meng; Ou, Wei
2017-01-01
In this paper, we examine the influence of vertical and horizontal cooperation models on the optimal decisions and performance of a low-carbon closed-loop supply chain (CLSC) with one manufacturer and two retailers, studying optimal operation in terms of competitive pricing, low-carbon promotion, carbon emission reduction, used-product collection, and profits. We consider the completely decentralized model, the M-R vertical cooperation model, the R-R horizontal cooperation model, the M-R-R vertical and horizontal cooperation model, and the completely centralized model, and identify the optimal decision results and profits for each. A systematic comparison and numerical analysis show that the completely centralized model yields the best optimal decision results among all models. Under semi-cooperation, the M-R vertical cooperation model is positive, the R-R horizontal cooperation model is passive, and the benefit of the M-R-R vertical and horizontal cooperation model decreases with increasing competitive intensity in used-product return, carbon emission reduction level, low-carbon promotion effort, and the profits of the manufacturer and the entire supply chain. PMID:29104268
Ertefaie, Ashkan; Shortreed, Susan; Chakraborty, Bibhas
2016-06-15
Q-learning is a regression-based approach that uses longitudinal data to construct dynamic treatment regimes, which are sequences of decision rules that use patient information to inform future treatment decisions. An optimal dynamic treatment regime is composed of a sequence of decision rules that indicate how to optimally individualize treatment using the patients' baseline and time-varying characteristics to optimize the final outcome. Constructing optimal dynamic regimes using Q-learning depends heavily on the assumption that regression models at each decision point are correctly specified; yet model checking in the context of Q-learning has been largely overlooked in the current literature. In this article, we show that residual plots obtained from standard Q-learning models may fail to adequately check the quality of the model fit. We present a modified Q-learning procedure that accommodates residual analyses using standard tools. We present simulation studies showing the advantage of the proposed modification over standard Q-learning. We illustrate this new Q-learning approach using data collected from a sequential multiple assignment randomized trial of patients with schizophrenia. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
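Standard (unmodified) Q-learning backward induction can be sketched on synthetic two-stage data with linear working models: fit the stage-2 regression, form the pseudo-outcome under the optimal stage-2 decision, then fit the stage-1 regression. The data-generating model and coefficients below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Synthetic two-stage study: states X1, X2; randomized treatments in {-1, +1}.
X1 = rng.normal(size=n)
A1 = rng.choice([-1.0, 1.0], size=n)
X2 = 0.5 * X1 + rng.normal(scale=0.5, size=n)
A2 = rng.choice([-1.0, 1.0], size=n)
Y = X2 + A2 * X2 + 0.3 * A1 * X1 + rng.normal(scale=0.5, size=n)

def fit(F, y):
    """Least-squares regression coefficients."""
    return np.linalg.lstsq(F, y, rcond=None)[0]

# Stage-2 Q-function: regress Y on the history and treatment A2.
F2 = np.column_stack([np.ones(n), X1, A1, A1 * X1, X2, A2, A2 * X2])
b2 = fit(F2, Y)
d2 = np.sign(b2[5] + b2[6] * X2)        # stage-2 rule: maximize the A2 part

# Pseudo-outcome: predicted outcome under the optimal stage-2 decision.
Y1 = F2[:, :5] @ b2[:5] + np.abs(b2[5] + b2[6] * X2)

# Stage-1 Q-function: regress the pseudo-outcome on stage-1 information.
F1 = np.column_stack([np.ones(n), X1, A1, A1 * X1])
b1 = fit(F1, Y1)
d1 = np.sign(b1[2] + b1[3] * X1)        # stage-1 rule
print(np.round(b2, 2), np.round(b1, 2))
```

The fitted rules recover the true regime (treat with the sign of X2 at stage 2, the sign of X1 at stage 1); the paper's point is that checking these regression fits needs more than standard residual plots.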
Converse, Sarah J.; Shelley, Kevin J.; Morey, Steve; Chan, Jeffrey; LaTier, Andrea; Scafidi, Carolyn; Crouse, Deborah T.; Runge, Michael C.
2011-01-01
The resources available to support conservation work, whether time or money, are limited. Decision makers need methods to help them identify the optimal allocation of limited resources to meet conservation goals, and decision analysis is uniquely suited to assist with the development of such methods. In recent years, a number of case studies have been described that examine optimal conservation decisions under fiscal constraints; here we develop methods to look at other types of constraints, including limited staff and regulatory deadlines. In the US, Section Seven consultation, an important component of protection under the federal Endangered Species Act, requires that federal agencies overseeing projects consult with federal biologists to avoid jeopardizing species. A benefit of consultation is negotiation of project modifications that lessen impacts on species, so staff time allocated to consultation supports conservation. However, some offices have experienced declining staff, potentially reducing the efficacy of consultation. This is true of the US Fish and Wildlife Service's Washington Fish and Wildlife Office (WFWO) and its consultation work on federally-threatened bull trout (Salvelinus confluentus). To improve effectiveness, WFWO managers needed a tool to help allocate this work to maximize conservation benefits. We used a decision-analytic approach to score projects based on the value of staff time investment, and then identified an optimal decision rule for how scored projects would be allocated across bins, where projects in different bins received different time investments. We found that, given current staff, the optimal decision rule placed 80% of informal consultations (those where expected effects are beneficial, insignificant, or discountable) in a short bin where they would be completed without negotiating changes. The remaining 20% would be placed in a long bin, warranting an investment of seven days, including time for negotiation. 
For formal consultations (those where expected effects are significant), 82% of projects would be placed in a long bin, with an average time investment of 15 days. The WFWO is using this decision-support tool to help allocate staff time. Because workload allocation decisions are iterative, we describe a monitoring plan designed to increase the tool's efficacy over time. This work has general application beyond Section Seven consultation, in that it provides a framework for efficient investment of staff time in conservation when such time is limited and when regulatory deadlines prevent an unconstrained approach. © 2010.
Decision science and cervical cancer.
Cantor, Scott B; Fahs, Marianne C; Mandelblatt, Jeanne S; Myers, Evan R; Sanders, Gillian D
2003-11-01
Mathematical modeling is an effective tool for guiding cervical cancer screening, diagnosis, and treatment decisions for patients and policymakers. This article describes the use of mathematical modeling as outlined in five presentations from the Decision Science and Cervical Cancer session of the Second International Conference on Cervical Cancer held at The University of Texas M. D. Anderson Cancer Center, April 11-14, 2002. The authors provide an overview of mathematical modeling, especially decision analysis and cost-effectiveness analysis, and examples of how it can be used for clinical decision making regarding the prevention, diagnosis, and treatment of cervical cancer. Included are applications as well as theory regarding decision science and cervical cancer. Mathematical modeling can answer such questions as the optimal frequency for screening, the optimal age to stop screening, and the optimal way to diagnose cervical cancer. Results from one mathematical model demonstrated that a vaccine against high-risk strains of human papillomavirus was a cost-effective use of resources, and discussion of another model demonstrated the importance of collecting direct non-health care costs and time costs for cost-effectiveness analysis. Research presented indicated that care must be taken when applying the results of population-wide, cost-effectiveness analyses to reduce health disparities. Mathematical modeling can encompass a variety of theoretical and applied issues regarding decision science and cervical cancer. The ultimate objective of using decision-analytic and cost-effectiveness models is to identify ways to improve women's health at an economically reasonable cost. Copyright 2003 American Cancer Society.
Value of information and pricing new healthcare interventions.
Willan, Andrew R; Eckermann, Simon
2012-06-01
Previous application of value-of-information methods to optimal clinical trial design have predominantly taken a societal decision-making perspective, implicitly assuming that healthcare costs are covered through public expenditure and trial research is funded by government or donation-based philanthropic agencies. In this paper, we consider the interaction between interrelated perspectives of a societal decision maker (e.g. the National Institute for Health and Clinical Excellence [NICE] in the UK) charged with the responsibility for approving new health interventions for reimbursement and the company that holds the patent for a new intervention. We establish optimal decision making from societal and company perspectives, allowing for trade-offs between the value and cost of research and the price of the new intervention. Given the current level of evidence, there exists a maximum (threshold) price acceptable to the decision maker. Submission for approval with prices above this threshold will be refused. Given the current level of evidence and the decision maker's threshold price, there exists a minimum (threshold) price acceptable to the company. If the decision maker's threshold price exceeds the company's, then current evidence is sufficient since any price between the thresholds is acceptable to both. On the other hand, if the decision maker's threshold price is lower than the company's, then no price is acceptable to both and the company's optimal strategy is to commission additional research. The methods are illustrated using a recent example from the literature.
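The threshold-price logic of the final two sentences can be stated as a small decision function. The midpoint price returned on approval is purely illustrative (any price between the two thresholds is acceptable to both parties); the function names and numbers are assumptions:

```python
def pricing_outcome(p_reimburser_max, p_company_min):
    """If the decision maker's maximum acceptable (threshold) price meets
    or exceeds the company's minimum, current evidence suffices and any
    price between the thresholds is acceptable to both; otherwise the
    company's optimal strategy is to commission additional research."""
    if p_reimburser_max >= p_company_min:
        return ("approve", (p_company_min + p_reimburser_max) / 2.0)
    return ("commission further research", None)

print(pricing_outcome(120.0, 100.0))   # any price in [100, 120] is mutually acceptable
print(pricing_outcome(90.0, 100.0))
```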
Interpretable Decision Sets: A Joint Framework for Description and Prediction
Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure
2016-01-01
One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. PMID:27853627
Optimized model tuning in medical systems.
Kléma, Jirí; Kubalík, Jirí; Lhotská, Lenka
2005-12-01
In medical systems it is often advantageous to utilize specific problem situations (cases) in addition to or instead of a general model. Decisions are then based on relevant past cases retrieved from a case memory. The reliability of such decisions depends directly on the ability to identify cases of practical relevance to the current situation. This paper discusses issues of automated tuning in order to obtain a proper definition of mutual case similarity in a specific medical domain. The main focus is on a reasonably time-consuming optimization of the parameters that determine case retrieval and further utilization in decision making/prediction. The two case studies, mortality prediction after cardiological intervention and resource allocation at a spa, document that the optimization process is influenced by various characteristics of the problem domain.
Piéron’s Law and Optimal Behavior in Perceptual Decision-Making
van Maanen, Leendert; Grasman, Raoul P. P. P.; Forstmann, Birte U.; Wagenmakers, Eric-Jan
2012-01-01
Piéron’s Law is a psychophysical regularity in signal detection tasks that states that mean response times decrease as a power function of stimulus intensity. In this article, we extend Piéron’s Law to perceptual two-choice decision-making tasks, and demonstrate that the law holds as the discriminability between two competing choices is manipulated, even though the stimulus intensity remains constant. This result is consistent with predictions from a Bayesian ideal observer model. The model assumes that in order to respond optimally in a two-choice decision-making task, participants continually update the posterior probability of each response alternative, until the probability of one alternative crosses a criterion value. In addition to predictions for two-choice decision-making tasks, we extend the ideal observer model to predict Piéron’s Law in signal detection tasks. We conclude that Piéron’s Law is a general phenomenon that may be caused by optimality constraints. PMID:22232572
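The ideal observer's stopping rule can be sketched as log-odds accumulation to a posterior criterion; mean decision time then falls as discriminability rises, the two-choice analogue of Piéron's Law. All parameter values below are illustrative:

```python
import math
import random

def decision_time(mu, crit=0.99, trials=2000, seed=7):
    """Mean number of evidence samples until the posterior for one of two
    alternatives (signals +mu vs -mu in unit-variance Gaussian noise)
    exceeds crit, with the true alternative being +mu."""
    rng = random.Random(seed)
    log_crit = math.log(crit / (1 - crit))   # posterior criterion as log-odds
    total = 0
    for _ in range(trials):
        logodds, n = 0.0, 0
        while abs(logodds) < log_crit:
            x = rng.gauss(mu, 1.0)           # one evidence sample
            logodds += 2 * mu * x            # LLR increment for N(+mu) vs N(-mu)
            n += 1
        total += n
    return total / trials

# Higher discriminability -> faster decisions at the same accuracy criterion.
times = [decision_time(mu) for mu in (0.1, 0.2, 0.4)]
print([round(t, 1) for t in times])
```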
Micheyl, Christophe; Dai, Huanping
2010-01-01
The equal-variance Gaussian signal-detection-theory (SDT) decision model for the dual-pair change-detection (or “4IAX”) paradigm has been described in earlier publications. In this note, we consider the equal-variance Gaussian SDT model for the related dual-pair AB vs BA identification paradigm. The likelihood ratios, optimal decision rules, receiver operating characteristics (ROCs), and relationships between d' and proportion-correct (PC) are analyzed for two special cases: that of statistically independent observations, which is likely to apply in constant-stimuli experiments, and that of highly correlated observations, which is likely to apply in experiments where stimuli are roved widely across trials or pairs. A surprising outcome of this analysis is that although these two situations lead to different optimal decision rules, the predicted ROCs and proportions of correct responses (PCs) for these two cases are not substantially different, and are either identical or similar to those observed in the basic Yes-No paradigm. PMID:19633356
Aceituno, Felipe F.; Orellana, Marcelo; Torres, Jorge; Mendoza, Sebastián; Slater, Alex W.; Melo, Francisco
2012-01-01
Discrete additions of oxygen play a critical role in alcoholic fermentation. However, few studies have quantitated the fate of dissolved oxygen and its impact on wine yeast cell physiology under enological conditions. We simulated the range of dissolved oxygen concentrations that occur after a pump-over during the winemaking process by sparging nitrogen-limited continuous cultures with oxygen-nitrogen gaseous mixtures. When the dissolved oxygen concentration increased from 1.2 to 2.7 μM, yeast cells changed from a fully fermentative to a mixed respirofermentative metabolism. This transition is characterized by a switch in the operation of the tricarboxylic acid cycle (TCA) and an activation of NADH shuttling from the cytosol to mitochondria. Nevertheless, fermentative ethanol production remained the major cytosolic NADH sink under all oxygen conditions, suggesting that the limitation of mitochondrial NADH reoxidation is the major cause of the Crabtree effect. This is reinforced by the induction of several key respiratory genes by oxygen, despite the high sugar concentration, indicating that oxygen overrides glucose repression. Genes associated with other processes, such as proline uptake, cell wall remodeling, and oxidative stress, were also significantly affected by oxygen. The results of this study indicate that respiration is responsible for a substantial part of the oxygen response in yeast cells during alcoholic fermentation. This information will facilitate the development of temporal oxygen addition strategies to optimize yeast performance in industrial fermentations. PMID:23001663
Mukhopadhyay, Biswarup; Johnson, Eric F.; Wolfe, Ralph S.
1999-01-01
For the hyperthermophilic and barophilic methanarchaeon Methanococcus jannaschii, we have developed a medium and protocols for reactor-scale cultivation that improved the final cell yield per liter from ∼0.5 to ∼7.5 g of packed wet cells (∼1.8 g dry cell mass) under autotrophic growth conditions and to ∼8.5 g of packed wet cells (∼2 g dry cell mass) with yeast extract (2 g liter−1) and tryptone (2 g liter−1) as medium supplements. For growth in a sealed bottle it was necessary to add Se to the medium, and a level of 2 μM for added Se gave the highest final cell yield. In a reactor M. jannaschii grew without added Se in the medium; it is plausible that the cells received Se as a contaminant from the reactor vessel and the H2S supply. However, for optimal performance of a reactor culture, an addition of Se to a final concentration of 50 to 100 μM was needed. Also, cell growth in a reactor culture was inhibited at much higher Se concentrations. These observations and the data from previous work with methanogen cell extracts (B. C. McBride and R. S. Wolfe, Biochemistry 10:4312–4317, 1971) suggested that from a continuously sparged reactor culture Se was lost in the exhaust gas as volatile selenides, and this loss raised the apparent required level of and tolerance for Se. In spite of having a proteinaceous cell wall, M. jannaschii withstood an impeller tip speed of 235.5 cm s−1, which was optimal for achieving high cell density and also was the upper limit of the tolerated shear rate. The organism secreted one or more acidic compounds, which lowered pH in cultures without pH control; this secretion continued even after cessation of growth. PMID:10543823
Distinct Roles of Dopamine and Subthalamic Nucleus in Learning and Probabilistic Decision Making
ERIC Educational Resources Information Center
Coulthard, Elizabeth J.; Bogacz, Rafal; Javed, Shazia; Mooney, Lucy K.; Murphy, Gillian; Keeley, Sophie; Whone, Alan L.
2012-01-01
Even simple behaviour requires us to make decisions based on combining multiple pieces of learned and new information. Making such decisions requires both learning the optimal response to each given stimulus as well as combining probabilistic information from multiple stimuli before selecting a response. Computational theories of decision making…
Optimal throughput for cognitive radio with energy harvesting in fading wireless channel.
Vu-Van, Hiep; Koo, Insoo
2014-01-01
Energy resource management is a crucial problem for a device with a finite-capacity battery. In this paper, the cognitive radio is considered to be a device with an energy harvester that can harvest energy from a non-RF energy source while performing other cognitive radio actions; harvested energy is stored in a finite-capacity battery. At the start of each time slot, the radio must decide whether to remain silent or to carry out spectrum sensing, based on the idle probability of the primary user and the remaining energy, in order to maximize the throughput of the cognitive radio system. In addition, optimal sensing energy and adaptive transmission power control are investigated to utilize the limited energy of the cognitive radio effectively. Finding an optimal policy is formulated as a partially observable Markov decision process. Simulation results show that the proposed optimal decision scheme outperforms a myopic scheme in which only current throughput is considered when making a decision.
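A much-simplified, fully observed sketch of the slot-by-slot decision follows. The paper formulates a POMDP; here the belief state is dropped, energy dynamics are deterministic, and all parameter values are hypothetical, leaving a small backward-induction problem over (slot, battery level):

```python
def max_throughput(p_idle, horizon=20, cap=5, e_sense=2, e_harvest=1):
    """Backward induction over (slot, battery): in each slot the radio
    either stays silent (harvesting e_harvest, up to the battery cap) or
    spends e_sense to sense/transmit, earning expected throughput p_idle."""
    V = [[0.0] * (cap + 1) for _ in range(horizon + 1)]
    for t in range(horizon - 1, -1, -1):
        for b in range(cap + 1):
            silent = V[t + 1][min(cap, b + e_harvest)]
            sense = p_idle + V[t + 1][b - e_sense] if b >= e_sense else -1.0
            V[t][b] = max(silent, sense)
    return V

V = max_throughput(p_idle=0.6)
print(V[0][5], V[0][0])   # expected throughput from a full vs an empty battery
```

With these numbers the optimal policy settles into a harvest-harvest-sense cycle: energy arrives at 1 unit per silent slot while sensing costs 2, so the long-run rate is one transmission opportunity every three slots, plus an initial burst when the battery starts full.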
Integrated strategic and tactical biomass-biofuel supply chain optimization.
Lin, Tao; Rodríguez, Luis F; Shastri, Yogendra N; Hansen, Alan C; Ting, K C
2014-03-01
To ensure effective biomass feedstock provision for large-scale biofuel production, an integrated biomass supply chain optimization model was developed to minimize annual biomass-ethanol production costs by optimizing both strategic and tactical planning decisions simultaneously. The mixed integer linear programming model optimizes activities ranging from biomass harvesting, packing, in-field transportation, stacking, transportation, preprocessing, and storage to ethanol production and distribution. The numbers, locations, and capacities of facilities, as well as biomass and ethanol distribution patterns, are key strategic decisions, while biomass production, delivery, and operating schedules and inventory monitoring are key tactical decisions. The model was implemented to study a Miscanthus-ethanol supply chain in Illinois. The base case results showed unit Miscanthus-ethanol production costs of $0.72 L−1 of ethanol. Biorefinery-related costs account for 62% of the total costs, followed by biomass procurement costs. Sensitivity analysis showed that a 50% reduction in biomass yield would increase unit production costs by 11%. Copyright © 2014 Elsevier Ltd. All rights reserved.
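The strategic/tactical coupling can be illustrated by brute-force enumeration on a toy instance: which facilities to open is the strategic decision, and assigning each biomass source to its cheapest open facility is the tactical one. Sites, farms, and costs below are hypothetical; a real model of this scale is solved as a MILP, not by enumeration:

```python
from itertools import combinations

def best_network(sites, farms, open_cost, unit_cost):
    """Minimize total annual cost = facility opening costs (strategic)
    + cheapest-open-facility delivery costs per farm (tactical),
    by enumerating every nonempty subset of candidate sites."""
    best = (float("inf"), None)
    for k in range(1, len(sites) + 1):
        for opened in combinations(sites, k):
            cost = sum(open_cost[s] for s in opened)
            cost += sum(min(unit_cost[(f, s)] for s in opened) for f in farms)
            best = min(best, (cost, opened))
    return best

sites = ["A", "B"]
farms = ["f1", "f2", "f3"]
open_cost = {"A": 100.0, "B": 90.0}
unit_cost = {("f1", "A"): 10, ("f1", "B"): 40, ("f2", "A"): 30,
             ("f2", "B"): 15, ("f3", "A"): 20, ("f3", "B"): 25}
print(best_network(sites, farms, open_cost, unit_cost))
```

Opening only site A (100 + 10 + 30 + 20 = 160) beats opening only B (170) or both (235), showing how the strategic choice and the tactical assignments must be evaluated jointly.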
NASA Astrophysics Data System (ADS)
Khalilpourazari, Soheyl; Khalilpourazary, Saman
2017-05-01
In this article, a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finish grinding conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between conflicting objective functions, which helps the decision maker select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of changes in grain size, grinding ratio, feed rate, labour cost per hour, workpiece length, wheel diameter, and downfeed on each objective function value.
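A minimal sketch of the weighted Tchebycheff idea, assuming a small discrete candidate set rather than the paper's continuous grinding model: each weight vector selects the candidate minimizing the weighted Chebyshev distance to the ideal (utopia) point, and sweeping the weights traces out Pareto-optimal compromises.

```python
# Hypothetical bi-objective candidate set; both objectives are MINIMIZED
# (e.g. time, cost). Not the paper's grinding formulation.
points = {"p1": (1.0, 9.0), "p2": (3.0, 4.0), "p3": (5.0, 2.5), "p4": (9.0, 1.0)}
ideal = (min(v[0] for v in points.values()),
         min(v[1] for v in points.values()))      # utopia point

def tchebycheff(point, weights):
    """Weighted Chebyshev distance of an objective vector to the ideal point."""
    return max(w * abs(f - z) for w, f, z in zip(weights, point, ideal))

# each weight vector yields a different Pareto-optimal compromise
for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    best = min(points, key=lambda k: tchebycheff(points[k], w))
    print(w, "->", best)
```

The lexicographic refinement in the paper additionally breaks ties among Tchebycheff-optimal points to guarantee proper Pareto optimality.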
Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D
2009-12-01
The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
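The speed-accuracy tradeoff the DDM produces can be reproduced with a few lines of simulation; the drift, noise, and threshold values below are illustrative, not fitted parameters from the study. Raising the threshold buys accuracy at the cost of longer response times.

```python
import random

def simulate(drift, threshold, noise=1.0, dt=0.01, n_trials=2000, seed=0):
    """Euler simulation of a drift-diffusion process with symmetric bounds.

    Returns (accuracy, mean response time); crossing the upper bound
    counts as a correct response.
    """
    rng = random.Random(seed)
    correct, rts = 0, []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + rng.gauss(0.0, noise * dt ** 0.5)
            t += dt
        correct += x >= threshold
        rts.append(t)
    return correct / n_trials, sum(rts) / len(rts)

acc_low, rt_low = simulate(drift=1.0, threshold=0.5)
acc_high, rt_high = simulate(drift=1.0, threshold=1.5)
```

The reward-rate-maximizing threshold of Bogacz et al. is the one that optimally balances exactly this tradeoff given the response-stimulus interval.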
Durham, Erin-Elizabeth A; Yu, Xiaxia; Harrison, Robert W
2014-12-01
Effective machine learning requires handling large datasets efficiently, and one key to handling large data is the use of databases such as MySQL. The freeware fuzzy decision tree induction tool, FDT, is a scalable supervised-classification software tool implementing fuzzy decision trees, based on an optimized fuzzy ID3 (FID3) algorithm. FDT 2.0 improves upon FDT 1.0 by bridging the gap between data science and data engineering: it combines a robust decisioning tool with data retention for future decisions, so that the tool does not need to be recalibrated from scratch every time a new decision is required. In this paper we briefly review the analytical capabilities of the freeware FDT tool and its major features and functionalities; examples of large biological datasets from HIV, microRNAs and sRNAs are included. This work shows how to integrate fuzzy decision algorithms with modern database technology. In addition, we show that integrating the fuzzy decision tree induction tool with database storage allows for optimal user satisfaction in today's Data Analytics world.
Neurons in the Frontal Lobe Encode the Value of Multiple Decision Variables
Kennerley, Steven W.; Dahmubed, Aspandiar F.; Lara, Antonio H.; Wallis, Jonathan D.
2009-01-01
A central question in behavioral science is how we select among choice alternatives to obtain consistently the most beneficial outcomes. Three variables are particularly important when making a decision: the potential payoff, the probability of success, and the cost in terms of time and effort. A key brain region in decision making is the frontal cortex as damage here impairs the ability to make optimal choices across a range of decision types. We simultaneously recorded the activity of multiple single neurons in the frontal cortex while subjects made choices involving the three aforementioned decision variables. This enabled us to contrast the relative contribution of the anterior cingulate cortex (ACC), the orbito-frontal cortex, and the lateral prefrontal cortex to the decision-making process. Neurons in all three areas encoded value relating to choices involving probability, payoff, or cost manipulations. However, the most significant signals were in the ACC, where neurons encoded multiplexed representations of the three different decision variables. This supports the notion that the ACC is an important component of the neural circuitry underlying optimal decision making. PMID:18752411
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jantzen, C.; Johnson, F.
2012-06-05
During melting of HLW glass, the REDOX of the melt pool cannot be measured. Therefore, the Fe²⁺/ΣFe ratio in the glass poured from the melter must be related to melter feed organic and oxidant concentrations to ensure production of a high quality glass without impacting production rate (e.g., foaming) or melter life (e.g., metal formation and accumulation). A production facility such as the Defense Waste Processing Facility (DWPF) cannot wait until the melt or waste glass has been made to assess its acceptability, since by then no further changes to the glass composition and acceptability are possible. Therefore, the acceptability decision is made on the upstream process, rather than on the downstream melt or glass product. That is, it is based on 'feed forward' statistical process control (SPC) rather than statistical quality control (SQC). In SPC, the feed composition to the melter is controlled prior to vitrification. Use of the DWPF REDOX model has controlled the balance of feed reductants and oxidants in the Sludge Receipt and Adjustment Tank (SRAT). Once the alkali/alkaline earth salts (both reduced and oxidized) are formed during reflux in the SRAT, the REDOX can only change if (1) additional reductants or oxidants are added to the SRAT, the Slurry Mix Evaporator (SME), or the Melter Feed Tank (MFT) or (2) the melt pool is bubbled with an oxidizing gas or sparging gas that imposes a different REDOX target than the chemical balance set during reflux in the SRAT.
DECISION-MAKING IN THE SCHOOLS: AN OUTSIDER’S VIEW
Descriptors: (*DECISION MAKING, EDUCATION), (*EDUCATION, MANAGEMENT PLANNING AND CONTROL), (*MANAGEMENT PLANNING AND CONTROL, EDUCATION), BUDGETS, MANAGEMENT ENGINEERING, PERSONNEL MANAGEMENT, STUDENTS, LEARNING, OPTIMIZATION
Kassa, Semu Mitiku
2018-02-01
Funds from various global organizations, such as The Global Fund and The World Bank, are not distributed directly to the targeted risk groups. Especially in so-called third-world countries, the major part of the funding for HIV prevention programs comes from these global funding organizations. The allocation of these funds usually passes through several levels of decision-making bodies, each with its own specific parameters to control and specific objectives to achieve. However, these decisions are made mostly in a heuristic manner, which may lead to a non-optimal allocation of the scarce resources. In this paper, a hierarchical mathematical optimization model is proposed to solve such a problem. Combining existing epidemiological models with the kinds of interventions in practice, a 3-level hierarchical decision-making model for optimally allocating such resources has been developed and analyzed. When the impact of antiretroviral therapy (ART) is included in the model, it has been shown that the objective function of the lower-level decision-making structure is a non-convex minimization problem in the allocation variables, even if all the production functions for the intervention programs are assumed to be linear.
Solving multi-objective optimization problems in conservation with the reference point method
Dujardin, Yann; Chadès, Iadine
2018-01-01
Managing the biodiversity extinction crisis requires wise decision-making processes able to account for the limited resources available. In most decision problems in conservation biology, several conflicting objectives have to be taken into account. Most methods used in conservation either provide suboptimal solutions or use strong assumptions about the decision-maker’s preferences. Our paper reviews some of the existing approaches to solve multi-objective decision problems and presents new multi-objective linear programming formulations of two multi-objective optimization problems in conservation, allowing the use of a reference point approach. Reference point approaches solve multi-objective optimization problems by interactively representing the preferences of the decision-maker with a point in the criteria (objectives) space, called the reference point. We modelled and solved the following two problems in conservation: a dynamic multi-species management problem under uncertainty and a spatial allocation resource management problem. Results show that the reference point method outperforms classic methods while illustrating the use of an interactive methodology for solving combinatorial problems with multiple objectives. The method is general and can be adapted to a wide range of ecological combinatorial problems. PMID:29293650
Base norms and discrimination of generalized quantum channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenčová, A.
2014-02-15
We introduce and study norms in the space of hermitian matrices, obtained from base norms in positively generated subspaces. These norms are closely related to discrimination of so-called generalized quantum channels, including quantum states, channels, and networks. We further introduce generalized quantum decision problems and show that the maximal average payoffs of decision procedures are again given by these norms. We also study optimality of decision procedures, in particular, we obtain a necessary and sufficient condition under which an optimal 1-tester for discrimination of quantum channels exists, such that the input state is maximally entangled.
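For context, a base norm is conventionally defined as follows (standard textbook form; the paper extends this construction to positively generated subspaces of hermitian matrices):

```latex
% Standard base-norm construction for an ordered vector space whose
% positive cone has base B:
\[
  \|x\|_{B} \;=\; \inf\bigl\{\, \lambda + \mu \;:\;
      x = \lambda b_{1} - \mu b_{2},\;
      b_{1}, b_{2} \in B,\; \lambda, \mu \ge 0 \,\bigr\}
\]
% On hermitian matrices, taking B to be the set of density matrices
% recovers the trace norm, which governs optimal state discrimination.
```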
Automatically updating predictive modeling workflows support decision-making in drug design.
Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O
2016-09-01
Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
NASA Astrophysics Data System (ADS)
ShiouWei, L.
2014-12-01
Reservoirs are the most important water resources facilities in Taiwan. However, due to the steep slopes and fragile geological conditions in the mountain areas, storm events usually cause serious debris flows and floods, and the floods then flush large amounts of sediment into reservoirs. The sedimentation caused by floods has a great impact on reservoir life. Hence, how to operate a reservoir during flood events to increase the efficiency of sediment desilting without risking reservoir safety or impacting the subsequent water supply is a crucial issue in Taiwan. Therefore, this study developed a novel optimization planning model for reservoir flood operation considering flood control and sediment desilting, and proposed easy-to-use operating rules represented by decision trees. The decision tree rules consider flood mitigation, water supply, and sediment desilting. The optimal planning model computes, for each flood event, the optimal reservoir release that minimizes the water supply impact and maximizes sediment desilting without risking reservoir safety. Besides the optimal flood operation planning model, this study also proposed decision-tree-based flood operating rules trained on the multiple optimal reservoir releases for synthetic flood scenarios. The synthetic flood scenarios consist of various synthetic storm events, reservoir initial storages, and target storages at the end of flood operation. Comparing the results of the decision tree operation rules (DTOR) with historical operation for Typhoon Krosa in 2007, the DTOR removed 15.4% more sediment than historical operation, with reservoir storage only 8.38×10⁶ m³ less than that of historical operation. For Typhoon Jangmi in 2008, the DTOR removed 24.4% more sediment than historical operation, with reservoir storage only 7.58×10⁶ m³ less. The results show that the proposed DTOR model can increase sediment desilting efficiency and extend reservoir life.
Operations research investigations of satellite power stations
NASA Technical Reports Server (NTRS)
Cole, J. W.; Ballard, J. L.
1976-01-01
A systems model reflecting the design concepts of Satellite Power Stations (SPS) was developed. The model is of sufficient scope to include the interrelationships of the following major design parameters: the transportation to and between orbits; assembly of the SPS; and maintenance of the SPS. The systems model is composed of a set of equations that are nonlinear with respect to the system parameters and decision variables. The model determines a figure of merit from which alternative concepts concerning transportation, assembly, and maintenance of satellite power stations are studied. A hybrid optimization model was developed to optimize the system's decision variables. The optimization model consists of a random search procedure and the optimal-steepest descent method. A FORTRAN computer program was developed to enable the user to optimize nonlinear functions using the model. Specifically, the computer program was used to optimize Satellite Power Station system components.
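The hybrid scheme described (random search to locate a promising region, then steepest descent to refine) can be sketched as below; the quadratic objective is a hypothetical stand-in, since the abstract does not give the SPS cost equations, and the original implementation was FORTRAN.

```python
import random

# Hypothetical smooth objective standing in for the SPS figure of merit.
def objective(x, y):
    return (x - 3.0) ** 2 + 2.0 * (y + 1.0) ** 2

def gradient(x, y):
    return 2.0 * (x - 3.0), 4.0 * (y + 1.0)

def hybrid_optimize(n_random=200, n_descent=500, step=0.05, seed=1):
    rng = random.Random(seed)
    # phase 1: random search over a box to find a good starting point
    start = min(((rng.uniform(-10, 10), rng.uniform(-10, 10))
                 for _ in range(n_random)),
                key=lambda p: objective(*p))
    # phase 2: steepest descent from the best random point
    x, y = start
    for _ in range(n_descent):
        gx, gy = gradient(x, y)
        x, y = x - step * gx, y - step * gy
    return x, y

x, y = hybrid_optimize()
```

The random phase guards against poor local basins of a nonlinear model; the descent phase supplies the local precision random search lacks.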
NASA Astrophysics Data System (ADS)
Masuda, Kazuaki; Aiyoshi, Eitaro
We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. An auction problem, originally formulated as a combinatorial problem, determines, for every seller, whether or not to sell his/her article and, for every buyer, which article(s) to buy, so that the total utility of buyers and sellers is maximized. Using duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method to solve it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.
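A bare-bones PSO loop of the kind that could be applied to such a continuous dual problem is sketched below; since the abstract does not specify the dual objective, a simple sphere function stands in for it, and the inertia/acceleration constants are conventional textbook values, not the paper's settings.

```python
import random

def pso(f, dim=2, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with a standard global-best PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=f)[:]                # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda v: sum(x * x for x in v)
best = pso(sphere)
```

In the paper's setting, `f` would be the dual function evaluated at a candidate price vector, with each evaluation solving the buyers'/sellers' inner responses.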
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II is capable of fine-tuning variables to determine a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. Then, outranking with PROMETHEE II helps the decision maker finalize the selection of a best compromise. The effectiveness of the NSGA-II method on the multiobjective optimization problem is illustrated through two carefully referenced examples.
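The core of NSGA-II's selection is sorting the population into successive non-dominated fronts; a simple (unoptimized) version of that sort is sketched below on hypothetical bi-objective points, assuming both objectives are minimized. NSGA-II proper uses a faster bookkeeping variant plus crowding distance, but produces the same fronts.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Peel off successive non-dominated fronts (front 1 = Pareto set)."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]   # hypothetical objective vectors
fronts = non_dominated_sort(pts)
```

A post-hoc outranking step such as PROMETHEE II then ranks the members of the first front to pick the final compromise.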
NASA Astrophysics Data System (ADS)
Hirsch, Piotr; Duzinkiewicz, Kazimierz; Grochowski, Michał
2017-11-01
District Heating (DH) systems are commonly supplied using local heat sources. Nowadays, modern insulation materials allow for effective and economically viable heat transportation over long distances (over 20 km). In the paper, a method for optimized selection of design and operating parameters of a long-distance Heat Transportation System (HTS) is proposed. The method allows for evaluation of the feasibility and effectiveness of heat transportation from the considered heat sources. The optimized selection is formulated as a multicriteria decision-making problem. The constraints for this problem include a static HTS model, allowing consideration of the system life cycle, time variability, and spatial topology. Thereby, variation of heat demand and ground temperature within the DH area, insulation and pipe aging, and/or the terrain elevation profile are taken into account in the decision-making process. The HTS construction costs, pumping power, and heat losses are considered as objective functions. Inner pipe diameter, insulation thickness, temperatures, and pumping station locations are optimized during the decision-making process. Moreover, variants of pipe-laying, e.g. one pipeline with a larger diameter or two with a smaller one, may be considered during the optimization. The analyzed optimization problem is multicriteria, hybrid, and nonlinear; because of these properties, a genetic solver was applied.
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental-economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative with relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental-economic optimization scheme in integrated watershed management.
The impact of uncertainty on optimal emission policies
NASA Astrophysics Data System (ADS)
Botta, Nicola; Jansson, Patrik; Ionescu, Cezar
2018-05-01
We apply a computational framework for specifying and solving sequential decision problems to study the impact of three kinds of uncertainty on optimal emission policies in a stylized sequential emission problem. We find that uncertainties about the implementability of decisions on emission reductions (or increases) have a greater impact on optimal policies than uncertainties about the availability of effective emission reduction technologies and uncertainties about the implications of trespassing critical cumulated emission thresholds. The results show that uncertainties about the implementability of decisions on emission reductions (or increases) call for more precautionary policies. In other words, delaying emission reductions to the point in time when effective technologies become available is suboptimal when these uncertainties are accounted for rigorously. By contrast, uncertainties about the implications of exceeding critical cumulated emission thresholds tend to make early emission reductions less rewarding.
Multi-party Game Analysis of the Coal Industry and Industry Regulation Policy Optimization
NASA Astrophysics Data System (ADS)
Jiang, Tianqi
2018-01-01
In the face of frequent coal mine safety accidents, this paper analyses the relationships among central and local governments, coal mining enterprises, and miners from a multi-group game perspective. In actual production, the decision of any one of the three groups can affect the game strategies of the others, so a corresponding game order must be assumed. Under this order, a game analysis of the payoffs and decisions of the three parties is carried out, and the game decisions of the government, the enterprises, and the workers are obtained through the construction of a benefit matrix. Practical recommendations are then proposed for optimizing coal industry regulation within the existing system, in order to reduce the frequency of industry safety accidents and improve the industry's production environment.
Nonstationary decision model for flood risk decision scaling
NASA Astrophysics Data System (ADS)
Spence, Caitlin M.; Brown, Casey M.
2016-11-01
Hydroclimatic stationarity is increasingly questioned as a default assumption in flood risk management (FRM), but successor methods are not yet established. Some potential successors depend on estimates of future flood quantiles, but methods for estimating future design storms are subject to high levels of uncertainty. Here we apply a Nonstationary Decision Model (NDM) to flood risk planning within the decision scaling framework. The NDM combines a nonstationary probability distribution of annual peak flow with optimal selection of flood management alternatives using robustness measures. The NDM incorporates structural and nonstructural FRM interventions and valuation of flows supporting ecosystem services to calculate expected cost of a given FRM strategy. A search for the minimum-cost strategy under incrementally varied representative scenarios extending across the plausible range of flood trend and value of the natural flow regime discovers candidate FRM strategies that are evaluated and compared through a decision scaling analysis (DSA). The DSA selects a management strategy that is optimal or close to optimal across the broadest range of scenarios or across the set of scenarios deemed most likely to occur according to estimates of future flood hazard. We illustrate the decision framework using a stylized example flood management decision based on the Iowa City flood management system, which has experienced recent unprecedented high flow episodes. The DSA indicates a preference for combining infrastructural and nonstructural adaptation measures to manage flood risk and makes clear that options-based approaches cannot be assumed to be "no" or "low regret."
NASA program decisions using reliability analysis.
NASA Technical Reports Server (NTRS)
Steinberg, A.
1972-01-01
NASA made use of the analytical outputs of reliability engineers to make management decisions on the Apollo program. Such decisions affected the amount of the incentive fees, how much acceptance testing was necessary, how to optimize development testing, whether to approve engineering changes, and certification of flight readiness. Examples of such analyses are discussed and related to programmatic decisions.
Understanding Optimal Military Decision Making: Year 2 Progress Report
2014-01-01
ARMY RELEVANCY AND MILITARY APPLICATION AREAS: Objectively defining, measuring, and developing a means to assess military optimal decision making has the potential to enhance training and refine procedures, supporting more efficient learning and task accomplishment. Table 2: descriptive statistics.
Estimation of power lithium-ion battery SOC based on fuzzy optimal decision
NASA Astrophysics Data System (ADS)
He, Dongmei; Hou, Enguang; Qiao, Xin; Liu, Guangmin
2018-06-01
In order to improve vehicle performance and safety, the state of charge (SOC) of the power lithium-ion battery needs to be estimated accurately. After analyzing common SOC estimation methods, and based on open-circuit-voltage characteristics and the Kalman filter algorithm, a T-S fuzzy model was used to establish a lithium battery SOC estimation method based on fuzzy optimal decision. Simulation results show that the accuracy of the battery model can be improved.
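One way a T-S-style fuzzy decision can combine estimators is sketched below under loud assumptions: the rule, the membership shape, and every number are invented for illustration (the paper's model is not specified in the abstract). The idea shown is simply to trust an OCV-based estimate at low load current and a Kalman-filter estimate under heavy load, blending by membership degree.

```python
def membership_low(current_a):
    """Degree (0..1) to which the load current is 'low', i.e. the OCV
    reading is trustworthy. Piecewise-linear membership, hypothetical."""
    return max(0.0, min(1.0, (2.0 - current_a) / 2.0))

def fuse_soc(soc_ocv, soc_kf, current_a):
    """Blend OCV-based and Kalman-filter-based SOC estimates."""
    w = membership_low(current_a)          # trust OCV more at low current
    return w * soc_ocv + (1.0 - w) * soc_kf

# near rest: the OCV estimate dominates; under heavy load: the KF estimate
print(fuse_soc(0.80, 0.74, current_a=0.0))   # -> 0.8
print(fuse_soc(0.80, 0.74, current_a=5.0))   # -> 0.74
```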
Optimal Lease Contract for Remanufactured Equipment
NASA Astrophysics Data System (ADS)
Iskandar, B. P.; Wangsaputra, R.; Pasaribu, U. S.; Husniah, H.
2018-03-01
In the last two decades, the business of leased products (or equipment) has grown significantly, and many companies acquire equipment through leasing. In this paper, we propose a new lease contract under which a product (or equipment) is leased for a period of time with a maximum usage per period (e.g. 1 year). This lease contract has only a time limit, not a usage limit; however, if the total usage per period exceeds the maximum usage allowed in the contract, the customer (as a lessee) is charged an additional cost. In general, the lessor (OEM) provides full maintenance coverage, including PM and CM, under the lease contract. Here, the lessor is considered to offer the lease contract for a remanufactured product. We presume that the price of the lease contract for the remanufactured product is much lower than that of a new one, and hence it would be a more attractive option to the customer. The decision problem for the lessee is to select the best option offered that fits its requirements, and the decision problem for the lessor is to find the optimal maintenance efforts for a given price of the lease option offered. We first find the optimal decisions independently for each party, and then the joint optimal decisions for both parties.
NASA Astrophysics Data System (ADS)
Sahraei, S.; Asadzadeh, M.
2017-12-01
Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as hypervolume contribution and crowding distance, calculated in the objective space, guide the search toward solutions that are well-distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, besides the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space and those in less crowded clusters are given more chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a sparser set of high-quality solutions increases, and therefore the analyst receives a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
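The selection idea can be sketched with a simple grid-based stand-in for clustering (the binning scheme and data are illustrative assumptions, not the paper's method): each archived solution is weighted inversely to the occupancy of its decision-space bin, so members of sparse regions are picked as parents more often.

```python
import random
from collections import Counter

def bin_of(x, cell=0.25):
    """Map a decision vector to its grid cell (a crude clustering)."""
    return tuple(int(v // cell) for v in x)

def pick_parent(archive, rng):
    counts = Counter(bin_of(x) for x in archive)
    # weight each solution inversely to its bin's occupancy
    weights = [1.0 / counts[bin_of(x)] for x in archive]
    return rng.choices(archive, weights=weights, k=1)[0]

rng = random.Random(0)
# a crowded cluster near the origin plus one isolated solution
archive = [(0.01 * i, 0.01 * i) for i in range(20)] + [(0.9, 0.9)]
picks = [pick_parent(archive, rng) for _ in range(2000)]
iso_rate = sum(p == (0.9, 0.9) for p in picks) / len(picks)
```

Under uniform selection the isolated solution would be chosen ~5% of the time; the inverse-crowding weights raise that to ~50%, which is the diversity pressure the paper exploits in decision space.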
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
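For the simple binary case the abstract generalizes from, building an empirical ROC curve and picking the operating point that maximizes expected utility can be shown in a few lines (synthetic scores; the linear utility with unit gains/penalties is an arbitrary choice for illustration, not the paper's EROC/LROC machinery):

```python
def roc_points(signal_scores, noise_scores):
    """Empirical (FPF, TPF) points obtained by sweeping a score threshold."""
    thresholds = sorted(set(signal_scores) | set(noise_scores), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        fpf = sum(s >= t for s in noise_scores) / len(noise_scores)
        tpf = sum(s >= t for s in signal_scores) / len(signal_scores)
        pts.append((fpf, tpf))
    return pts

def best_operating_point(pts, prevalence, u_tp=1.0, c_fp=1.0):
    """Operating point maximizing expected utility: reward true positives,
    penalize false positives (the 'utility vs. disutility' reading of ROC)."""
    eu = lambda p: prevalence * u_tp * p[1] - (1 - prevalence) * c_fp * p[0]
    return max(pts, key=eu)

signal = [0.9, 0.8, 0.7, 0.4]     # synthetic signal-present scores
noise = [0.6, 0.3, 0.2, 0.1]      # synthetic signal-absent scores
pts = roc_points(signal, noise)
op = best_operating_point(pts, prevalence=0.5)
```

The paper's framework replaces the scalar utility here with structures covering localization and parameter estimation, but retains this expected-utility-versus-disutility reading of the summary curve.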
NASA Astrophysics Data System (ADS)
Goienetxea Uriarte, A.; Ruiz Zúñiga, E.; Urenda Moris, M.; Ng, A. H. C.
2015-05-01
Discrete Event Simulation (DES) is nowadays widely used to support decision makers in system analysis and improvement. However, the use of simulation for improving stochastic logistic processes is not common among healthcare providers. Improving healthcare systems requires dealing with trade-off solutions that take multiple variables and objectives into consideration. Complementing DES with multi-objective optimization, in the form of Simulation-based Multi-Objective Optimization (SMO), creates a superior basis for finding these solutions and, in consequence, facilitates the decision-making process. This paper presents how SMO has been applied for system improvement analysis in a Swedish Emergency Department (ED). A significant number of input variables, constraints and objectives were considered when defining the optimization problem. As a result of the project, the decision makers were provided with a range of optimal solutions that considerably reduce the length of stay and waiting times for ED patients. SMO has proved to be an appropriate technique to support healthcare system design and improvement processes. A key factor in the success of this project has been the involvement and engagement of the stakeholders throughout the whole process.
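A core building block of any SMO study is extracting the non-dominated (Pareto-optimal) trade-off solutions from the configurations the simulation has evaluated. A minimal sketch, assuming each solution is a tuple of objectives to be minimized (e.g., length of stay, waiting time, added staff cost; hypothetical objectives, not the paper's variables):

```python
def pareto_front(points):
    """Return the non-dominated points; every objective is minimized.

    q dominates p if q is no worse in every objective and strictly
    better in at least one (here: q <= p component-wise and q != p).
    """
    def dominates(q, p):
        return all(qi <= pi for qi, pi in zip(q, p)) and q != p
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Evolutionary optimizers commonly used in SMO, such as NSGA-II, maintain exactly this kind of non-dominated set across generations.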
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model addressed in this study simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and retailers to located DCs, and corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The model was solved using the commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory level, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer.
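The full model is a mixed-integer program solved in LINGO, but the way an emission tax shifts an inventory trade-off is already visible in an EOQ-style calculation. A toy sketch with made-up parameters (not the paper's model): the tax on per-shipment emissions behaves like an addition to the fixed ordering cost, so raising it favors fewer, larger shipments.

```python
import math

def eoq_with_emission_tax(demand, order_cost, holding_cost,
                          tax_per_kg, kg_co2_per_order):
    """EOQ when each replenishment also incurs a CO2 tax charge.

    The tax adds tax_per_kg * kg_co2_per_order to the fixed cost of an
    order, which enters the square-root EOQ formula directly.
    """
    fixed = order_cost + tax_per_kg * kg_co2_per_order
    return math.sqrt(2.0 * demand * fixed / holding_cost)
```

In the paper's richer model the tax also influences transport-mode choice and network structure; this sketch isolates only the lot-sizing effect.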
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
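Sobol's method attributes output variance to individual inputs; decision variables with a negligible first-order index can be screened out before the multi-objective search. A minimal pick-freeze Monte Carlo estimator of first-order indices (illustrative, not the study's reservoir model):

```python
import numpy as np

def sobol_first_order(f, d, n=400_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices S_i.

    f maps an (n, d) array of inputs in [0, 1) to n outputs.
    S_i = (E[f(A) f(C_i)] - f0^2) / Var(f), where C_i equals B except
    that column i is 'frozen' at A's values.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    y_a = f(A)
    f0, var = y_a.mean(), y_a.var()
    s = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]                 # freeze coordinate i at A's values
        s[i] = (np.mean(y_a * f(C)) - f0 ** 2) / var
    return s
```

For an additive test function such as f(X) = X1 + 0.2·X2 the analytic indices are S1 ≈ 0.96 and S2 ≈ 0.04, so the second variable would be screened out.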
Pedersen, Kine; Sørbye, Sveinung Wergeland; Burger, Emily Annika; Lönnberg, Stefan; Kristiansen, Ivar Sønbø
2015-12-01
Decision makers often need to simultaneously consider multiple criteria or outcomes when deciding whether to adopt new health interventions. Using decision analysis within the context of cervical cancer screening in Norway, we aimed to aid decision makers in identifying a subset of relevant strategies that are simultaneously efficient, feasible, and optimal. We developed an age-stratified probabilistic decision tree model following a cohort of women attending primary screening through one screening round. We enumerated detected precancers (i.e., cervical intraepithelial neoplasia of grade 2 or more severe (CIN2+)), colposcopies performed, and monetary costs associated with 10 alternative triage algorithms for women with abnormal cytology results. As efficiency metrics, we calculated incremental cost-effectiveness and harm-benefit ratios, defined as the additional costs, or the additional number of colposcopies, per additional CIN2+ detected. We estimated capacity requirements and the uncertainty surrounding which strategy is optimal according to the decision rule, involving willingness to pay (monetary or resources consumed per added benefit). For ages 25 to 33 years, we eliminated four strategies that did not fall on either efficiency frontier, while one strategy was efficient with respect to both efficiency metrics. Compared with current practice in Norway, two strategies detected more precancers at lower monetary costs, but some required more colposcopies. Similar results were found for women aged 34 to 69 years. Improving the effectiveness and efficiency of cervical cancer screening may necessitate additional resources. Although a strategy may be efficient and feasible, society and individuals must specify their willingness to accept the additional resources and perceived harms required to increase effectiveness before it can be considered optimal. Copyright © 2015. Published by Elsevier Inc.
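The efficiency frontier in such analyses is built by discarding strongly dominated strategies (more costly, no more effective) and extendedly dominated ones (incremental ratio higher than that of a stronger strategy further along the frontier). A generic sketch with illustrative strategy data, not the paper's triage algorithms:

```python
def efficiency_frontier(strategies):
    """strategies: dict name -> (cost, effect).
    Returns [(name, cost, effect)] on the frontier, cheapest first."""
    pts = sorted(strategies.items(), key=lambda kv: (kv[1][0], -kv[1][1]))
    frontier, best_effect = [], float("-inf")
    for name, (cost, effect) in pts:          # drop strong dominance
        if effect > best_effect:
            frontier.append((name, cost, effect))
            best_effect = effect
    changed = True
    while changed:                            # drop extended dominance:
        changed = False                       # ICERs must be increasing
        for i in range(1, len(frontier) - 1):
            (c0, e0), (c1, e1), (c2, e2) = (f[1:] for f in frontier[i-1:i+2])
            if (c1 - c0) / (e1 - e0) >= (c2 - c1) / (e2 - e1):
                del frontier[i]
                changed = True
                break
    return frontier
```

A decision maker then picks the last frontier strategy whose incremental ratio stays below their willingness to pay (or accept harms).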
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
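A normative benchmark of the kind the authors discuss can be computed exactly for simple environments: with two Bernoulli payoff distributions, enumerating the binomial outcomes gives the probability that n free draws per option identify the better one. An illustrative sketch (assumed setup, not the paper's derivation):

```python
from math import comb

def p_correct(n, p_good, p_bad):
    """P(choosing the better Bernoulli arm after n free draws from each,
    picking the higher sample sum, ties broken by a fair coin)."""
    def pmf(k, p):
        return comb(n, k) * p ** k * (1 - p) ** (n - k)
    total = 0.0
    for i in range(n + 1):          # successes observed on the good arm
        for j in range(n + 1):      # successes observed on the bad arm
            w = pmf(i, p_good) * pmf(j, p_bad)
            if i > j:
                total += w
            elif i == j:
                total += 0.5 * w
    return total
```

The marginal gain from each extra draw shrinks, so once any per-sample cost is introduced the optimal sample size is finite, which is the kind of benchmark against which human sampling in DFE can be evaluated.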
Zeeb, Fiona D; Baarendse, P J J; Vanderschuren, L J M J; Winstanley, Catharine A
2015-12-01
Studies employing the Iowa Gambling Task (IGT) demonstrated that areas of the frontal cortex, including the ventromedial prefrontal cortex, orbitofrontal cortex (OFC), dorsolateral prefrontal cortex, and anterior cingulate cortex (ACC), are involved in the decision-making process. However, the precise role of these regions in maintaining optimal choice is not clear. We used the rat gambling task (rGT), a rodent analogue of the IGT, to determine whether inactivation of or altered dopamine signalling within discrete cortical sub-regions disrupts decision-making. Following training on the rGT, animals were implanted with guide cannulae aimed at the prelimbic (PrL) or infralimbic (IL) cortices, the OFC, or the ACC. Prior to testing, rats received an infusion of saline or a combination of baclofen and muscimol (0.125 μg of each/side) to inactivate the region and an infusion of a dopamine D2 receptor antagonist (0, 0.1, 0.3, and 1.0 μg/side). Rats tended to increase their choice of a disadvantageous option and decrease their choice of the optimal option following inactivation of either the IL or PrL cortex. In contrast, OFC or ACC inactivation did not affect decision-making. Infusion of a dopamine D2 receptor antagonist into any sub-region did not alter choice preference. Online activity of the IL or PrL cortex is important for maintaining an optimal decision-making strategy, but optimal performance on the rGT does not require frontal cortex dopamine D2 receptor activation. Additionally, these results demonstrate that the roles of different cortical regions in cost-benefit decision-making may be dissociated using the rGT.
Medical Problem-Solving: A Critique of the Literature.
ERIC Educational Resources Information Center
McGuire, Christine H.
1985-01-01
Prescriptive, decision-analysis of medical problem-solving has been based on decision theory that involves calculation and manipulation of complex probability and utility values to arrive at optimal decisions that will maximize patient benefits. The studies offer a methodology for improving clinical judgment. (Author/MLW)
Sendi, Pedram; Al, Maiwenn J; Gafni, Amiram; Birch, Stephen
2004-05-01
Bridges and Terris (Soc. Sci. Med. (2004)) critique our paper on the alternative decision rule of economic evaluation in the presence of uncertainty and constrained resources within the context of a portfolio of health care programs (Sendi et al. Soc. Sci. Med. 57 (2003) 2207). They argue that by not adopting a formal portfolio theory approach we overlook the optimal solution. We show that these arguments stem from a fundamental misunderstanding of the alternative decision rule of economic evaluation. In particular, the portfolio theory approach advocated by Bridges and Terris is based on the same theoretical assumptions that the alternative decision rule set out to relax. Moreover, Bridges and Terris acknowledge that the proposed portfolio theory approach may not identify the optimal solution to resource allocation problems. Hence, it provides neither theoretical nor practical improvements to the proposed alternative decision rule.
Reniers, G L L; Audenaert, A; Pauwels, N; Soudan, K
2011-02-15
This article empirically assesses and validates a methodology for making evacuation decisions in case of major fire accidents in chemical clusters. In this paper, a number of empirical results are presented, processed and discussed with respect to the implications and management of evacuation decisions in chemical companies. It has been shown in this article that in realistic industrial settings, suboptimal interventions may result when the prospect of obtaining additional information at later stages of the decision process is ignored. Empirical results also show that the implications of interventions, as well as the required time and workforce to complete particular shutdown activities, may differ greatly from one company to another. Therefore, to be optimal from an economic viewpoint, it is essential that precautionary evacuation decisions are tailor-made per company. Copyright © 2010 Elsevier B.V. All rights reserved.
Who Chokes Under Pressure? The Big Five Personality Traits and Decision-Making under Pressure.
Byrne, Kaileigh A; Silasi-Mansat, Crina D; Worthy, Darrell A
2015-02-01
The purpose of the present study was to examine whether the Big Five personality factors could predict who thrives or chokes under pressure during decision-making. The effects of the Big Five personality factors on decision-making ability and performance under social (Experiment 1) and combined social and time pressure (Experiment 2) were examined using the Big Five Personality Inventory and a dynamic decision-making task that required participants to learn an optimal strategy. In Experiment 1, a hierarchical multiple regression analysis showed an interaction between neuroticism and pressure condition. Neuroticism negatively predicted performance under social pressure, but did not affect decision-making under low pressure. Additionally, the negative effect of neuroticism under pressure was replicated using a combined social and time pressure manipulation in Experiment 2. These results support distraction theory whereby pressure taxes highly neurotic individuals' cognitive resources, leading to sub-optimal performance. Agreeableness also negatively predicted performance in both experiments.
Removal of mercury from coal via a microbial pretreatment process
Borole, Abhijeet P [Knoxville, TN]; Hamilton, Choo Y [Knoxville, TN]
2011-08-16
A process for the removal of mercury from coal prior to combustion is disclosed. The process is based on use of microorganisms to oxidize iron, sulfur and other species binding mercury within the coal, followed by volatilization of mercury by the microorganisms. The microorganisms are from a class of iron and/or sulfur oxidizing bacteria. The process involves contacting coal with the bacteria in a batch or continuous manner. The mercury is first solubilized from the coal, followed by microbial reduction to elemental mercury, which is stripped off by sparging gas and captured by a mercury recovery unit, giving mercury-free coal. The mercury can be recovered in pure form from the sorbents via additional processing.
II. Electrodeposition/removal of nickel in a spouted electrochemical reactor
Grimshaw, Pengpeng; Calo, Joseph M.; Shirvanian, Pezhman A.; Hradil, George
2011-01-01
An investigation is presented of nickel electrodeposition from acidic solutions in a cylindrical spouted electrochemical reactor. The effects of solution pH, temperature, and applied current on nickel removal/recovery rate, current efficiency, and corrosion rate of deposited nickel on the cathodic particles were explored under galvanostatic operation. Nitrogen sparging was used to decrease the dissolved oxygen concentration in the electrolyte in order to reduce the nickel corrosion rate, thereby increasing the nickel electrowinning rate and current efficiency. A numerical model of electrodeposition, including corrosion and mass transfer in the particulate cathode moving bed, is presented that describes the behavior of the experimental net nickel electrodeposition data quite well. PMID:22039317
Hozo, Iztok; Schell, Michael J; Djulbegovic, Benjamin
2008-07-01
The absolute truth in research is unobtainable, as no evidence or research hypothesis is ever 100% conclusive. Therefore, all data and inferences can in principle be considered "inconclusive." Scientific inference and decision-making need to take into account errors, which are unavoidable in the research enterprise. Errors can occur at the level of conclusions, which aim to discern the truthfulness of a research hypothesis based on the accuracy of the research evidence, and at the level of decisions, whose goal is to enable optimal decision-making under the present, specific circumstances. To optimize the chance of both correct conclusions and correct decisions, a synthesis of all major statistical approaches to clinical research is needed. The integration of these approaches (frequentist, Bayesian, and decision-analytic) can be accomplished through formal risk:benefit (R:B) analysis. This chapter illustrates the rational choice of a research hypothesis using R:B analysis based on a decision-theoretic expected utility framework and the concept of "acceptable regret" to calculate the threshold probability of the "truth" above which the benefit of accepting a research hypothesis outweighs its risks.
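In the expected-utility framing the chapter describes, the threshold probability follows from one line of algebra: accept the hypothesis when p·B − (1 − p)·R > 0, i.e. when p exceeds R/(R + B). A minimal sketch (generic benefit/risk units, not the chapter's worked example):

```python
def threshold_probability(benefit, risk):
    """Truth probability above which accepting the hypothesis has positive
    expected utility: p*benefit - (1 - p)*risk > 0  <=>  p > risk/(risk + benefit)."""
    return risk / (risk + benefit)

def accept_hypothesis(p_truth, benefit, risk):
    """Decision rule: accept when the estimated truth probability clears
    the risk:benefit threshold."""
    return p_truth > threshold_probability(benefit, risk)
```

Note the intuitive limiting behavior: as the benefit grows relative to the risk, the threshold falls toward zero, so even weakly supported hypotheses become worth acting on.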
Minciardi, Riccardo; Paolucci, Massimo; Robba, Michela; Sacile, Roberto
2008-11-01
An approach to sustainable municipal solid waste (MSW) management is presented, with the aim of supporting the decision on the optimal flows of solid waste sent to landfill, recycling and different types of treatment plants, whose sizes are also decision variables. This problem is modeled with a non-linear, multi-objective formulation. Specifically, four objectives to be minimized have been taken into account, which are related to economic costs, unrecycled waste, sanitary landfill disposal and environmental impact (incinerator emissions). An interactive reference point procedure has been developed to support decision making; these methods are considered appropriate for multi-objective decision problems in environmental applications. In addition, interactive methods are generally preferred by decision makers as they can be directly involved in the various steps of the decision process. Some results deriving from the application of the proposed procedure are presented. The application of the procedure is exemplified by considering the interaction with two different decision makers who are assumed to be in charge of planning the MSW system in the municipality of Genova (Italy).
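Interactive reference-point procedures typically rank candidate plans with an achievement scalarizing function: the decision maker supplies an aspiration level for each objective, and the plan minimizing the worst weighted shortfall relative to those aspirations is presented next. A generic Tchebycheff-type sketch with hypothetical numbers, not the Genova case data:

```python
def achievement(objectives, reference, weights):
    """Tchebycheff-type achievement value; all objectives are minimized,
    so a lower value means the plan is closer to the aspiration levels."""
    return max(w * (f - z) for f, z, w in zip(objectives, reference, weights))

def best_plan(plans, reference, weights):
    """plans: dict name -> tuple of objective values (e.g. cost, unrecycled
    waste, landfill mass, emissions). Returns the plan nearest the reference."""
    return min(plans, key=lambda n: achievement(plans[n], reference, weights))
```

In an interactive session the decision maker would inspect the returned plan, adjust the reference point, and repeat until satisfied.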
Regret and the rationality of choices.
Bourgeois-Gironde, Sacha
2010-01-27
Regret helps to optimize decision behaviour. It can be defined as a rational emotion. Several recent neurobiological studies have confirmed the interface between emotion and cognition at which regret is located and documented its role in decision behaviour. These data give credibility to the incorporation of regret in decision theory that had been proposed by economists in the 1980s. However, finer distinctions are required in order to get a better grasp of how regret and behaviour influence each other. Regret can be defined as a predictive error signal but this signal does not necessarily transpose into a decision-weight influencing behaviour. Clinical studies on several types of patients show that the processing of an error signal and its influence on subsequent behaviour can be dissociated. We propose a general understanding of how regret and decision-making are connected in terms of regret being modulated by rational antecedents of choice. Regret and the modification of behaviour on its basis will depend on the criteria of rationality involved in decision-making. We indicate current and prospective lines of research in order to refine our views on how regret contributes to optimal decision-making.
Using price-volume agreements to manage pharmaceutical leakage and off-label promotion.
Zhang, Hui; Zaric, Gregory S
2015-09-01
Unapproved or "off-label" uses of prescription drugs are quite common. The extent of this use may be influenced by the promotional efforts of manufacturers. This paper investigates how a manufacturer makes promotional decisions in the presence of a price-volume agreement. We developed an optimization model in which the manufacturer maximizes its expected profit by choosing the level of marketing effort to promote uses for different indications. We considered several ways a volume threshold is determined. We also compared models in which off-label uses are reimbursed and those in which they are forbidden to illustrate the impact of off-label promotion on the optimal decisions and on the decision maker's performance. We found that the payer chooses a threshold which may be the same as the manufacturer's optimal decision. We also found that the manufacturer not only considers the promotional cost in promoting off-label uses but also considers the health benefit of off-label uses. In some situations, using a price-volume agreement to control leakage may be a better idea than simply preventing leakage without using the agreement, from a social welfare perspective.
Gill, R T; Thornton, S F; Harbottle, M J; Smith, J W N
2016-12-15
Sustainable management practices can be applied to the remediation of contaminated land to maximise the economic, environmental and social benefits of the process. The Sustainable Remediation Forum UK (SuRF-UK) have developed a framework to support the implementation of sustainable practices within contaminated land management and decision making. This study applies the framework, including qualitative (Tier 1) and semi-quantitative (Tier 2) sustainability assessments, to a complex site where the principal contaminant source is unleaded gasoline, giving rise to a dissolved phase BTEX and MTBE plume. The pathway is groundwater migration through a chalk aquifer and the receptor is a water supply borehole. A hydraulic containment system (HCS) has been installed to manage the MTBE plume migration. The options considered to remediate the MTBE source include monitored natural attenuation (MNA), air sparging/soil vapour extraction (AS/SVE), pump and treat (PT) and electrokinetic-enhanced bioremediation (EK-BIO). A sustainability indicator set from the SuRF-UK framework, including priority indicator categories selected during a stakeholder engagement workshop, was used to frame the assessments. At Tier 1 the options are ranked based on qualitative supporting information, whereas in Tier 2 a multi-criteria analysis is applied. Furthermore, the multi-criteria analysis was refined for scenarios where photovoltaics (PVs) are included and amendments are excluded from the EK-BIO option. Overall, the analysis identified AS/SVE and EK-BIO as more sustainable remediation options at this site than either PT or MNA.
The wider implications of this study include: (1) an appraisal of the management decision from each Tier of the assessment with the aim to highlight areas for time and cost savings for similar assessments in the future; (2) the observation that EK-BIO performed well against key indicator categories compared to the other intensive treatments; and (3) introducing methods to improve the sustainability of the EK-BIO treatment design (such as PVs) did not have a significant effect in this instance. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
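The Tier 2 step of the SuRF-UK framework is a multi-criteria analysis; in its simplest weighted-sum form each option receives a score per indicator category, weighted by stakeholder priorities. A sketch with entirely hypothetical weights and scores (the option names echo the paper's alternatives, but the numbers are not the paper's scoring):

```python
def mca_rank(options, weights):
    """options: name -> {criterion: score}; weights: criterion -> weight.
    Returns option names ordered best (highest weighted score) first."""
    totals = {name: sum(weights[c] * s for c, s in scores.items())
              for name, scores in options.items()}
    return sorted(totals, key=totals.get, reverse=True)
```

In practice the weights come from the stakeholder engagement workshop, and sensitivity of the ranking to those weights is part of the assessment.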
Engineering tradeoff problems viewed as multiple objective optimizations and the VODCA methodology
NASA Astrophysics Data System (ADS)
Morgan, T. W.; Thurgood, R. L.
1984-05-01
This paper summarizes a rational model for making engineering tradeoff decisions. The model is a hybrid from the fields of social welfare economics, communications, and operations research. A solution methodology (Vector Optimization Decision Convergence Algorithm - VODCA) firmly grounded in the economic model is developed both conceptually and mathematically. The primary objective for developing the VODCA methodology was to improve the process for extracting relative value information about the objectives from the appropriate decision makers. This objective was accomplished by employing data filtering techniques to increase the consistency of the relative value information and decrease the amount of information required. VODCA is applied to a simplified hypothetical tradeoff decision problem. Possible use of multiple objective analysis concepts and the VODCA methodology in product-line development and market research are discussed.
Salt controls feeding decisions in a blood-sucking insect.
Pontes, Gina; Pereira, Marcos H; Barrozo, Romina B
2017-04-01
Salts are necessary for maintaining homeostatic conditions within the body of all living organisms. As with all essential nutrients, deficient or excessive ingestion of salts can result in adverse health effects. The taste system is a primary sensory modality that helps animals make adequate feeding decisions in terms of salt consumption. In this work we show that sodium and potassium chloride salts modulate the feeding behavior of Rhodnius prolixus in a concentration-dependent manner. Feeding is only triggered by an optimal concentration of either of these salts (0.1-0.15M) in the presence of the phagostimulant ATP. Conversely, feeding solutions that contain no salt or a high salt concentration (>0.3M) are not ingested by the insects. Notably, we show that the feeding decisions of insects cannot be explained as an osmotic effect, because they still feed on hyperosmotic solutions bearing the optimal salt concentration. Insects perceive optimal-salt, no-salt and high-salt solutions as different gustatory information, as revealed by electromyogram recordings of the cibarial pump. Moreover, because insects continuously monitor the incoming food gustatorily during feeding, sudden changes beyond the optimal sodium concentration decrease and even inhibit feeding. The administration of amiloride, a sodium channel blocker, noticeably reduces the ingestion of the optimal sodium solution but not of the optimal potassium solution. Salt detection thus seems to occur through at least two salt receptors, one amiloride-sensitive and another amiloride-insensitive. Our results confirm the importance of the gustatory system in R. prolixus, showing the relevant role that salts play in their feeding decisions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Capacity of noncoherent MFSK channels
NASA Technical Reports Server (NTRS)
Bar-David, I.; Butman, S. A.; Klass, M. J.; Levitt, B. K.; Lyon, R. F.
1974-01-01
Performance limits theoretically achievable over noncoherent channels perturbed by additive Gaussian noise in hard decision, optimal, and soft decision receivers are computed as functions of the number of orthogonal signals and the predetection signal-to-noise ratio. Equations are derived for orthogonal signal capacity, the ultimate MFSK capacity, and the convolutional coding and decoding limit. It is shown that performance improves as the signal-to-noise ratio increases, provided the bandwidth can be increased, that the optimum number of signals is not infinite (except for the optimal receiver), and that the optimum number decreases as the signal-to-noise ratio decreases, but is never less than 7 for even the hard decision receiver.
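For the hard-decision receiver, the noncoherent MFSK channel reduces to an M-ary symmetric channel, whose capacity has a standard closed form; evaluating it over M at a fixed SNR is how an optimum (finite) number of signals emerges. A sketch of that formula (the paper's mapping from SNR to symbol error probability is not reproduced here):

```python
from math import log2

def m_ary_symmetric_capacity(m, p_err):
    """Capacity in bits/symbol of an M-ary symmetric channel: a symbol is
    received correctly with probability 1 - p_err, otherwise it lands
    uniformly on one of the m - 1 wrong symbols."""
    if p_err == 0.0:
        return log2(m)
    return (log2(m) + (1 - p_err) * log2(1 - p_err)
            + p_err * log2(p_err / (m - 1)))
```

With p_err = 0 the capacity is the full log2(M) bits/symbol, and it degrades as the symbol error probability grows, which is what drives the SNR-dependent optimum M.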
An optimal brain can be composed of conflicting agents
Livnat, Adi; Pippenger, Nicholas
2006-01-01
Many behaviors have been attributed to internal conflict within the animal and human mind. However, internal conflict has not been reconciled with evolutionary principles, in that it appears maladaptive relative to a seamless decision-making process. We study this problem through a mathematical analysis of decision-making structures. We find that, under natural physiological limitations, an optimal decision-making system can involve “selfish” agents that are in conflict with one another, even though the system is designed for a single purpose. It follows that conflict can emerge within a collective even when natural selection acts on the level of the collective only. PMID:16492775
Goal-Directed Decision Making with Spiking Neurons.
Friedrich, Johannes; Lengyel, Máté
2016-02-03
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. Copyright © 2016 the authors 0270-6474/16/361529-18$15.00/0.
Goal-Directed Decision Making with Spiking Neurons
Lengyel, Máté
2016-01-01
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. PMID:26843636
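The race-to-bound dynamics such a circuit implements can be caricatured in a few lines. This is a toy race of two Poisson spike trains to a count-difference bound, not the paper's spiking network; the firing rates, bound, and time step below are invented for illustration.

```python
import random

random.seed(3)

# Two Poisson populations race; the choice is made when one population's
# spike-count lead over the other reaches a bound (or the horizon ends).
def decide(rate_a=42.0, rate_b=38.0, bound=15, dt=0.001, horizon=2.0):
    diff, t = 0, 0.0
    while abs(diff) < bound and t < horizon:
        # bool - bool is an int in Python: +1, 0, or -1 per time step
        diff += (random.random() < rate_a * dt) - (random.random() < rate_b * dt)
        t += dt
    return ("A" if diff > 0 else "B"), t

trials = [decide() for _ in range(400)]
frac_a = sum(choice == "A" for choice, _ in trials) / len(trials)
mean_t = sum(t for _, t in trials) / len(trials)
```

The fraction of "A" choices and the mean termination time play the roles of the choice probability and decision time that the paper compares against behavioral data.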
Simulation-optimization model for production planning in the blood supply chain.
Osorio, Andres F; Brailsford, Sally C; Smith, Honora K; Forero-Matiz, Sonia P; Camacho-Rodríguez, Bernardo A
2017-12-01
Production planning in the blood supply chain is a challenging task. Many complex factors such as uncertain supply and demand, blood group proportions, shelf life constraints and different collection and production methods have to be taken into account, and thus advanced methodologies are required for decision making. This paper presents an integrated simulation-optimization model to support both strategic and operational decisions in production planning. Discrete-event simulation is used to represent the flows through the supply chain, incorporating collection, production, storing and distribution. On the other hand, an integer linear optimization model running over a rolling planning horizon is used to support daily decisions, such as the required number of donors, collection methods and production planning. This approach is evaluated using real data from a blood center in Colombia. The results show that, using the proposed model, key indicators such as shortages, outdated units, donors required and cost are improved.
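A minimal sketch of the rolling-horizon mechanism, not the authors' integer linear program: each day a brute-force search over a short look-ahead picks a collection plan, but only the first day's decision is committed. Shelf life, costs, and demands are all invented.

```python
from itertools import product

SHELF_LIFE = 3                       # days a unit stays usable (assumed)
MAX_DONORS = 4                       # per-day collection cap (assumed)
COST_SHORT, COST_OUTDATE, COST_COLLECT = 10.0, 1.0, 0.1

def step(inv, collected, demand):
    """Issue FIFO, expire the oldest age class, then add fresh units."""
    inv = inv[:]                     # inv[0] = freshest, inv[-1] = oldest
    need = demand
    for a in reversed(range(len(inv))):      # oldest first
        used = min(inv[a], need)
        inv[a] -= used
        need -= used
    outdated = inv[-1]               # units hitting shelf life expire
    inv = [collected] + inv[:-1]     # age the stock, add today's collection
    return inv, need, outdated

def plan_cost(inv, plan, forecast):
    cost = 0.0
    for c, d in zip(plan, forecast):
        inv, short, out = step(inv, c, d)
        cost += COST_SHORT * short + COST_OUTDATE * out + COST_COLLECT * c
    return cost

def first_decision(inv, forecast):
    plans = product(range(MAX_DONORS + 1), repeat=len(forecast))
    return min(plans, key=lambda p: plan_cost(inv, p, forecast))[0]

demand = [2, 3, 1, 2, 2]
inv, decisions, total_short = [0] * SHELF_LIFE, [], 0
for t in range(len(demand)):
    c = first_decision(inv, demand[t:t + 3])   # rolling 3-day horizon
    inv, short, _ = step(inv, c, demand[t])
    decisions.append(c)
    total_short += short
```

Because today's collection only becomes usable the next day, the only shortage is on day 0; afterwards the look-ahead keeps collections just ahead of demand, trading off shortage, outdating, and collection cost exactly as the paper's daily planner does at far larger scale.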
Online gaming for learning optimal team strategies in real time
NASA Astrophysics Data System (ADS)
Hudas, Gregory; Lewis, F. L.; Vamvoudakis, K. G.
2010-04-01
This paper first presents an overall view of dynamical decision-making in teams, both cooperative and competitive. Strategies for team decision problems, including optimal control and zero-sum 2-player games (H-infinity control), are normally solved off-line by solving associated matrix equations such as the Riccati equation. With that approach, however, players cannot change their objectives online in real time without calling for a completely new off-line solution for the new strategies. Therefore, in this paper we give a method for learning optimal team strategies online in real time as team dynamical play unfolds. In the linear quadratic regulator case, for instance, the method learns the Riccati equation solution online without ever solving the Riccati equation. This allows for truly dynamical team decisions in which objective functions can change in real time and the system dynamics can be time-varying.
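For the linear quadratic regulator case mentioned above, the fixed point being learned is the solution of the algebraic Riccati equation (ARE). A sketch of the underlying recursion follows, run offline here for clarity (the paper's contribution is doing this online from measured play); the two-state system is invented.

```python
import numpy as np

# Hypothetical two-state, one-input discrete-time system; values invented.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # control cost

# Iterate the discrete-time Riccati recursion to its fixed point, which
# satisfies the ARE without ever solving the ARE directly.
P = np.zeros((2, 2))
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
    P_new = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_new - P)) < 1e-12:
        P = P_new
        break
    P = P_new

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
residual = Q + A.T @ P @ (A - B @ K) - P   # ARE residual, ~0 at convergence
```

Each pass only needs the current value estimate and the system response, which is what makes an online, data-driven version of the same fixed-point iteration possible.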
Systematic design for trait introgression projects.
Cameron, John N; Han, Ye; Wang, Lizhi; Beavis, William D
2017-10-01
Using an Operations Research approach, we demonstrate design of optimal trait introgression projects with respect to competing objectives. We demonstrate an innovative approach for designing Trait Introgression (TI) projects based on optimization principles from Operations Research. If the designs of TI projects are based on clear and measurable objectives, they can be translated into mathematical models with decision variables and constraints that can be translated into Pareto optimality plots associated with any arbitrary selection strategy. The Pareto plots can be used to make rational decisions concerning the trade-offs between maximizing the probability of success while minimizing costs and time. The systematic rigor associated with a cost, time and probability of success (CTP) framework is well suited to designing TI projects that require dynamic decision making. The CTP framework also revealed that previously identified 'best' strategies can be improved to be at least twice as effective without increasing time or expenses.
Tang, Liyang
2013-04-04
The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China's health care system, and it could efficiently collect from various experts the data needed to determine the optimal solution to this problem. The purpose of this study was therefore to apply the optimal implementation methodology to help the decision maker effectively translate various experts' views into optimal solutions to this problem, with the support of this pilot system. After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914), organized through a pilot health care provider research system for the first time in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the dataset from this survey. The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government's regulation-oriented health care provider approach was the optimal solution from the pharmacists' point of view, the hospital administrators' point of view, and the point of view of health officials in health administration departments; the public private partnership (PPP) approach was the optimal solution from the nurses' point of view, the point of view of officials in medical insurance agencies, and the health care researchers' point of view.
The data collected through the pilot health care provider research system in the 2009 to 2010 national expert survey could help the decision maker effectively translate various experts' views into optimal solutions to China's institutional problem of health care provider selection.
Optimal control of raw timber production processes
Ivan Kolenka
1978-01-01
This paper demonstrates the possibility of optimal planning and control of timber harvesting activities with mathematical optimization models. The separate phases of timber harvesting are represented by coordinated models which can be used to select the optimal decision for the execution of any given phase. The models form a system whose components are connected and...
Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah
2014-01-01
This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms—being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem. PMID:24883369
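A toy stand-in for the crop-mix search, with invented profit and yield coefficients: random allocations of 100 ha among three hypothetical crops are filtered down to the Pareto-nondominated (profit, tonnage) pairs, the kind of front that GDE3 approximates far more efficiently.

```python
import random

random.seed(0)

PROFIT = {"maize": 300.0, "beans": 450.0, "cassava": 200.0}  # $/ha, assumed
YIELD = {"maize": 6.0, "beans": 2.5, "cassava": 9.0}         # t/ha, assumed
CROPS = list(PROFIT)

def objectives(alloc):
    profit = sum(PROFIT[c] * a for c, a in zip(CROPS, alloc))
    tons = sum(YIELD[c] * a for c, a in zip(CROPS, alloc))
    return profit, tons

def dominates(u, v):
    """u is at least as good in both objectives and strictly better in one."""
    return all(x >= y for x, y in zip(u, v)) and any(x > y for x, y in zip(u, v))

# Sample random splits of 100 ha via two uniform cut points.
points = []
for _ in range(2000):
    cuts = sorted(random.uniform(0, 100) for _ in range(2))
    alloc = (cuts[0], cuts[1] - cuts[0], 100.0 - cuts[1])
    points.append(objectives(alloc))

front = [p for p in points if not any(dominates(q, p) for q in points)]
```

The surviving front traces the trade-off between the all-beans (profit-maximizing) and all-cassava (tonnage-maximizing) corners; a decision maker then picks a point on it rather than a single "best" plan.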
Optimization applications in aircraft engine design and test
NASA Technical Reports Server (NTRS)
Pratt, T. K.
1984-01-01
Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable, since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.
Volume sharing of reservoir water
NASA Astrophysics Data System (ADS)
Dudley, Norman J.
1988-05-01
Previous models optimize short-, intermediate-, and long-run irrigation decision making in a simplified river valley system characterized by highly variable water supplies and demands for a single decision maker controlling both reservoir releases and farm water use. A major problem in relaxing the assumption of one decision maker is communicating the stochastic nature of supplies and demands between reservoir and farm managers. In this paper, an optimizing model is used to develop release rules for reservoir management when all users share equally in releases, and computer simulation is used to generate an historical time sequence of announced releases. These announced releases become a state variable in a farm management model which optimizes farm area-to-irrigate decisions through time. Such modeling envisages the use of growing area climatic data by the reservoir authority to gauge water demand and the transfer of water supply data from reservoir to farm managers via computer data files. Alternative model forms, including allocating water on a priority basis, are discussed briefly. Results show lower mean aggregate farm income and lower variance of aggregate farm income than in the single decision-maker case. This short-run economic efficiency loss coupled with likely long-run economic efficiency losses due to the attenuated nature of property rights indicates the need for quite different ways of integrating reservoir and farm management.
Rational decision-making in inhibitory control.
Shenoy, Pradeep; Yu, Angela J
2011-01-01
An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various reward and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability.
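The classical race model referred to above can be simulated directly: an independent go finish time races a stop finish time (stop-signal delay plus stopping latency, SSRT). The distribution parameters below are illustrative, not fitted values.

```python
import random

random.seed(2)

# One stop-signal trial: the response is inhibited when the stop process
# finishes before the go process.
def trial(ssd, go_mu=300.0, go_sd=50.0, ssrt=150.0):
    go_finish = random.gauss(go_mu, go_sd)   # ms
    stop_finish = ssd + ssrt                 # stop-signal delay + SSRT
    return stop_finish < go_finish           # True = stopped successfully

def p_stop(ssd, n=20000):
    return sum(trial(ssd) for _ in range(n)) / n

early, late = p_stop(50), p_stop(250)
```

The stopping probability falls as the stop-signal delay grows, reproducing the familiar inhibition function; the optimal-decision account predicts how parameters like the SSRT should shift with rewards and context.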
Intelligent reservoir operation system based on evolving artificial neural networks
NASA Astrophysics Data System (ADS)
Chaves, Paulo; Chang, Fi-John
2008-06-01
We propose a novel intelligent reservoir operation system based on an evolving artificial neural network (ANN). Evolving means that the parameters of the ANN model are identified by the GA evolutionary optimization technique, so that the ANN model represents the operational strategies of reservoir operation. The main advantages of the Evolving ANN Intelligent System (ENNIS) are as follows: (i) only a small number of parameters need to be optimized, even for long optimization horizons; (ii) multiple decision variables are easy to handle; and (iii) the operation model can be combined straightforwardly with other prediction models. The developed intelligent system was applied to the operation of the Shihmen Reservoir in North Taiwan to investigate its applicability and practicability. The proposed method was first applied to a simple formulation of the Shihmen Reservoir operation, with a single objective and a single decision variable, and its results were compared to those obtained by dynamic programming. The constructed network proved to be a good operational strategy. The method was then extended to the reservoir operation with multiple (five) decision variables. The results demonstrated that the developed evolving neural networks improved the operation performance of the reservoir when compared to its current operational strategy. The system successfully handled multiple decision variables simultaneously and provided reasonable and suitable decisions.
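A minimal sketch of the "evolving" idea, assuming an invented one-input operating rule: the weights of a tiny network are identified by a simple (1+20) evolution strategy standing in for the paper's GA, and the fitness is how well the network reproduces the rule (the paper instead scores simulated reservoir performance).

```python
import math
import random

random.seed(1)

# 1 input -> 3 tanh hidden units -> 1 linear output; w has 10 entries.
def net(w, x):
    h = [math.tanh(w[i] * x + w[3 + i]) for i in range(3)]
    return sum(w[6 + i] * h[i] for i in range(3)) + w[9]

def target_rule(storage):
    # Toy release-vs-storage operating rule the network must learn (assumed).
    return max(0.0, min(1.0, 2.0 * storage - 0.5))

xs = [i / 20 for i in range(21)]

def loss(w):
    return sum((net(w, x) - target_rule(x)) ** 2 for x in xs)

# (1+20) evolution strategy: mutate the weights, keep the best candidate.
w = [random.gauss(0, 0.5) for _ in range(10)]
init = best = loss(w)
for _ in range(300):
    cand = min(([wi + random.gauss(0, 0.1) for wi in w] for _ in range(20)),
               key=loss)
    if loss(cand) < best:
        w, best = cand, loss(cand)
```

Only the ten weights are searched, no matter how long the operating horizon is, which is the first advantage (i) claimed above.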
ERIC Educational Resources Information Center
Baldwin, Grover H.
The use of quantitative decision making tools provides the decision maker with a range of alternatives among which to decide, permits acceptance and use of the optimal solution, and decreases risk. Training line administrators in the use of these tools can help school business officials obtain reliable information upon which to base district…
Bayesian Decision Theoretical Framework for Clustering
ERIC Educational Resources Information Center
Chen, Mo
2011-01-01
In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…
Beyond Decision Making: Cultural Ideology as Heuristic Paradigmatic Models.
ERIC Educational Resources Information Center
Whitley, L. Darrell
A paradigmatic model of cultural ideology provides a context for understanding the relationship between decision-making and personal and cultural rationality. Cultural rules or heuristics exist which indicate that many decisions can be made on the basis of established strategy rather than continual analytical calculations. When an optimal solution…
Weather Avoidance Using Route Optimization as a Decision Aid: An AWIN Topical Study. Phase 1
NASA Technical Reports Server (NTRS)
1998-01-01
The aviation community is faced with reducing the fatal aircraft accident rate by 80 percent within 10 years. This must be achieved even with ever-increasing traffic and a changing National Airspace System. This is not just an altruistic goal but a real necessity if our growing level of commerce is to continue. Honeywell Technology Center's topical study, "Weather Avoidance Using Route Optimization as a Decision Aid", addresses these pressing needs. The goal of this program is to use route optimization and user interface technologies to develop a prototype decision aid for dispatchers and pilots. This decision aid will suggest possible diversions through single or multiple weather hazards and present weather information with a human-centered design. At the conclusion of the program, we will have a laptop prototype decision aid that will be used to demonstrate concepts to industry for integration into commercialized products for dispatchers and/or pilots. With weather a factor in 30% of aircraft accidents, our program will prevent accidents by strategically avoiding weather hazards in flight. By supplying more relevant weather information in a human-centered format, along with the tools to generate flight plans around weather, aircraft exposure to weather hazards can be reduced. Our program directly addresses NASA's five-year investment areas of Strategic Weather Information and Weather Operations (simulation/hazard characterization and crew/dispatch/ATC hazard monitoring, display, and decision support) (NASA Aeronautics Safety Investment Strategy: Weather Investment Recommendations, April 15, 1997). This program is comprised of two phases; Phase I concluded December 31, 1998. This first phase defined weather data requirements, lateral routing algorithms, and conceptual displays for a user-centered design. Phase II runs from January 1999 through September 1999.
The second phase integrates vertical routing into the lateral optimizer and combines the user interface into a prototype software testbed. Phase II concludes with a dispatcher and pilot evaluation of the route optimizer decision aid. This document describes work completed in Phase I under contract with NASA Langley, August 1998 - December 1998. This report includes: (1) Discussion of how weather hazards were identified in partnership with experts, and how they were prioritized; (2) Static representations of display layouts for the integrated planning function; (3) The cost function for the 2D route optimizer; (4) Discussion of the method, access to the raw data, and the results of the flight deck user information requirements definition; (5) Itemized display format requirements identified for representing weather hazards in a route planning aid.
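The 2D route-optimizer cost function described above trades extra path length against hazard exposure. A hedged sketch of that trade-off, using an invented hazard grid and plain Dijkstra search in place of whatever algorithm the prototype actually uses:

```python
import heapq

# Invented hazard grid: 0 = clear, 9 = heavy weather penalty.
HAZARD = [
    [0, 0, 0, 9, 0, 0],
    [0, 0, 9, 9, 0, 0],
    [0, 0, 9, 9, 9, 0],
    [0, 0, 0, 0, 0, 0],
]

def route(start, goal):
    """Cheapest path where each step costs 1 + hazard of the entered cell."""
    rows, cols = len(HAZARD), len(HAZARD[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist[node]:
            continue                      # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + HAZARD[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

path, cost = route((0, 0), (0, 5))
```

With these weights the straight route through the storm costs 14, so the optimizer prefers the 11-step detour around it, which is exactly the diversion-suggestion behavior the decision aid is meant to surface to dispatchers.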
Game theory and risk-based leveed river system planning with noncooperation
NASA Astrophysics Data System (ADS)
Hui, Rui; Lund, Jay R.; Madani, Kaveh
2016-01-01
Optimal risk-based levee designs are usually developed for economic efficiency. However, in river systems with multiple levees, the planning and maintenance of different levees are controlled by different agencies or groups. For example, along many rivers, levees on opposite riverbanks constitute a simple leveed river system with each levee designed and controlled separately. Collaborative planning of the two levees can be economically optimal for the whole system. Independent and self-interested landholders on opposite riversides often are willing to separately determine their individual optimal levee plans, resulting in a less efficient leveed river system from an overall society-wide perspective (the tragedy of the commons). We apply game theory to simple leveed river system planning where landholders on each riverside independently determine their optimal risk-based levee plans. Outcomes from noncooperative games are analyzed and compared with the overall economically optimal outcome, which minimizes net flood cost system-wide. The system-wide economically optimal solution generally transfers residual flood risk to the lower-valued side of the river, but is often impractical without compensating for flood risk transfer to improve outcomes for all individuals involved. Such compensation can be determined and implemented with landholders' agreements on collaboration to develop an economically optimal plan. By examining iterative multiple-shot noncooperative games with reversible and irreversible decisions, we show that the cost of myopia in levee planning decisions underscores the importance of considering the externalities and evolution path of dynamic water resource problems to improve decision-making.
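The levee externality can be made concrete with an invented 2x2 game: raising your own levee lowers your own damage but transfers flood risk to the opposite bank. All payoff numbers below are illustrative, not from the paper.

```python
from itertools import product

STRATS = ("low", "high")
LEVEE_COST = {"low": 0.5, "high": 2.0}

def cost(own, other):
    """Levee cost plus expected flood damage for one landholder."""
    damage = 1.0 if own == "high" else 4.0
    if other == "high":
        damage += 2.0            # risk transferred from the opposite bank
    return LEVEE_COST[own] + damage

pairs = list(product(STRATS, repeat=2))

def total(pair):
    return cost(pair[0], pair[1]) + cost(pair[1], pair[0])

# Nash equilibria: neither player gains by unilaterally deviating.
nash = [(a, b) for a, b in pairs
        if cost(a, b) <= min(cost(d, b) for d in STRATS)
        and cost(b, a) <= min(cost(d, a) for d in STRATS)]
system_opt = min(pairs, key=total)
```

With these numbers, building high is each side's dominant strategy, so ("high", "high") is the unique equilibrium even though ("low", "low") minimizes system-wide cost: the tragedy-of-the-commons gap that compensation agreements are meant to close.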
The Dilution Effect and Information Integration in Perceptual Decision Making
Hotaling, Jared M.; Cohen, Andrew L.; Shiffrin, Richard M.; Busemeyer, Jerome R.
2015-01-01
In cognitive science there is a seeming paradox: On the one hand, studies of human judgment and decision making have repeatedly shown that people systematically violate optimal behavior when integrating information from multiple sources. On the other hand, optimal models, often Bayesian, have been successful at accounting for information integration in fields such as categorization, memory, and perception. This apparent conflict could be due, in part, to different materials and designs that lead to differences in the nature of processing. Stimuli that require controlled integration of information, such as the quantitative or linguistic information (commonly found in judgment studies), may lead to suboptimal performance. In contrast, perceptual stimuli may lend themselves to automatic processing, resulting in integration that is closer to optimal. We tested this hypothesis with an experiment in which participants categorized faces based on resemblance to a family patriarch. The amount of evidence contained in the top and bottom halves of each test face was independently manipulated. These data allow us to investigate a canonical example of suboptimal information integration from the judgment and decision making literature, the dilution effect. Splitting the top and bottom halves of a face, a manipulation meant to encourage controlled integration of information, produced behavior farther from optimal and larger dilution effects. The Multi-component Information Accumulation model, a hybrid optimal/averaging model of information integration, successfully accounts for key accuracy, response time, and dilution effects. PMID:26406323
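The dilution contrast reduces to a few lines of arithmetic, assuming invented log-likelihood ratios (LLRs) for the two face halves:

```python
# LLRs favoring "family member"; both values are invented for illustration.
strong, weak = 2.0, 0.3

optimal_single = strong              # judging the strong half alone
optimal_both = strong + weak         # Bayes: LLRs of independent cues add
average_both = (strong + weak) / 2   # averaging rule

# Under optimal integration, extra weak-but-positive evidence never lowers
# support; under averaging it does -- the dilution effect.
dilution_optimal = optimal_both < optimal_single
dilution_average = average_both < optimal_single
```

A hybrid model like the one above mixes these two rules, which is how it can produce intermediate, manipulation-dependent amounts of dilution.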
Optimal Decision Making in a Class of Uncertain Systems Based on Uncertain Variables
NASA Astrophysics Data System (ADS)
Bubnicki, Z.
2006-06-01
The paper is concerned with a class of uncertain systems described by relational knowledge representations with unknown parameters, which are assumed to be values of uncertain variables characterized by a user in the form of certainty distributions. The first part presents the basic optimization problem, which consists of finding the decision that maximizes the certainty index that a user-given requirement is satisfied. The main part is devoted to the description of the optimization problem with a given certainty threshold. It is shown how the approach presented in the paper may be applied to some problems for anticipatory systems.
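A toy grid-search instance of the certainty-index maximization, with an invented triangular certainty distribution, plant y = x * u, and requirement y in [3, 5] (none of these come from the paper):

```python
# Unknown parameter x has a triangular certainty distribution h(x)
# peaking at x = 2 on the support [1, 3] (assumed).
def h(x):
    if 1.0 <= x <= 2.0:
        return x - 1.0
    if 2.0 < x <= 3.0:
        return 3.0 - x
    return 0.0

def certainty_index(u, grid=2000):
    # v(u) = max certainty over parameter values x for which the plant
    # output y = x * u satisfies the requirement 3 <= y <= 5.
    best = 0.0
    for i in range(grid + 1):
        x = 4.0 * i / grid
        if 3.0 <= x * u <= 5.0:
            best = max(best, h(x))
    return best

us = [0.5 + 0.05 * k for k in range(71)]   # candidate decisions u
best_u = max(us, key=certainty_index)
best_v = certainty_index(best_u)
```

Any decision that places the distribution's mode (x = 2) inside the requirement window, i.e. u between 1.5 and 2.5, attains certainty index 1; the grid search simply picks one such u.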
Munson, Mark; Lieberman, Harvey; Tserlin, Elina; Rocnik, Jennifer; Ge, Jie; Fitzgerald, Maria; Patel, Vinod; Garcia-Echeverria, Carlos
2015-08-01
Herein, we report a novel and general method, lead optimization attrition analysis (LOAA), to benchmark two distinct small-molecule lead series using a relatively unbiased, simple technique and commercially available software. We illustrate this approach with data collected during lead optimization of two independent oncology programs as a case study. Easily generated graphics and attrition curves enabled us to calibrate progress and support go/no go decisions on each program. We believe that this data-driven technique could be used broadly by medicinal chemists and management to guide strategic decisions during drug discovery. Copyright © 2015 Elsevier Ltd. All rights reserved.
Larsen, Nadja; Moslehi-Jenabian, Saloomeh; Werner, Birgit Brøsted; Jensen, Maiken Lund; Garrigues, Christel; Vogensen, Finn Kvist; Jespersen, Lene
2016-06-02
Performance of Lactococcus lactis as a starter culture in dairy fermentations depends on the levels of dissolved oxygen and the redox state of milk. In this study, microarray analysis was used to investigate the global gene expression of L. lactis subsp. lactis DSM20481(T) during milk acidification as affected by oxygen depletion and the decrease of redox potential. Fermentations were carried out at different initial levels of dissolved oxygen (dO2) obtained by milk sparging with oxygen (high dO2, 63%) or nitrogen (low dO2, 6%). Bacterial exposure to high initial oxygen resulted in overexpression of genes involved in detoxification of reactive oxygen species (ROS), oxidation-reduction processes, and biosynthesis of trehalose, and down-regulation of genes involved in purine nucleotide biosynthesis, indicating that several factors, among them trehalose and GTP, were implicated in bacterial adaptation to oxidative stress. Generally, transcriptional changes were more pronounced during fermentation of oxygen-sparged milk. Genes up-regulated in response to oxygen depletion were implicated in biosynthesis and transport of pyrimidine nucleotides, branched chain amino acids and in arginine catabolic pathways; whereas genes involved in salvage of nucleotides and cysteine pathways were repressed. Expression pattern of genes involved in pyruvate metabolism indicated shifts towards mixed acid fermentation after oxygen depletion with production of specific end-products, depending on milk treatment. Differential expression of genes involved in amino acid and pyruvate pathways suggested that initial oxygen might influence the release of flavor compounds and, thereby, flavor development in dairy fermentations. The knowledge of molecular responses involved in adaptation of L. lactis to the shifts of redox state and pH during milk fermentations is important for the dairy industry to ensure better control of cheese production. Copyright © 2016 Elsevier B.V. All rights reserved.
Sen, Indra S; Peucker-Ehrenbrink, Bernhard
2014-03-18
The (187)Os/(188)Os ratio that is based on the β(-)-decay of (187)Re to (187)Os (t1/2 = 41.6 billion years) is widely used to investigate petroleum system processes. Despite its broad applicability to studies of hydrocarbon deposits worldwide, a suitable matrix-matched reference material for Os analysis does not exist. In this study, a method that enables Os isotope measurement of crude oil with in-line Os separation and purification from the sample matrix is proposed. The method to analyze Os concentration and (187)Os/(188)Os involves sample digestion under high pressure and high temperature using a high pressure asher (HPA-S, Anton Paar), sparging of volatile osmium tetroxide from the sample solution, and measurements using multicollector inductively coupled plasma mass spectrometry (MC-ICPMS). This method significantly reduces the total procedural time compared to conventional Carius tube digestion followed by Os separation and purification using solvent extraction, microdistillation and N-TIMS analysis. The method yields Os concentration (28 ± 4 pg g(-1)) and (187)Os/(188)Os (1.62 ± 0.15) of commercially available crude oil reference material NIST 8505 (1 S.D., n = 6). The reference material NIST 8505 is homogeneous with respect to Os concentration at a test portion size of 0.2 g. Therefore, the (187)Os/(188)Os composition and Os concentration of NIST 8505 can serve as a matrix-matched reference material for Os analysis. Data quality was assessed by repeated measurements of the USGS shale reference material SCo-1 (sample matrix similar to petroleum source rock) and the widely used Liquid Os Standard solution (LOsSt). The within-laboratory reproducibility of (187)Os/(188)Os for 5 pg of LOsSt solution, analyzed with this method over a period of 12 months, was ∼1.4% (1 S.D., n = 26).
Sensing the flux of volatile chemicals through the air-water interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackay, D.; Schroeder, W.H.; Ooijen, H. von
1997-12-31
There are several situations in which there is a need to assess the direction and magnitude of the flux across the air-water interface. Contaminants may be evaporating or absorbing in wastewater treatment systems and in natural lake, river, estuarine and marine systems, and any attempt to compile a mass balance must include this process. In this study the authors review the theory underlying air-water exchange, then describe and discuss a sparging approach by which the direction and magnitude of the flux can be ascertained. The principle of the method is that a known flow rate of air is bubbled through the sparger and allowed to equilibrate with the water. The gas exiting the water surface is passed through a sorbent trap and later analyzed. The concentration, and hence the fugacity, of the contaminant in the sparged air can be deduced. In parallel, a similar flow of air from the atmosphere above the water is drawn through another sparger at a similar flow rate for a similar time, and the trapped chemical is analyzed, giving the concentration and fugacity in the air. These data show the direction of air-water exchange (i.e., from high to low fugacity) and, with information on the mass transfer coefficients and area, the flux. Successful tests of the system were conducted in a laboratory tank, in Lake Ontario and in Hamilton Harbour. Analyses of the traps showed a large number of peaks on the chromatogram, many of which are believed to be of petroleum origin from fuels and vessel exhaust. The system will perform best under conditions where concentrations of specific contaminants are large, as occurs in wastewater treatment systems. The approach has the potential to contribute to more accurate assessment of air-water fluxes. It avoids the problems of different analytical methodologies and the effect of sorption in the water column.
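The final flux step can be sketched in Mackay's fugacity format. In the sketch below, all numbers (Henry's law constant, overall mass transfer coefficient, interfacial area, and the trap-derived air concentrations) are invented for illustration and are not values from the study:

```python
# Illustrative fugacity-format flux calculation (invented numbers):
# water fugacity is inferred from air equilibrated in the sparger, air
# fugacity from ambient air drawn through the second sparger.
R, T = 8.314, 293.15                 # gas constant (Pa*m^3/mol*K), temperature (K)
H = 50.0                             # assumed Henry's law constant (Pa*m^3/mol)
K_OL = 0.05                          # assumed overall mass transfer coeff. (m/h)
A = 1.0e6                            # assumed air-water interfacial area (m^2)

C_sparged_air = 2.0e-6               # mol/m^3 in air equilibrated with the water
C_ambient_air = 0.5e-6               # mol/m^3 in ambient air

f_water = C_sparged_air * R * T      # fugacity of the water (Pa)
f_air = C_ambient_air * R * T        # fugacity of the atmosphere (Pa)

Z_water = 1.0 / H                    # fugacity capacity of water (mol/m^3*Pa)
flux = K_OL * A * Z_water * (f_water - f_air)   # mol/h; positive = volatilization
print(f"f_w = {f_water:.2e} Pa, f_a = {f_air:.2e} Pa, flux = {flux:+.2f} mol/h")
```

Here f_water exceeds f_air, so the direction is volatilization; the magnitude follows directly once the mass transfer coefficient and area are known.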
Bobade, Veena; Baudez, Jean Christophe; Evans, Geoffery; Eshtiaghi, Nicky
2017-05-01
Gas injection is known to play a major role in the particle size of sludge flocs, the oxygen transfer rate, and the mixing efficiency of membrane bioreactors and aeration basins in wastewater treatment plants. The rheological characteristics of sludge are closely related to the particle size of the sludge floc. However, the particle size of sludge flocs depends partly on the shear induced in the sludge and partly on the physico-chemical nature of the sludge. The objective of this work is to determine the impact of gas injection on both the apparent viscosity and the viscoelastic properties of sludge. The apparent viscosity of sludge was investigated by two methods: in-situ and after sparging. Viscosity curves obtained by in-situ measurement showed that the apparent viscosity decreases significantly from 4000 Pa s to 10 Pa s in the low shear rate range (below 10 s⁻¹) with an increase in gas flow rate (0.5 LPM to 3 LPM); however, the after-sparging flow curve analysis showed that the reduction in apparent viscosity across the shear rate range was negligible. Torque and displacement data in the low shear rate range revealed that the lower apparent viscosity obtained with the in-situ method is not a material characteristic but a slippage effect due to a preferred location of the bubbles close to the bob, causing an inconsistent decrease of torque and increase of displacement at low shear rates. In the linear viscoelastic regime, the elastic and viscous moduli of sludge were reduced by 33% and 25%, respectively, due to the shear induced by gas injection. The amount of induced shear measured through two different tests (creep and time sweep) was the same. The impact of this induced shear on sludge structure was also verified by microscopic images. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mendoza, J A; Prado, O J; Veiga, M C; Kennes, C
2004-01-01
The hydrodynamic behaviour of a biofilter fed toluene and packed with an inert carrier was evaluated at start-up and after long-term operation, using both methane and styrene as tracers in residence time distribution experiments. Results indicated some deviation from ideal plug-flow behaviour after two years of operation. It was also observed that the retention time of VOCs gradually increased with time and was significantly longer than the average residence time of the bulk gas phase. Non-ideal hydrodynamic behaviour in packed beds may be due to excess biomass accumulation and affects both reactor modeling and performance. Therefore, several methods were studied for the removal of biomass after long-term biofilter operation: filling with water and draining, backwashing, and air sparging. Several flow rates and temperatures (20-60 degrees C) were applied using either water or different chemicals (NaOH, NaOCl, HTAB) in aqueous solution. Usually, higher flow rates and higher temperatures allowed the removal of more biomass, but the efficiency of biomass removal was highly dependent on the pressure drop reached before the treatment. The filling/draining method was the least efficient for biomass removal, although the treatment basically did not generate any biological inhibition. The efficiencies of backwashing and air sparging were relatively similar and were higher when chemicals were added. However, treatments with chemicals resulted in a significant decrease in the biofilter's performance immediately after the treatment, requiring periods of several days to recover the original performance. The effect of manually mixing the packing material was also evaluated in duplicate experiments. Quite large amounts of biomass were removed, but disruption of the filter bed was observed. Batch assays were performed simultaneously in order to support and quantify the observed inhibitory effects of the different chemicals and temperatures used during the treatments.
Clery, Stephane; Cumming, Bruce G.
2017-01-01
Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal “noise” correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. SIGNIFICANCE STATEMENT Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. 
Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. PMID:28100751
Pavement maintenance optimization model using Markov Decision Processes
NASA Astrophysics Data System (ADS)
Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.
2017-09-01
This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivate this study. This paper uses a data set acquired from the road authority of the state of Victoria, Australia, to test the model and recommends steps in the computation of an MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
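The average-cost MDP with a linear-programming solution can be sketched in miniature. The three-condition pavement chain, transition probabilities, and costs below are invented for illustration (the paper's Victorian data and constraint set are not reproduced); the LP is posed over occupation measures, which is the standard dual form:

```python
import numpy as np
from scipy.optimize import linprog

# States: 0 = good, 1 = fair, 2 = poor; actions: 0 = do nothing, 1 = repair.
# Transition matrices P[a][s][s'] and per-step costs c[s][a] are invented.
P = np.array([
    [[0.7, 0.3, 0.0],            # do nothing: pavement deteriorates
     [0.0, 0.6, 0.4],
     [0.0, 0.0, 1.0]],
    [[1.0, 0.0, 0.0],            # repair: pavement mostly returns to good
     [0.9, 0.1, 0.0],
     [0.8, 0.2, 0.0]],
])
c = np.array([[0.0, 5.0],
              [2.0, 6.0],
              [10.0, 8.0]])

nS, nA = 3, 2
# Linear program over occupation measures x[s, a]: minimise average cost
# subject to stationarity (flow balance) and normalisation.
A_eq, b_eq = [], []
for j in range(nS):                  # flow into state j equals flow out
    row = np.zeros(nS * nA)
    for s in range(nS):
        for a in range(nA):
            row[s * nA + a] = (1.0 if s == j else 0.0) - P[a][s][j]
    A_eq.append(row); b_eq.append(0.0)
A_eq.append(np.ones(nS * nA)); b_eq.append(1.0)

res = linprog(c.flatten(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * (nS * nA))
x = res.x.reshape(nS, nA)
policy = x.argmax(axis=1)            # action carrying the occupation mass
print("average cost:", round(res.fun, 3), "policy:", policy.tolist())
```

For this toy instance the LP concentrates occupation mass on "repair when fair", which keeps the chain out of the expensive poor state.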
Decision Variants for the Automatic Determination of Optimal Feature Subset in RF-RFE.
Chen, Qi; Meng, Zhaopeng; Liu, Xinyi; Jin, Qianguo; Su, Ran
2018-06-15
Feature selection, which identifies a set of most informative features from the original feature space, has been widely used to simplify the predictor. Recursive feature elimination (RFE), one of the most popular feature selection approaches, is effective in data dimension reduction and efficiency improvement. A ranking of features, as well as candidate subsets with their corresponding accuracies, is produced through RFE. The subset with the highest accuracy (HA) or a preset number of features (PreNum) is often used as the final subset. However, this may lead to a large number of features being selected, or, if there is no prior knowledge about this preset number, final subset selection is often ambiguous and subjective. A proper decision variant is in high demand to automatically determine the optimal subset. In this study, we conduct pioneering work to explore the decision variant after obtaining a list of candidate subsets from RFE. We provide a detailed analysis and comparison of several decision variants to automatically select the optimal feature subset. A random forest (RF)-recursive feature elimination (RF-RFE) algorithm and a voting strategy are introduced. We validated the variants on two totally different molecular biology datasets, one for a toxicogenomic study and the other for protein sequence analysis. The study provides an automated way to determine the optimal feature subset when using RF-RFE.
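A minimal sketch of the RF-RFE backbone, assuming scikit-learn and using the highest-accuracy (HA) variant described above (the voting strategy and the paper's datasets are not reproduced; the synthetic data stands in for them):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the paper's molecular biology datasets.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)
remaining = list(range(X.shape[1]))
candidates = []                          # (CV accuracy, feature subset)
while remaining:
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    acc = cross_val_score(rf, X[:, remaining], y, cv=5).mean()
    candidates.append((acc, list(remaining)))
    rf.fit(X[:, remaining], y)           # rank features on the current subset
    weakest = int(np.argmin(rf.feature_importances_))
    remaining.pop(weakest)               # recursive elimination step

best_acc, best_subset = max(candidates, key=lambda c: c[0])
print(f"HA variant picked {len(best_subset)} features, CV accuracy {best_acc:.3f}")
```

The decision variants studied in the paper replace the final `max(...)` line: instead of naively taking the highest accuracy, they apply a rule (or a vote among rules) to the `candidates` list to pick a smaller subset automatically.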
A Decision-making Model for a Two-stage Production-delivery System in SCM Environment
NASA Astrophysics Data System (ADS)
Feng, Ding-Zhong; Yamashiro, Mitsuo
A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and buyer (of finished products) sides. Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transports for semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. Also, we examine the sensitivity of raw material ordering and production lot size to changes in ordering cost, transportation cost and manufacturing setup cost. A pragmatic computational approach for operational situations is proposed to obtain integer-approximation solutions. Finally, we give some numerical examples.
Direct adaptive performance optimization of subsonic transports: A periodic perturbation technique
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn
1995-01-01
Aircraft performance can be optimized at the flight condition by using available redundancy among actuators. Effective use of this potential allows improved performance beyond limits imposed by design compromises. Optimization based on nominal models does not result in the best performance of the actual aircraft at the actual flight condition. An adaptive algorithm for optimizing performance parameters, such as speed or fuel flow, in flight based exclusively on flight data is proposed. The algorithm is inherently insensitive to model inaccuracies and measurement noise and biases, and can optimize several decision variables at the same time. An adaptive constraint controller integrated into the algorithm regulates the optimization constraints, such as altitude or speed, without requiring any prior knowledge of the autopilot design. The algorithm has a modular structure which allows easy incorporation (or removal) of optimization constraints or decision variables into the optimization problem. An important part of the contribution is the development of analytical tools enabling convergence analysis of the algorithm and the establishment of simple design rules. The fuel-flow minimization and velocity maximization modes of the algorithm are demonstrated on the NASA Dryden B-720 nonlinear flight simulator for the single- and multi-effector optimization cases.
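The periodic-perturbation idea can be illustrated with a toy extremum-seeking loop: a sinusoidal dither is added to the decision variable, the measured performance is demodulated to estimate the local gradient, and the estimate drives the variable toward the optimum. The plant model, gains, and dither frequency below are invented for illustration; the flight-test algorithm itself is more elaborate:

```python
import math, random

def performance(u):
    # Unknown "plant": a noisy performance index with its optimum at u = 3.
    return -(u - 3.0) ** 2 + random.gauss(0.0, 0.01)

random.seed(1)
a, omega = 0.2, 2.0        # dither amplitude and frequency (rad/s)
k, dt = 0.3, 0.05          # adaptation gain and time step
u, grad = 0.0, 0.0         # decision variable and filtered gradient estimate
for step in range(6000):
    t = step * dt
    y = performance(u + a * math.sin(omega * t))  # perturbed measurement
    demod = y * math.sin(omega * t)               # DC part ~ (a/2) * dJ/du
    grad += dt * (demod - grad)                   # low-pass filter
    u += dt * k * grad                            # climb the estimated gradient
print(f"decision variable settled near u = {u:.2f}")
```

Because only measured performance enters the loop, the scheme is insensitive to modeling error, which mirrors the model-free property claimed for the flight algorithm.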
NASA Astrophysics Data System (ADS)
Pierce, S. A.; Ciarleglio, M.; Dulay, M.; Lowry, T. S.; Sharp, J. M.; Barnes, J. W.; Eaton, D. J.; Tidwell, V. C.
2006-12-01
Work in the literature on groundwater allocation emphasizes finding a truly optimal solution, often with the drawback of limiting the reported results to either maximizing net benefit in regional-scale models or minimizing pumping costs for localized cases. From a policy perspective, limited insight can be gained from these studies because the results are restricted to a single, efficient solution and they neglect non-market values that may influence a management decision. Conversely, economically derived objective functions tend to exhibit a plateau upon nearing the optimal value. This plateau effect, or non-uniqueness, is actually a positive feature in the behavior of groundwater systems because it demonstrates that multiple management strategies, serving numerous community preferences, may be considered while still achieving similar quantitative results. An optimization problem takes a set of initial conditions and looks for the most efficient solution, while a decision problem looks at a situation and asks for a solution that meets certain user-defined criteria. In other words, the selection of an alternative course of action using a decision support system will not always result in selection of the most 'optimized' alternative. To broaden the analytical toolset available for science and policy interaction, we have developed a groundwater decision support system (GWDSS) that generates a suite of management alternatives by pairing a combinatorial search algorithm with a numerical groundwater model for consideration by decision makers and stakeholders. Subject to constraints defined by community concerns, the tabu optimization engine systematically creates hypothetical management scenarios, running hundreds or even thousands of simulations and saving the best-performing realizations.
Results of the search are then evaluated against stakeholder preference sets using ranking methods to aid in identifying a subset of alternatives for final consideration. Here we present the development of the GWDSS and its use in the decision-making process for the Barton Springs segment of the Edwards Aquifer located in Austin, Texas. Using hydrogeologic metrics, together with economic estimates and impervious cover valuations, representative rankings are determined. Post-search multi-objective analysis reveals that some highly ranked alternatives meet the preference sets of more than one stakeholder and achieve similar quantitative aquifer performance. These results are important to modelers and policy makers alike.
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model simultaneously determines three types of decisions: (1) facility location (the optimal number, location, and size of DCs); (2) allocation (the assignment of suppliers to located DCs and retailers to located DCs, and the corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and the amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The model was solved using the commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory levels, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer. PMID:28103246
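The structure of the location-plus-allocation decisions with a carbon tax can be illustrated with a stylized miniature, solved by brute force rather than LINGO. All costs, distances, demands, and the tax rate below are invented, and the inventory-control layer of the paper's model is omitted:

```python
# Stylized integrated decision: which DCs to open, and which open DC serves
# each retailer, minimising fixed cost + freight + carbon tax on transport.
from itertools import combinations

fixed = [100.0, 120.0, 90.0]           # fixed cost of opening each candidate DC
dist = [[4, 9, 7],                     # dist[r][d]: retailer r to DC d
        [8, 3, 6],
        [5, 6, 2],
        [9, 4, 5]]
demand = [10, 14, 8, 12]
freight = 1.0                          # transport cost per unit-distance
e_rate, tax = 0.1, 2.0                 # emissions per unit-distance, tax per unit

best_cost, best_plan = float("inf"), None
for k in range(1, 4):                  # every non-empty set of open DCs
    for open_dcs in combinations(range(3), k):
        cost = sum(fixed[d] for d in open_dcs)
        assign = []
        for r in range(4):             # each retailer uses its nearest open DC
            d = min(open_dcs, key=lambda j: dist[r][j])
            cost += demand[r] * dist[r][d] * (freight + e_rate * tax)
            assign.append(d)
        if cost < best_cost:
            best_cost, best_plan = cost, (open_dcs, tuple(assign))
print("open DCs:", best_plan[0], "assignment:", best_plan[1],
      "total cost:", round(best_cost, 1))
```

Raising `tax` inflates the effective per-distance cost, which can tip the optimum toward opening more, closer DCs; that is the kind of structural sensitivity the paper reports.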
Optimizing the response to surveillance alerts in automated surveillance systems.
Izadi, Masoumeh; Buckeridge, David L
2011-02-28
Although much research effort has been directed toward refining algorithms for disease outbreak alerting, considerably less attention has been given to the response to alerts generated by statistical detection algorithms. Given the inherent inaccuracy in alerting, it is imperative to develop methods that help public health personnel identify optimal policies in response to alerts. This study evaluates the application of dynamic decision-making models to the problem of responding to outbreak detection methods, using anthrax surveillance as an example. Adaptive optimization through approximate dynamic programming is used to generate a policy for decision making following outbreak detection. We investigate theoretically the degree of noise the model can tolerate while maintaining near-optimal behavior. We also evaluate the policy from our model empirically and compare it with current approaches in routine public health practice for investigating alerts. Timeliness of outbreak confirmation and total costs associated with the decisions made are used as performance measures. Using our approach, on average, 80 per cent of outbreaks were confirmed prior to the fifth day post-attack, at considerably lower cost than response strategies currently in use. Experimental results are also provided to illustrate the robustness of the adaptive optimization approach and to show the realization of the derived error bounds in practice. Copyright © 2011 John Wiley & Sons, Ltd.
Irwin, R John; Irwin, Timothy C
2011-06-01
Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
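The threshold rule can be checked numerically. The sketch below uses one common form of the optimal-slope result (prior odds against disease times a utility ratio), binormal test scores, and invented utilities and prevalence (scipy assumed); it sweeps candidate thresholds, maximizes expected utility, and confirms that the likelihood ratio at the best threshold matches the prescribed ROC slope:

```python
import numpy as np
from scipy.stats import norm

p = 0.1                                # assumed disease prevalence
U_TP, U_FN, U_TN, U_FP = 0.9, 0.0, 1.0, 0.8   # illustrative utilities
# Optimal ROC slope: odds against disease times the utility ratio weighing
# specificity against sensitivity.
S = ((1 - p) / p) * (U_TN - U_FP) / (U_TP - U_FN)

ts = np.linspace(-4, 6, 2001)          # candidate thresholds on the test value
sens = 1 - norm.cdf(ts, loc=1)         # diseased scores ~ N(1, 1)
spec = norm.cdf(ts, loc=0)             # healthy scores ~ N(0, 1)
EU = (p * (sens * U_TP + (1 - sens) * U_FN)
      + (1 - p) * (spec * U_TN + (1 - spec) * U_FP))
t_best = ts[EU.argmax()]               # threshold maximising expected utility
lr_best = norm.pdf(t_best, loc=1) / norm.pdf(t_best, loc=0)
print(f"slope target S = {S:.3f}, LR at best threshold = {lr_best:.3f}")
```

This is exactly the tangency condition described above: the utility-maximizing operating point is where the ROC curve's slope, i.e. the test's likelihood ratio, equals S.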
A decision modeling for phasor measurement unit location selection in smart grid systems
NASA Astrophysics Data System (ADS)
Lee, Seung Yup
As a key technology for enhancing the smart grid, the Phasor Measurement Unit (PMU) provides synchronized phasor measurements of voltages and currents across a wide-area electric power grid. Despite the various benefits of PMU deployment, one of the critical issues in utilizing PMUs is the optimal site selection of units. The main aim of this research is to develop a decision support system that can be used in resource allocation tasks for smart grid system analysis. As an effort to suggest a robust decision model and standardize the decision modeling process, a harmonized modeling framework, which considers the operational circumstances of components, is proposed in connection with a deterministic approach utilizing integer programming. With the results obtained from the optimal PMU placement problem, the advantages and potential of the harmonized modeling process are assessed and discussed.
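The integer-programming core of optimal PMU placement can be sketched on a toy topology: place the fewest PMUs such that every bus is observed, where a PMU observes its own bus and all adjacent buses. The 7-bus network below is made up, the set-cover constraint is solved by brute force rather than a MIP solver, and the thesis's harmonized operational-circumstance layer is omitted:

```python
from itertools import combinations

# Made-up 7-bus topology (edges between bus indices).
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6)]
n = 7
neigh = {b: {b} for b in range(n)}     # a PMU sees its bus and its neighbours
for u, v in edges:
    neigh[u].add(v); neigh[v].add(u)

def observed(pmus):
    seen = set()
    for b in pmus:
        seen |= neigh[b]               # union of coverage sets
    return seen

best = None
for k in range(1, n + 1):              # smallest cardinality first
    for pmus in combinations(range(n), k):
        if len(observed(pmus)) == n:   # observability constraint A x >= 1
            best = pmus
            break
    if best:
        break
print("optimal PMU buses:", best)
```

For this topology two units suffice (buses 0 and 4 together see all seven buses); a real study would express the same coverage constraint as an integer program and add operational considerations.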
Monitoring and decision making by people in man machine systems
NASA Technical Reports Server (NTRS)
Johannsen, G.
1979-01-01
The analysis of human monitoring and decision-making behavior, as well as its modeling, is described. Classical and optimal-control-theoretic monitoring models are surveyed. The relationship between attention allocation and eye movements is discussed. As an example of applications, the evaluation of predictor displays by means of the optimal control model is explained. Fault detection involving continuous signals and the decision-making behavior of a human operator engaged in fault diagnosis during different operation and maintenance situations are illustrated. Computer-aided decision making is considered as a queueing problem. It is shown to what extent computer aids can be based on the state of human activity as measured by psychophysiological quantities. Finally, management information systems for different application areas are mentioned. The possibilities of mathematical modeling of human behavior in complex man-machine systems are also critically assessed.
Optimal data systems: the future of clinical predictions and decision support.
Celi, Leo A; Csete, Marie; Stone, David
2014-10-01
The purpose of the review is to describe the evolving concept and role of data as it relates to clinical predictions and decision-making. Critical care medicine is, as an especially data-rich specialty, becoming acutely cognizant not only of its historic deficits in data utilization but also of its enormous potential for capturing, mining, and leveraging such data into well-designed decision support modalities as well as the formulation of robust best practices. Modern electronic medical records create an opportunity to design complete and functional data systems that can support clinical care to a degree never seen before. Such systems are often referred to as 'data-driven,' but a better term is 'optimal data systems' (ODS). Here we discuss basic features of an ODS and its benefits, including the potential to transform clinical prediction and decision support.
Decision Making and Reward in Frontal Cortex
Kennerley, Steven W.; Walton, Mark E.
2011-01-01
Patients with damage to the prefrontal cortex (PFC)—especially the ventral and medial parts of PFC—often show a marked inability to make choices that meet their needs and goals. These decision-making impairments often reflect both a deficit in learning the consequences of a choice and a deficit in the ability to adapt future choices based on the experienced value of the current choice. Thus, areas of PFC must support value computations that are necessary for optimal choice. However, recent frameworks of decision making have highlighted that optimal and adaptive decision making does not rest on a single computation; a number of different value computations may be necessary. Using this framework as a guide, we summarize evidence from both lesion studies and single-neuron physiology for the representation of different value computations across PFC areas. PMID:21534649
Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu
2015-01-01
Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
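The flavor of such an LRT fusion rule can be sketched as follows. The sketch assumes binary local decisions sent over binary symmetric channels characterized by an average BER; all probabilities are invented, and the paper's modulation and reception-mode modeling is not reproduced:

```python
import math, random

Pd, Pf, Pe = 0.8, 0.1, 0.05            # local detection / false alarm, channel BER

def llr(bit):
    # Likelihood of receiving `bit` under H1 and H0, accounting for bit flips.
    p1 = (1 - Pe) * (Pd if bit else 1 - Pd) + Pe * ((1 - Pd) if bit else Pd)
    p0 = (1 - Pe) * (Pf if bit else 1 - Pf) + Pe * ((1 - Pf) if bit else Pf)
    return math.log(p1 / p0)

def fuse(received_bits, threshold=0.0):
    # FC sums per-sensor log-likelihood ratios of the received bits.
    return sum(llr(b) for b in received_bits) > threshold

random.seed(0)
N, trials, detected = 10, 2000, 0      # simulate N sensors under H1
for _ in range(trials):
    bits = []
    for _ in range(N):
        u = 1 if random.random() < Pd else 0          # local decision under H1
        bits.append(u ^ (random.random() < Pe))       # channel bit flip
    detected += fuse(bits)
print(f"empirical global detection probability: {detected / trials:.3f}")
```

Because the BER enters the per-bit likelihoods directly, the fused statistic automatically discounts unreliable channels, which is the robustness property the paper emphasizes.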
Gagnon, Marie-Pierre; Légaré, France; Fortin, Jean-Paul; Lamothe, Lise; Labrecque, Michel; Duplantie, Julie
2008-01-01
Background E-health is increasingly valued for supporting: 1) access to quality health care services for all citizens; 2) information flow and exchange; 3) integrated health care services; and 4) interprofessional collaboration. Nevertheless, several questions remain on the factors allowing an optimal integration of e-health in health care policies, organisations and practices. An evidence-based integrated strategy would maximise the efficacy and efficiency of e-health implementation. However, decisions regarding e-health applications are usually not evidence-based, which can lead to a sub-optimal use of these technologies. This study aims at understanding the factors influencing the application of scientific knowledge for an optimal implementation of e-health in the health care system. Methods A three-year multi-method study is being conducted in the Province of Quebec (Canada). Decision-making at each decisional level (political, organisational and clinical) is analysed based on specific approaches. At the political level, critical incident analysis is being used. This method will identify how decisions regarding the implementation of e-health could be influenced or not by scientific knowledge. Then, interviews with key decision-makers will look at how knowledge was actually used to support their decisions, and what factors influenced its use. At the organisational level, e-health projects are being analysed as case studies in order to explore the use of scientific knowledge to support decision-making during the implementation of the technology. Interviews with promoters, managers and clinicians will be carried out in order to identify factors influencing the production and application of scientific knowledge. At the clinical level, questionnaires are being distributed to clinicians involved in e-health projects in order to analyse factors influencing knowledge application in their decision-making.
Finally, the results will be triangulated using mixed methodologies to allow a transversal analysis across the decisional levels. Results This study will identify factors influencing the use of scientific evidence and other types of knowledge by decision-makers involved in planning, financing, implementing and evaluating e-health projects. Conclusion These results will be highly relevant for informing decision-makers who wish to optimise the implementation of e-health in the Quebec health care system. This study is particularly relevant given the context of major transformations in the health care system, where e-health is becoming a must. PMID:18435853
Optimal and Nonoptimal Computer-Based Test Designs for Making Pass-Fail Decisions
ERIC Educational Resources Information Center
Hambleton, Ronald K.; Xing, Dehui
2006-01-01
Now that many credentialing exams are being routinely administered by computer, new computer-based test designs, along with item response theory models, are being aggressively researched to identify specific designs that can increase the decision consistency and accuracy of pass-fail decisions. The purpose of this study was to investigate the…
An Optimization Model for the Allocation of University Based Merit Aid
ERIC Educational Resources Information Center
Sugrue, Paul K.
2010-01-01
The allocation of merit-based financial aid during the college admissions process presents postsecondary institutions with complex and financially expensive decisions. This article describes the application of linear programming as a decision tool in merit-based financial aid decisions at a medium-sized private university. The objective defined for…
Anderson, D.R.
1975-01-01
Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of the general theory. The number of small ponds on the central breeding grounds was used as an index of the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least up to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models, and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component of the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population.
Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
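The kind of stochastic dynamic program described here can be sketched in a few lines of backward induction. The population grid, environment transition probabilities, growth rates, and harvest options below are invented stand-ins for illustration, not the Mallard parameters estimated in the study:

```python
import itertools

# Toy stochastic dynamic program: choose a harvest rate each year based on
# the observed population state and a Markovian (serially correlated)
# environment state. All numbers are illustrative.

POP = [100, 200, 300]            # discretized breeding-population sizes
ENV = [0, 1]                     # environment: 0 = poor (few ponds), 1 = good
HARVEST = [0.0, 0.1, 0.2]        # candidate exploitation rates

# Serially correlated environment: P(next env | current env)
ENV_T = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.3, 1: 0.7}}

def next_pop(pop, env, h):
    """Deterministic core of the transition: growth depends on env, minus harvest."""
    growth = 1.3 if env == 1 else 1.0
    new = pop * growth * (1.0 - h)
    # snap back onto the discrete population grid
    return min(POP, key=lambda p: abs(p - new))

def solve(horizon):
    """Backward induction; returns value and policy tables keyed by (t, pop, env)."""
    V = {(horizon, p, e): 0.0 for p in POP for e in ENV}
    policy = {}
    for t in range(horizon - 1, -1, -1):
        for p, e in itertools.product(POP, ENV):
            best_h, best_v = None, float("-inf")
            for h in HARVEST:
                reward = h * p  # birds harvested this year
                future = sum(ENV_T[e][e2] * V[(t + 1, next_pop(p, e, h), e2)]
                             for e2 in ENV)
                if reward + future > best_v:
                    best_v, best_h = reward + future, h
            V[(t, p, e)] = best_v
            policy[(t, p, e)] = best_h
    return V, policy

V, policy = solve(horizon=10)
```

The feedback structure the abstract emphasizes is explicit here: the optimal harvest rate in year t is read out of `policy` as a function of the observed population and environment state, rather than being a fixed average harvest.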
Decision-making under surprise and uncertainty: Arsenic contamination of water supplies
NASA Astrophysics Data System (ADS)
Randhir, Timothy O.; Mozumder, Pallab; Halim, Nafisa
2018-05-01
With ignorance and potential surprise dominating decision making in water resources, a framework for dealing with such uncertainty is a critical need in hydrology. We operationalize the 'potential surprise' criterion proposed by Shackle, Vickers, and Katzner (SVK) to derive decision rules to manage water resources under uncertainty and ignorance. We apply this framework to managing water supply systems in Bangladesh that face severe, naturally occurring arsenic contamination. The uncertainty involved with arsenic in water supplies makes conventional decision-making analysis ineffective. Given the uncertainty and surprise involved in such cases, we find that optimal decisions tend to favor actions that avoid irreversible outcomes instead of conventionally cost-effective actions. We observe that diversification of the water supply system also emerges as a robust strategy to avert unintended outcomes of water contamination. Shallow wells had a slightly higher optimal allocation (36%) compared to deep wells and surface treatment, which had allocation levels of roughly 32% each. The approach can be applied in a variety of other cases that involve decision making under uncertainty and surprise, a frequent situation in natural resources management.
Pashaei, Elnaz; Ozen, Mustafa; Aydin, Nizamettin
2015-08-01
Improving the accuracy of supervised classification algorithms in biomedical applications is an active area of research. In this study, we improve the performance of Particle Swarm Optimization (PSO) combined with a C4.5 decision tree (PSO+C4.5) classifier by applying a Boosted C5.0 decision tree as the fitness function. To evaluate the effectiveness of the proposed method, it was implemented on 1 microarray dataset and 5 different medical datasets obtained from the UCI machine learning databases. Moreover, the results of the PSO + Boosted C5.0 implementation are compared to eight well-known benchmark classification methods (PSO+C4.5, support vector machine with a Radial Basis Function kernel, Classification And Regression Tree (CART), C4.5 decision tree, C5.0 decision tree, Boosted C5.0 decision tree, Naive Bayes, and Weighted K-Nearest Neighbor). A repeated five-fold cross-validation method was used to assess the performance of the classifiers. Experimental results show that the proposed method not only improves the performance of PSO+C4.5 but also obtains higher classification accuracy than the other classification methods.
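The wrapper structure described here, a PSO searching over feature subsets while a tree classifier supplies the fitness, can be sketched with a binary PSO. To keep the sketch self-contained, a toy stand-in fitness replaces the Boosted C5.0 cross-validation accuracy used in the study; the particle counts and coefficients are illustrative defaults:

```python
import math
import random

random.seed(0)

N_FEATURES = 10
INFORMATIVE = {0, 2, 5}   # pretend these features carry signal

def fitness(mask):
    """Stand-in for the Boosted C5.0 cross-validation accuracy used as the
    fitness function in the paper: reward informative features, penalize size."""
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.1 * sum(mask)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_pso(n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(n_particles)]
    vel = [[0.0] * N_FEATURES for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_FEATURES):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # binary PSO: velocity sets the probability of selecting feature d
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best_mask, best_f = binary_pso()
```

In the paper's setting, `fitness` would train and cross-validate a boosted tree on the masked feature set; everything else in the wrapper loop stays the same.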
Multicriteria Selection of Optimal Location of TCSC in a Competitive Energy Market
NASA Astrophysics Data System (ADS)
Alomoush, Muwaffaq I.
2010-05-01
The paper investigates selection of the best location of a thyristor-controlled series compensator (TCSC) in a transmission system from many candidate locations in a competitive energy market, such that the TCSC has a net beneficial impact on congestion management outcomes, transmission utilization, transmission losses, voltage stability, the degree of fulfillment of spot market contracts, and system security. The problem is treated as a multicriteria decision-making process in which the candidate locations of the TCSC are the alternatives and the conflicting objectives are the outcomes of the dispatch process, which may have different importance weights. The paper proposes performance indices that the dispatch decision-making entity can use to measure the market dispatch outcomes of each alternative. Based on agreed-upon preferences, the measures presented may help the decision maker compare and rank dispatch scenarios to ultimately decide which location is optimal. To solve the multicriteria decision problem, we use the preference ranking organization method for enrichment evaluations (PROMETHEE), a multicriteria decision support method that can handle complex conflicting-objective decision-making processes.
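The PROMETHEE ranking step can be illustrated compactly. The candidate locations, criterion scores, and weights below are invented; a real study would use the dispatch-outcome performance indices described above and richer preference functions than the simple "usual" one:

```python
# Minimal PROMETHEE II sketch for ranking candidate TCSC locations.
# Rows = candidate locations, columns = criteria (e.g., congestion relief,
# losses, voltage stability), all expressed so that larger is better.

alternatives = ["bus 4-5", "bus 7-8", "bus 9-14"]
weights = [0.5, 0.3, 0.2]
scores = [
    [0.8, 0.4, 0.6],   # bus 4-5
    [0.6, 0.9, 0.5],   # bus 7-8
    [0.7, 0.5, 0.9],   # bus 9-14
]

def usual_preference(d):
    """The simplest PROMETHEE preference function: strict preference if better at all."""
    return 1.0 if d > 0 else 0.0

def net_flows(scores, weights):
    n = len(scores)
    flows = []
    for a in range(n):
        phi = 0.0
        for b in range(n):
            if a == b:
                continue
            pref_ab = sum(w * usual_preference(scores[a][k] - scores[b][k])
                          for k, w in enumerate(weights))
            pref_ba = sum(w * usual_preference(scores[b][k] - scores[a][k])
                          for k, w in enumerate(weights))
            phi += pref_ab - pref_ba
        flows.append(phi / (n - 1))
    return flows

phi = net_flows(scores, weights)
ranking = sorted(zip(alternatives, phi), key=lambda t: -t[1])
```

The net flows sum to zero by construction, and the alternative with the highest net flow is ranked best (the PROMETHEE II complete ranking).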
Supply chain optimization for pediatric perioperative departments.
Davis, Janice L; Doyle, Robert
2011-09-01
Economic challenges compel pediatric perioperative departments to reduce nonlabor supply costs while maintaining the quality of patient care. Optimization of the supply chain introduces a framework for decision making that drives fiscally responsible decisions. The cost-effective supply chain is driven by implementing a value analysis process for product selection, being mindful of product sourcing decisions to reduce supply expense, creating logistical efficiency that will eliminate redundant processes, and managing inventory to ensure product availability. The value analysis approach is an analytical methodology for product selection that involves product evaluation and recommendation based on consideration of clinical benefit, overall financial impact, and revenue implications. Copyright © 2011 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Ma, Wei Ji; Shen, Shan; Dziugaite, Gintare; van den Berg, Ronald
2015-01-01
In tasks such as visual search and change detection, a key question is how observers integrate noisy measurements from multiple locations to make a decision. Decision rules proposed to model this process have fallen into two categories: Bayes-optimal (ideal observer) rules and ad hoc rules. Among the latter, the maximum-of-outputs (max) rule has been the most prominent. Reviewing recent work and performing new model comparisons across a range of paradigms, we find that in all cases except one, the optimal rule describes human data as well as or better than every max rule either previously proposed or newly introduced here. This casts doubt on the utility of the max rule for understanding perceptual decision-making. PMID:25584425
Algorithms for optimizing the treatment of depression: making the right decision at the right time.
Adli, M; Rush, A J; Möller, H-J; Bauer, M
2003-11-01
Medication algorithms for the treatment of depression are designed to optimize both treatment implementation and the appropriateness of treatment strategies. Thus, they are essential tools for treating and avoiding refractory depression. Treatment algorithms are explicit treatment protocols that provide specific therapeutic pathways and decision-making tools at critical decision points throughout the treatment process. The present article provides an overview of major projects of algorithm research in the field of antidepressant therapy. The Berlin Algorithm Project and the Texas Medication Algorithm Project (TMAP) compare algorithm-guided treatments with treatment as usual. The Sequenced Treatment Alternatives to Relieve Depression Project (STAR*D) compares different treatment strategies in treatment-resistant patients.
Regret and the rationality of choices
Bourgeois-Gironde, Sacha
2010-01-01
Regret helps to optimize decision behaviour. It can be defined as a rational emotion. Several recent neurobiological studies have confirmed the interface between emotion and cognition at which regret is located and documented its role in decision behaviour. These data give credibility to the incorporation of regret in decision theory that had been proposed by economists in the 1980s. However, finer distinctions are required in order to get a better grasp of how regret and behaviour influence each other. Regret can be defined as a predictive error signal but this signal does not necessarily transpose into a decision-weight influencing behaviour. Clinical studies on several types of patients show that the processing of an error signal and its influence on subsequent behaviour can be dissociated. We propose a general understanding of how regret and decision-making are connected in terms of regret being modulated by rational antecedents of choice. Regret and the modification of behaviour on its basis will depend on the criteria of rationality involved in decision-making. We indicate current and prospective lines of research in order to refine our views on how regret contributes to optimal decision-making. PMID:20026463
Optimization techniques using MODFLOW-GWM
Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.
2015-01-01
An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) Non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) Efficient selection of candidate well and drawdown constraint locations; and 3) Optimizing against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique, and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.
NASA Astrophysics Data System (ADS)
Fox, Matthew D.
Advanced automotive technology assessment and powertrain design are increasingly performed through modeling, simulation, and optimization. But technology assessments usually target many competing criteria, making any individual optimization challenging and arbitrary. Further, independent design simulations and optimizations take considerable time to execute, and design constraints and objectives change throughout the design process. Changes in design considerations usually require re-processing of simulations and more time. In this thesis, these challenges are confronted through CSU's participation in the EcoCAR2 hybrid vehicle design competition. The complexity of the competition's design objectives drove the development of a decision support system tool to aid multi-criteria decision making across technologies and to perform powertrain optimization. To make the decision support system interactive, and to bypass the problem of long simulation times, a new approach was taken. The result of this research is CSU's architecture selection and component sizing, which optimizes a composite objective function representing the competition score. The selected architecture is an electric vehicle with an onboard range-extending hydrogen fuel cell system. The vehicle has a 145 kW traction motor, 18.9 kWh of lithium-ion battery, a 15 kW fuel cell system, and 5 kg of hydrogen storage capacity. Finally, a control strategy was developed that improves the vehicle's performance throughout the driving range under variable driving conditions. In conclusion, the design process used in this research is reviewed and evaluated against other common design methodologies. I conclude, through the highlighted case studies, that the approach is more comprehensive than other popular design methodologies and is likely to lead to a higher-quality product. 
The upfront modeling work and decision support system formulation will pay off in superior and timely knowledge transfer and more informed design decisions. The hypothesis is supported by the three case studies examined in this thesis.
Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.
2009-01-01
Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. 
Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
Ren, Jingzheng; Liang, Hanwei; Dong, Liang; Sun, Lu; Gao, Zhiqiu
2016-08-15
Industrial symbiosis provides a novel and practical pathway to design for sustainability. A decision support tool for its verification is necessary for practitioners and policy makers, but to date quantitative research has been limited. The objective of this work is to present an innovative approach for supporting decision-making in design for sustainability with the implementation of industrial symbiosis in a chemical complex. By incorporating emergy theory, the model is formulated as a multi-objective approach that can optimize both the economic benefit and the sustainability performance of the integrated industrial system. A set of emergy-based evaluation indices is designed. A multi-objective Particle Swarm Algorithm is proposed to solve the model, and decision-makers are allowed to choose suitable solutions from the Pareto set. An illustrative case has been studied by the proposed method; a number of compromises between high profitability and high sustainability can be obtained for decision-makers/stakeholders. Copyright © 2016 Elsevier B.V. All rights reserved.
A framework for sensitivity analysis of decision trees.
Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław
2018-01-01
In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
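A minimal version of the idea, checking whether the expected-value-maximizing action survives a pessimistic perturbation of the probabilities, can be sketched as follows; the two-action tree and its payoffs are invented for illustration:

```python
# A toy version of the paper's setting: a decision node whose chance outcomes
# have uncertain probabilities. We compare the expected-value-optimal action
# under nominal probabilities against a pessimistic perturbation.

actions = {
    "safe":  [(1.0, 50.0)],                   # (probability, payoff)
    "risky": [(0.6, 100.0), (0.4, -20.0)],
}

def expected_value(lotteries):
    return sum(p * v for p, v in lotteries)

def pessimistic(lotteries, eps):
    """Shift eps of probability mass toward the worst outcome: a crude
    pessimistic perturbation in the spirit of the framework."""
    if len(lotteries) == 1:
        return lotteries
    worst = min(range(len(lotteries)), key=lambda i: lotteries[i][1])
    out = []
    for i, (p, v) in enumerate(lotteries):
        if i == worst:
            out.append((min(1.0, p + eps), v))
        else:
            out.append((max(0.0, p - eps / (len(lotteries) - 1)), v))
    return out

nominal_best = max(actions, key=lambda a: expected_value(actions[a]))
robust_best = max(actions, key=lambda a: expected_value(pessimistic(actions[a], 0.2)))
```

Here the nominally optimal "risky" action loses to "safe" once 0.2 of probability mass shifts toward the worst outcome, which is exactly the kind of strategy instability the sensitivity framework is designed to surface.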
Moon, Ji-Won; Phelps, Tommy J.; Fitzgerald Jr, Curtis L.; ...
2016-04-27
The thermophilic anaerobic metal-reducing bacterium Thermoanaerobacter sp. X513 efficiently produces zinc sulfide (ZnS) nanoparticles (NPs) in laboratory-scale (≤24-L) reactors. To determine whether this process can be up-scaled and adapted for pilot-plant production while maintaining NP yield and quality, a series of meso-scale experiments were performed using 100-L and 900-L reactors. Pasteurization and N2-sparging replaced autoclaving and boiling for deoxygenating media in the transition from small-scale to pilot-plant reactors. Consecutive 100-L batches using new or recycled media produced ZnS NPs with highly reproducible ~2-nm average crystallite size (ACS) and yields of ~0.5 g L-1, similar to small-scale batches. The 900-L pilot-plant reactor produced ~320 g ZnS without process optimization or replacement of used medium; this quantity would be sufficient to form a ZnS thin film with ~120 nm thickness over 0.5 m width × 13 km length. At all scales, the bacteria produced significant amounts of acetic, lactic, and formic acids, which could be neutralized by the controlled addition of sodium hydroxide without the use of an organic pH buffer, eliminating 98% of the buffer chemical costs. The final NP products were characterized using XRD, ICP-OES, FTIR, DLS, and C/N analyses, which confirmed that the growth medium without organic buffer enhanced the ZnS NP properties by reducing carbon and nitrogen surface coatings and supporting better dispersivity with similar ACS.
Moon, Ji-Won; Phelps, Tommy J; Fitzgerald, Curtis L; Lind, Randall F; Elkins, James G; Jang, Gyoung Gug; Joshi, Pooran C; Kidder, Michelle; Armstrong, Beth L; Watkins, Thomas R; Ivanov, Ilia N; Graham, David E
2016-09-01
The thermophilic anaerobic metal-reducing bacterium Thermoanaerobacter sp. X513 efficiently produces zinc sulfide (ZnS) nanoparticles (NPs) in laboratory-scale (≤ 24-L) reactors. To determine whether this process can be up-scaled and adapted for pilot-plant production while maintaining NP yield and quality, a series of pilot-plant scale experiments were performed using 100-L and 900-L reactors. Pasteurization and N2-sparging replaced autoclaving and boiling for deoxygenating media in the transition from small-scale to pilot plant reactors. Consecutive 100-L batches using new or recycled media produced ZnS NPs with highly reproducible ~2-nm average crystallite size (ACS) and yields of ~0.5 g L(-1), similar to the small-scale batches. The 900-L pilot plant reactor produced ~320 g ZnS without process optimization or replacement of used medium; this quantity would be sufficient to form a ZnS thin film with ~120 nm thickness over 0.5 m width × 13 km length. At all scales, the bacteria produced significant amounts of acetic, lactic, and formic acids, which could be neutralized by the controlled addition of sodium hydroxide without the use of an organic pH buffer, eliminating 98 % of the buffer chemical costs. The final NP products were characterized using XRD, ICP-OES, TEM, FTIR, PL, DLS, HPLC, and C/N analyses, which confirmed that the growth medium without organic buffer enhanced the ZnS NP properties by reducing carbon and nitrogen surface coatings and supporting better dispersivity with similar ACS.
1988-08-19
take place over the period of several days. Decisions regarding MOPP level or resource allocation made on day 1 may have no immediate impact, but a...present conditions, and manage a resource library to assist the DCA in making decisions under conditions of uncertainty. Several areas of utilization are...students work through a scenario, the device could then display the consequences of those decisions or provide optimal decision recommendations
A model of interaction between anticorruption authority and corruption groups
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neverova, Elena G.; Malafeyef, Oleg A.
The paper provides a model of interaction between an anticorruption unit and corruption groups. The main policy functions of the anticorruption unit involve reducing corrupt practices in some entities through an optimal approach to resource allocation and effective anticorruption policy. We develop a model based on a Markov decision process and use Howard's policy-improvement algorithm to solve for an optimal decision strategy. We examine the assumption that corruption groups retaliate against the anticorruption authority to protect themselves. The model was implemented as a stochastic game.
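Howard's policy-improvement algorithm alternates policy evaluation with greedy improvement until the policy is stable. A sketch on an invented two-state, two-action version of the enforcement problem (all transition probabilities and rewards are illustrative, not taken from the paper):

```python
# Howard policy iteration on a toy two-state MDP: the state is the corruption
# level of an entity, the action is how much enforcement resource to commit.

STATES = ["low", "high"]
ACTIONS = ["ignore", "audit"]
GAMMA = 0.9

# P[(s, a)] -> {next_state: prob}; R[(s, a)] -> immediate reward to the authority
P = {
    ("low", "ignore"):  {"low": 0.7, "high": 0.3},
    ("low", "audit"):   {"low": 0.95, "high": 0.05},
    ("high", "ignore"): {"low": 0.1, "high": 0.9},
    ("high", "audit"):  {"low": 0.6, "high": 0.4},
}
R = {
    ("low", "ignore"): 10.0,   # no enforcement cost, low corruption damage
    ("low", "audit"):   5.0,   # enforcement cost while things are fine
    ("high", "ignore"): -5.0,  # corruption damage
    ("high", "audit"):  -2.0,  # damage plus cost, but better transitions
}

def evaluate(policy, iters=500):
    """Iterative policy evaluation (standing in for solving the linear system)."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: R[(s, policy[s])]
                + GAMMA * sum(p * V[s2] for s2, p in P[(s, policy[s])].items())
             for s in STATES}
    return V

def policy_iteration():
    policy = {s: "ignore" for s in STATES}
    while True:
        V = evaluate(policy)
        # greedy improvement step against the current value function
        new = {s: max(ACTIONS,
                      key=lambda a: R[(s, a)]
                      + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)].items()))
               for s in STATES}
        if new == policy:
            return policy, V
        policy = new

policy, V = policy_iteration()
```

With these numbers the stable policy audits only when corruption is already high: the improved transitions out of the "high" state are worth the enforcement cost there, but not in the "low" state.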
Wroblewski, David [Mentor, OH; Katrompas, Alexander M [Concord, OH; Parikh, Neel J [Richmond Heights, OH
2009-09-01
A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.
Implicit knowledge of visual uncertainty guides decisions with asymmetric outcomes.
Whiteley, Louise; Sahani, Maneesh
2008-03-06
Perception is an "inverse problem," in which the state of the world must be inferred from the sensory neural activity that results. However, this inference is both ill-posed (Helmholtz, 1856; Marr, 1982) and corrupted by noise (Green & Swets, 1989), requiring the brain to compute perceptual beliefs under conditions of uncertainty. Here we show that human observers performing a simple visual choice task under an externally imposed loss function approach the optimal strategy, as defined by Bayesian probability and decision theory (Berger, 1985; Cox, 1961). In concert with earlier work, this suggests that observers possess a model of their internal uncertainty and can utilize this model in the neural computations that underlie their behavior (Knill & Pouget, 2004). In our experiment, optimal behavior requires that observers integrate the loss function with an estimate of their internal uncertainty rather than simply requiring that they use a modal estimate of the uncertain stimulus. Crucially, they approach optimal behavior even when denied the opportunity to learn adaptive decision strategies based on immediate feedback. Our data thus support the idea that flexible representations of uncertainty are pre-existing, widespread, and can be propagated to decision-making areas of the brain.
NASA Astrophysics Data System (ADS)
Tsao, Yu-Chung
2016-02-01
This study models a joint location, inventory and preservation decision-making problem for non-instantaneous deteriorating items under delay in payments. An outside supplier provides a credit period to the wholesaler, which has a distribution system with distribution centres (DCs). Non-instantaneous deterioration means that no deterioration occurs in the earlier stage, which is typical of items such as fresh food and fruits. This paper also considers that the deterioration rate will decrease and the preservation cost will increase as the preservation effort increases. Therefore, how much preservation effort should be made is a crucial decision. The objective of this paper is to determine the optimal locations and number of DCs, the optimal replenishment cycle time at the DCs, and the optimal preservation effort simultaneously such that the total network profit is maximised. The problem is formulated as piecewise nonlinear functions and has three different cases. Algorithms based on piecewise nonlinear optimisation are provided to solve the joint location and inventory problem for all cases. Computational analysis illustrates the solution procedures and the impacts of the related parameters on decisions and profits. The results of this study can serve as references for business managers or administrators.
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2016-04-01
This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems, coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules, and fuzzy regression procedures are used for forecasting future inflows. Once this is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows foreseen during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. 
A fuzzy linear regression procedure was employed to foresee future inflows depending on the present and past hydrological and meteorological variables actually used by the reservoir managers to define likely inflow scenarios. A Decision Support System (DSS) was created coupling the FRB systems and the inflow prediction scheme in order to give the user a set of possible optimal releases in response to the reservoir states at the beginning of the irrigation season and the fuzzy inflow projections made using hydrological and meteorological information. The results show that the DSS created using the optimal FRB operating policies is able to increase the amount of water allocated to the users by 20 to 50 Mm3 per irrigation season with respect to the current policies. Consequently, the mechanism used to define optimal operating rules and transform them into a DSS is able to increase water deliveries in the Jucar River Basin, combining expert criteria and optimization algorithms in an efficient way. This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and FEDER funds. It has also received funding from the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811).
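The flavor of such an FRB operating rule can be conveyed in a few lines; the membership functions, rule consequents, and units below are invented for illustration and are not the calibrated Jucar rules:

```python
# Minimal fuzzy rule-based sketch of a seasonal release rule, in the spirit of
# the FRB systems described above. All numbers are illustrative.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def release(storage, inflow):
    """Zero-order Sugeno-style rules: IF storage is ... AND inflow is ...
    THEN release <const>, combined by weighted average of rule firings."""
    low_s, high_s = tri(storage, 0, 0, 60), tri(storage, 40, 100, 160)
    low_q, high_q = tri(inflow, 0, 0, 30), tri(inflow, 20, 50, 80)
    rules = [
        (min(low_s, low_q),  10.0),   # little water anywhere: ration hard
        (min(low_s, high_q), 25.0),
        (min(high_s, low_q), 30.0),
        (min(high_s, high_q), 45.0),  # plenty of water: full deliveries
    ]
    total = sum(w for w, _ in rules)
    return sum(w * r for w, r in rules) / total if total > 0 else 0.0
```

Low storage and low foreseen inflow fire the rationing rule, while high storage and inflow fire the full-delivery rule; intermediate states blend the consequents, which is what makes the rule base smooth enough to embed optimized decisions into.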
Ahmed, Sameh; Alqurshi, Abdulmalik; Mohamed, Abdel-Maaboud Ismail
2018-07-01
A new robust and reliable high-performance liquid chromatography (HPLC) method with a multi-criteria decision making (MCDM) approach was developed to allow simultaneous quantification of atenolol (ATN) and nifedipine (NFD) in content uniformity testing. Felodipine (FLD) was used as an internal standard (I.S.) in this study. A novel combination of a new interactive response optimizer and an HPLC method was suggested for multiple response optimization of the target responses. The interactive response optimizer was used as a decision and prediction tool for the optimal settings of the target responses, according to specified criteria, based on Derringer's desirability. Four independent variables were considered in this study: acetonitrile percentage, buffer pH, buffer concentration, and column temperature. Eight responses were optimized: the retention times of ATN, NFD, and FLD; the resolutions between ATN/NFD and NFD/FLD; and the plate numbers for ATN, NFD, and FLD. Multiple regression analysis was applied in order to screen the influence of the most significant variables on the regression models. The experimental design was set to give minimum retention times and maximum resolution and plate numbers. The interactive response optimizer allowed prediction of optimum conditions according to these criteria, with a good composite desirability value of 0.98156. The developed method was validated according to the International Conference on Harmonization (ICH) guidelines with the aid of the experimental design. The developed MCDM-HPLC method showed superior robustness and resolution in a short analysis time, allowing successful simultaneous content uniformity testing of ATN and NFD in marketed capsules. The current work presents an interactive response optimizer as an efficient platform to optimize, predict responses, and validate HPLC methodology with a tolerable design space for assays in quality control laboratories. Copyright © 2018 Elsevier B.V. All rights reserved.
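Derringer's approach maps each response onto a [0, 1] desirability (smaller-the-better for retention times, larger-the-better for resolution and plate counts) and combines them as a geometric mean; a composite near 1 means all criteria are met well. A minimal unweighted sketch with invented response limits, not the validated values of this method:

```python
import math

def d_smaller(y, target, worst, s=1.0):
    """Smaller-the-better: 1 at/below target, 0 at/above worst, power s between."""
    if y <= target:
        return 1.0
    if y >= worst:
        return 0.0
    return ((worst - y) / (worst - target)) ** s

def d_larger(y, worst, target, s=1.0):
    """Larger-the-better: 0 at/below worst, 1 at/above target."""
    if y >= target:
        return 1.0
    if y <= worst:
        return 0.0
    return ((y - worst) / (target - worst)) ** s

def composite(desirabilities):
    """Unweighted geometric mean of the individual desirabilities."""
    if any(d == 0.0 for d in desirabilities):
        return 0.0   # one unacceptable response vetoes the whole run
    return math.exp(sum(math.log(d) for d in desirabilities) / len(desirabilities))

# one hypothetical chromatographic run (limits and values are invented)
run = [
    d_smaller(3.2, target=2.0, worst=8.0),    # ATN retention time (min)
    d_smaller(5.1, target=3.0, worst=10.0),   # NFD retention time (min)
    d_larger(2.4, worst=1.5, target=3.0),     # ATN/NFD resolution
    d_larger(5200, worst=2000, target=6000),  # plate number
]
D = composite(run)
```

The optimizer's job is then to search the factor space (acetonitrile percentage, pH, buffer concentration, temperature) for settings whose predicted responses maximize this composite D.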
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning in a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average complexity of test. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking minimization of the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial-time hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. At last, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. 
Reasoning results demonstrate that the improved multi-objective ant colony optimization(IMACO) can realize reasoning and locating fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method to solve the problem of multi-constraint and multi-objective fault diagnosis and reasoning of complex system.
Effective Teaching of Economics: A Constrained Optimization Problem?
ERIC Educational Resources Information Center
Hultberg, Patrik T.; Calonge, David Santandreu
2017-01-01
One of the fundamental tenets of economics is that decisions are often the result of optimization problems subject to resource constraints. Consumers optimize utility, subject to constraints imposed by prices and income. As economics faculty, instructors attempt to maximize student learning while being constrained by their own and students'…
USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation
2016-09-01
Approved for public release; distribution is unlimited. By Timothy A. Curling. ... optimization and discrete-event simulation. This construct can potentially provide an effective means of improving order management decisions. However ...
Algorithms for synthesizing management solutions based on OLAP-technologies
NASA Astrophysics Data System (ADS)
Pishchukhin, A. M.; Akhmedyanova, G. F.
2018-05-01
OLAP technologies are a convenient means of analyzing large amounts of information. This work attempts to use them to improve the synthesis of optimal management decisions. The developed algorithms allow forecasting the needs for, and the management decisions accepted on, the main types of enterprise resources. Their advantage is efficiency, based on the simplicity of quadratic functions and differential equations of only the first order. At the same time, resources are optimally redistributed between the different product types in the enterprise's assortment, and the allocated resources are optimally distributed over time. The proposed solutions can be placed on additional, specially introduced coordinates of the hypercube representing the data warehouse.
Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica
2016-09-01
Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision making process. This paper presents a procedure to choose the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on the MCDM method. Two methods of multiple attribute decision making, SAW (simple additive weighting) and TOPSIS (technique for order preference by similarity to ideal solution), were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2. Total values for eight chosen parameters were calculated for all the strategies. The contribution of each of the six waste treatment options was evaluated. The SAW analysis was used to obtain the sum characteristics for all the waste management treatment strategies, which were ranked accordingly. The TOPSIS method was used to calculate the relative closeness factors to the ideal solution for all the alternatives. The proposed strategies were then ranked in the form of tables and diagrams obtained from both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy. © The Author(s) 2016.
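The TOPSIS ranking step described here can be sketched in a few lines: normalize the decision matrix, weight it, locate the ideal and anti-ideal points, and score each alternative by its relative closeness to the ideal. The decision matrix, weights, and criterion directions below are hypothetical illustrations, not the paper's data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution (TOPSIS).

    matrix  : (m alternatives x n criteria) raw scores
    weights : n criterion weights summing to 1
    benefit : n booleans, True if larger is better for that criterion
    Returns the relative closeness C* of each alternative (higher = better).
    """
    X = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = X / np.linalg.norm(X, axis=0) * np.asarray(weights)
    # Ideal (best) and anti-ideal (worst) points per criterion direction.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal
    d_neg = np.linalg.norm(V - worst, axis=1)  # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Hypothetical example: 3 waste strategies scored on cost (minimize)
# and energy recovery (maximize).
scores = topsis([[200, 50], [150, 40], [300, 80]],
                weights=[0.5, 0.5], benefit=[False, True])
ranking = np.argsort(scores)[::-1]  # best strategy first
```

The same closeness scores are what the abstract calls "relative closeness factors to the ideal solution"; SAW would instead sum the weighted normalized columns directly.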
Aghajani Mir, M; Taherei Ghazvinei, P; Sulaiman, N M N; Basri, N E A; Saheri, S; Mahmood, N Z; Jahan, A; Begum, R A; Aghamohammadi, N
2016-01-15
Selecting a suitable Multi Criteria Decision Making (MCDM) method is a crucial stage in establishing a Solid Waste Management (SWM) system. The main objective of the current study is to demonstrate and evaluate a proposed method using MCDM methods. An improved version of the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was applied to obtain the best municipal solid waste management method by comparing and ranking the scenarios; applying this method to rank treatment methods is introduced as one contribution of the study. In addition, the Viekriterijumsko Kompromisno Rangiranje (VIKOR) compromise solution method was applied for sensitivity analyses. The proposed method can assist urban decision makers in prioritizing and selecting an optimized Municipal Solid Waste (MSW) treatment system, and a logical and systematic scientific method was proposed to guide appropriate decision-making. A modified TOPSIS methodology, superior to existing methods, was applied to MSW problems for the first time. Next, 11 scenarios of MSW treatment methods are defined and compared environmentally and economically based on the waste management conditions. Results show that integrating a sanitary landfill (18.1%), RDF (3.1%), composting (2%), anaerobic digestion (40.4%), and recycling (36.4%) was an optimized model of integrated waste management. The applied decision-making structure provides the opportunity for optimum decision-making. Therefore, the mix of recycling and anaerobic digestion and a sanitary landfill with Electricity Production (EP) are the preferred options for MSW management. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cuypers, Maarten; Lamers, Romy E D; Cornel, Erik B; van de Poll-Franse, Lonneke V; de Vries, Marieke; Kil, Paul J M
2018-04-01
The objective of this study is to test whether patients' health-related quality of life (HRQoL) declines after prostate biopsy to detect prostate cancer (Pca), and after subsequent treatment decision-making when Pca is confirmed, and to test whether personality state and traits are associated with these potential changes in HRQoL. Patients who were scheduled for prostate biopsy to detect Pca (N = 377) filled out a baseline questionnaire about HRQoL (EORTC QLQ-C30 and PR25), "big five" personality traits (BFI-10), optimism (LOT-r), and self-efficacy (Decision Self-Efficacy Scale) (t0). Patients with confirmed Pca (N = 126) filled out a follow-up questionnaire on HRQoL within 2 weeks after treatment was chosen but had not yet started (t1). HRQoL declined between t0 and t1, reflected in impaired role and cognitive functioning and elevated fatigue, constipation, and prostate-specific symptoms. Sexual activity and functioning improved. Baseline HRQoL scores were unrelated to the selection of a particular treatment, but for patients who chose a curative treatment, post-decision HRQoL showed a greater decline than for patients who chose active surveillance. Optimism was associated with HRQoL at baseline; decisional self-efficacy was positively associated with HRQoL at follow-up. No associations between HRQoL and the "big five" personality traits were found. Patients who have undergone prostate biopsy and treatment decision-making for Pca experience a decline in HRQoL. Choosing treatment with a curative intent was associated with a greater decline in HRQoL. Interventions aimed at optimism and decision self-efficacy could be helpful to reduce HRQoL impairment around the time of prostate biopsy and treatment decision-making.
NASA Astrophysics Data System (ADS)
Chen, Yizhong; Lu, Hongwei; Li, Jing; Ren, Lixia; He, Li
2017-05-01
This study presents the mathematical formulation and implementations of a synergistic optimization framework based on an understanding of water availability and reliability together with the characteristics of multiple water demands. This framework simultaneously integrates a set of leader-followers-interactive objectives established by different decision makers during the synergistic optimization. The upper-level model (the leader's) determines the optimal pollutant discharge to satisfy the environmental target. The lower-level model (the follower's) accepts the dispatch requirement from the upper-level one and dominates the optimal water-allocation strategy to maximize economic benefits representing the regional authority. The complicated bi-level model significantly improves upon conventional programming methods through the mutual influence and restriction between the upper- and lower-level decision processes, particularly when limited water resources are available for multiple competing users. To solve the problem, a bi-level interactive solution algorithm based on satisfactory degree is introduced into the decision-making process for measuring to what extent the constraints are met and the objective reaches its optimum. The capabilities of the proposed model are illustrated through a real-world case study of the water resources management system in the district of Fengtai, located in Beijing, China. Feasible decisions in association with water resources allocation, wastewater emission and pollutant discharge would be sequentially generated for balancing the objectives subject to the given water-related constraints, which can enable stakeholders to grasp the inherent conflicts and trade-offs between the environmental and economic interests. The performance of the developed bi-level model is enhanced by comparing with single-level models. 
Moreover, in consideration of the uncertainty in water demand and availability, sensitivity analysis and policy analysis are employed for identifying their impacts on the final decisions and improving the practical applications.
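The leader-follower structure described above can be illustrated with a minimal sketch: the upper level sets a pollutant-discharge cap and anticipates the lower level's optimal water allocation under that cap. All coefficients (benefit of 5 per unit of water, discharge of 0.8 per unit, quadratic environmental damage, supply limit of 100) are hypothetical, not from the Fengtai case study.

```python
def follower(cap):
    # Lower level: allocate water x to maximize economic benefit 5x,
    # subject to the discharge constraint 0.8x <= cap and supply x <= 100.
    # Benefit increases in x, so the optimum sits on a binding constraint.
    return min(100.0, cap / 0.8)

def leader(candidate_caps):
    # Upper level: anticipate the follower's best response and choose the
    # discharge cap maximizing benefit minus quadratic environmental damage.
    def social_welfare(cap):
        x = follower(cap)
        discharge = 0.8 * x
        return 5.0 * x - 0.05 * discharge ** 2
    return max(candidate_caps, key=social_welfare)
```

The key bi-level feature is that the leader's objective is evaluated at the follower's optimum, not at an allocation the leader picks itself; the paper's satisfactory-degree algorithm handles the same interaction for much richer constraint sets.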
NASA Astrophysics Data System (ADS)
Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu; Lee, Hee-Hyol
This paper deals with the building of a reusable reverse logistics model considering the decision of backorder or next arrival of goods. An optimization method is proposed to minimize the transportation cost and the volume of backorders or next arrivals of goods incurred by the just-in-time delivery of the final delivery stage between the manufacturer and the processing center. Through optimization algorithms using a priority-based genetic algorithm and a hybrid genetic algorithm, the sub-optimal delivery routes are determined. Based on a case study of a distilling and sales company in Busan, Korea, the new model of the reusable reverse logistics of empty bottles is built and the effectiveness of the proposed method is verified.
Finite grade pheromone ant colony optimization for image segmentation
NASA Astrophysics Data System (ADS)
Yuanjing, F.; Li, Y.; Liangjun, K.
2008-06-01
By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on the active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; updating of the pheromone is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge linearly to the global optimal solutions by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparison of the results for segmentation of left ventricle images shows that ACO for image segmentation is more effective than the GA approach and that the new pheromone updating strategy shows good time performance in the optimization process.
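The finite-grade idea, pheromone restricted to a fixed table of levels and updated by moving between grades rather than by depositing an objective-dependent amount, can be sketched as follows. The grade values, the promote/demote rule, and the parameters are assumptions for illustration; the paper's actual update and its ACM coupling are not reproduced here.

```python
import random

GRADES = [0.1, 0.2, 0.4, 0.8, 1.6]  # finite pheromone levels (hypothetical values)

def update_grades(grade_idx, best_edges):
    """Promote edges on the iteration-best path by one grade and demote all
    others by one grade. The deposited amount is fixed by the grade table,
    so it is independent of the objective value, as in the abstract."""
    for edge in grade_idx:
        if edge in best_edges:
            grade_idx[edge] = min(grade_idx[edge] + 1, len(GRADES) - 1)
        else:
            grade_idx[edge] = max(grade_idx[edge] - 1, 0)
    return grade_idx

def choose_edge(candidates, grade_idx, heuristic, alpha=1.0, beta=2.0):
    """Standard ACO random-proportional rule, with graded pheromone."""
    weights = [GRADES[grade_idx[e]] ** alpha * heuristic[e] ** beta
               for e in candidates]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for e, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return e
    return candidates[-1]
```

Because the update only shifts grade indices, the pheromone values stay inside a finite set, which is what makes the finite-Markov-chain convergence argument available.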
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
The application of defaults to optimize parents' health-based choices for children.
Loeb, Katharine L; Radnitz, Cynthia; Keller, Kathleen; Schwartz, Marlene B; Marcus, Sue; Pierson, Richard N; Shannon, Michael; DeLaurentis, Danielle
2017-06-01
Optimal defaults is a compelling model from behavioral economics and the psychology of human decision-making, designed to shape or "nudge" choices in a positive direction without fundamentally restricting options. The current study aimed to test the effectiveness of optimal (less obesogenic) defaults and parent empowerment priming on health-based decisions with parent-child (ages 3-8) dyads in a community-based setting. Two proof-of-concept experiments (one on breakfast food selections and one on activity choice) were conducted comparing the main and interactive effects of optimal versus suboptimal defaults, and parent empowerment priming versus neutral priming, on parents' health-related choices for their children. We hypothesized that in each experiment, making the default option more optimal will lead to more frequent health-oriented choices, and that priming parents to be the ultimate decision-makers on behalf of their child's health will potentiate this effect. Results show that in both studies, default condition, but not priming condition or the interaction between default and priming, significantly predicted choice (healthier vs. less healthy option). There was also a significant main effect for default condition (and no effect for priming condition or the interaction term) on the quantity of healthier food children consumed in the breakfast experiment. These pilot studies demonstrate that optimal defaults can be practicably implemented to improve parents' food and activity choices for young children. Results can inform policies and practices pertaining to obesogenic environmental factors in school, restaurant, and home environments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chen, Cong; Beckman, Robert A
2009-01-01
This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure to development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go-no go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, a reader should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.
Zhang, Dezhi; Li, Shuangyan
2014-01-01
This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level. PMID:24977209
A Decision Support Model and Tool to Assist Financial Decision-Making in Universities
ERIC Educational Resources Information Center
Bhayat, Imtiaz; Manuguerra, Maurizio; Baldock, Clive
2015-01-01
In this paper, a model and tool is proposed to assist universities and other mission-based organisations to ascertain systematically the optimal portfolio of projects, in any year, meeting the organisations risk tolerances and available funds. The model and tool presented build on previous work on university operations and decision support systems…
An intertemporal decision framework for electrochemical energy storage management
NASA Astrophysics Data System (ADS)
He, Guannan; Chen, Qixin; Moutis, Panayiotis; Kar, Soummya; Whitacre, Jay F.
2018-05-01
Dispatchable energy storage is necessary to enable renewable-based power systems that have zero or very low carbon emissions. The inherent degradation behaviour of electrochemical energy storage (EES) is a major concern for both EES operational decisions and EES economic assessments. Here, we propose a decision framework that addresses the intertemporal trade-offs in terms of EES degradation by deriving, implementing and optimizing two metrics: the marginal benefit of usage and the average benefit of usage. These metrics are independent of the capital cost of the EES system, and, as such, separate the value of EES use from the initial cost, which provides a different perspective on storage valuation and operation. Our framework is proved to produce the optimal solution for EES life-cycle profit maximization. We show that the proposed framework offers effective ways to assess the economic values of EES, to make investment decisions for various applications and to inform related subsidy policies.
“UTILIZING” SIGNAL DETECTION THEORY
Lynn, Spencer K.; Barrett, Lisa Feldman
2014-01-01
What do inferring what a person is thinking or feeling, deciding to report a symptom to your doctor, judging a defendant’s guilt, and navigating a dimly lit room have in common? They involve perceptual uncertainty (e.g., a scowling face might indicate anger or concentration, which engender different appropriate responses), and behavioral risk (e.g., a cost to making the wrong response). Signal detection theory describes these types of decisions. In this tutorial we show how, by incorporating the economic concept of utility, signal detection theory serves as a model of optimal decision making, beyond its common use as an analytic method. This utility approach to signal detection theory highlights potentially enigmatic influences of perceptual uncertainty on measures of decision-making performance (accuracy and optimality) and on behavior (a functional relationship between bias and sensitivity). A “utilized” signal detection theory offers the possibility of expanding the phenomena that can be understood within a decision-making framework. PMID:25097061
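The utility approach described above has a closed form in the equal-variance Gaussian model: the ideal observer responds "signal" when the likelihood ratio exceeds a threshold beta that folds base rates and payoffs together. A minimal sketch (the payoff values in the example are arbitrary, not from the tutorial):

```python
import math

def optimal_criterion(d_prime, p_signal, u_hit, u_miss, u_cr, u_fa):
    """Optimal decision criterion for equal-variance Gaussian SDT.

    beta = [P(noise)/P(signal)] * [(U(correct rejection) - U(false alarm))
                                   / (U(hit) - U(miss))]
    Returns (beta, criterion location c on the evidence axis).
    """
    p_noise = 1.0 - p_signal
    beta = (p_noise / p_signal) * ((u_cr - u_fa) / (u_hit - u_miss))
    # For noise ~ N(0,1) and signal ~ N(d',1), the likelihood ratio equals
    # beta at x = ln(beta)/d' + d'/2.
    c = math.log(beta) / d_prime + d_prime / 2.0
    return beta, c

# Equal payoffs and a 50/50 base rate put the criterion midway (c = d'/2);
# making false alarms costlier pushes the criterion conservatively upward.
beta_neutral, c_neutral = optimal_criterion(1.0, 0.5, 1.0, -1.0, 1.0, -1.0)
beta_costly, c_costly = optimal_criterion(1.0, 0.5, 1.0, -1.0, 1.0, -3.0)
```

This is the functional relationship between bias and sensitivity the tutorial highlights: for fixed utilities, the optimal criterion location depends on d'.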
Adaptive decision rules for the acquisition of nature reserves.
Turner, Will R; Wilcove, David S
2006-04-01
Although reserve-design algorithms have shown promise for increasing the efficiency of conservation planning, recent work casts doubt on the usefulness of some of these approaches in practice. Using three data sets that vary widely in size and complexity, we compared various decision rules for acquiring reserve networks over multiyear periods. We explored three factors that are often important in real-world conservation efforts: uncertain availability of sites for acquisition, degradation of sites, and overall budget constraints. We evaluated the relative strengths and weaknesses of existing optimal and heuristic decision rules and developed a new set of adaptive decision rules that combine the strengths of existing optimal and heuristic approaches. All three of the new adaptive rules performed better than the existing rules we tested under virtually all scenarios of site availability, site degradation, and budget constraints. Moreover, the adaptive rules required no additional data beyond what was readily available and were relatively easy to compute.
NASA Astrophysics Data System (ADS)
Yen, Ghi-Feng; Chung, Kun-Jen; Chen, Tzung-Ching
2012-11-01
The traditional economic order quantity (EOQ) model assumes that the retailer's storage capacity is unlimited. However, the capacity of any warehouse is limited. In practice, various factors may induce the decision-maker of an inventory system to order more items than can be held in his/her own warehouse. Therefore, for the decision-maker, it is very practical to determine whether or not to rent other warehouses. In this article, we incorporate two levels of trade credit and two separate warehouses (an own warehouse and a rented warehouse) to establish a new inventory model to help the decision-maker make this decision. Four theorems are provided to determine the optimal cycle time, generalising results from some existing articles. Finally, a sensitivity analysis is executed to investigate the effects of the various parameters on the ordering policies and annual costs of the inventory system.
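The capacity issue motivating the two-warehouse model can be seen from the classic EOQ formula the abstract starts from: when the unconstrained optimal order quantity exceeds the retailer's own storage, the surplus must go to a rented warehouse. The demand, cost, and capacity figures below are hypothetical; the trade-credit extensions of the paper are not modeled.

```python
import math

def eoq(demand_rate, order_cost, hold_cost):
    """Classic EOQ: order quantity Q* = sqrt(2DK/h) minimizing the sum of
    ordering and holding costs per unit time."""
    return math.sqrt(2.0 * demand_rate * order_cost / hold_cost)

def needs_rented_warehouse(demand_rate, order_cost, hold_cost, own_capacity):
    """If the unconstrained optimum exceeds the retailer's own capacity W,
    units beyond W would have to be stored in a rented warehouse."""
    q = eoq(demand_rate, order_cost, hold_cost)
    return q > own_capacity, q

# Hypothetical retailer: demand 1000/yr, $50 per order, $2/unit/yr holding,
# own warehouse holds only 150 units.
rent, q_star = needs_rented_warehouse(1000, 50, 2, 150)
```

In the paper, the rented warehouse typically carries a higher holding cost, so the optimal cycle time balances that premium against ordering cost and the two trade-credit levels.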
Goal-oriented Site Characterization in Hydrogeological Applications: An Overview
NASA Astrophysics Data System (ADS)
Nowak, W.; de Barros, F.; Rubin, Y.
2011-12-01
In this study, we address the importance of goal-oriented site characterization. Given the multiple sources of uncertainty in hydrogeological applications, the information needs of modeling, prediction and decision support should be satisfied with efficient and rational field campaigns. In this work, we provide an overview of an optimal sampling design framework based on Bayesian decision theory, statistical parameter inference and Bayesian model averaging. It optimizes the field sampling campaign around decisions on environmental performance metrics (e.g., risk, arrival times, etc.) while accounting for parametric and model uncertainty in the geostatistical characterization, for uncertainty in forcing terms, and for measurement error. The appealing aspects of the framework lie in its goal-oriented character and its direct link to the confidence in a specified decision. We illustrate how these concepts could be applied in a human health risk problem where uncertainty from both hydrogeological and health parameters is accounted for.
A supplier selection and order allocation problem with stochastic demands
NASA Astrophysics Data System (ADS)
Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua
2011-08-01
We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.
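The inner-level decision, which suppliers to select and how to split a given replenishment quantity among them, can be illustrated with a brute-force toy: enumerate supplier subsets, charge a fixed cost per supplier actually used, and fill cheapest unit costs first. This is only an illustration of the problem structure; the paper's polynomial algorithm is not reproduced, and all cost and capacity figures are hypothetical.

```python
from itertools import combinations

def allocate(Q, suppliers):
    """Toy inner-level solve: split order quantity Q among a subset of
    suppliers to minimize fixed + variable cost.

    suppliers : list of (fixed_cost, unit_cost, capacity) tuples.
    Returns (total_cost, {supplier_index: quantity}).
    """
    best = (float("inf"), None)
    for r in range(1, len(suppliers) + 1):
        for subset in combinations(range(len(suppliers)), r):
            if sum(suppliers[i][2] for i in subset) < Q:
                continue  # subset cannot cover the order
            remaining, cost, plan = Q, 0.0, {}
            # Within the subset, fill cheapest unit costs first.
            for i in sorted(subset, key=lambda i: suppliers[i][1]):
                q = min(remaining, suppliers[i][2])
                if q > 0:
                    plan[i] = q
                    cost += suppliers[i][0] + suppliers[i][1] * q
                    remaining -= q
            best = min(best, (cost, plan), key=lambda t: t[0])
    return best

# Hypothetical suppliers: (fixed, unit, capacity).
cost, plan = allocate(10, [(5, 1.0, 6), (2, 2.0, 10), (1, 1.5, 4)])
```

In the paper this subproblem is solved once per candidate replenishment quantity inside the outer stochastic dynamic program, so an efficient inner solver matters.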
Artificial Intelligence based technique for BTS placement
NASA Astrophysics Data System (ADS)
Alenoghena, C. O.; Emagbetere, J. O.; Aibinu, A. M.
2013-12-01
The increase of base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners; this decision is not foolproof against regulatory requirements. In this paper, an intelligent algorithm for optimal BTS site placement is proposed. The proposed technique objectively takes neighbourhood and regulatory constraints into consideration while determining cell sites. Its application will lead to a quantitatively unbiased decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results obtained show 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of the GA with the neighbourhood constraint indicate that the choices of location can be unbiased and that optimization of facility placement for network design can be carried out.
NASA Astrophysics Data System (ADS)
Roy, S. G.; Gold, A.; Uchida, E.; McGreavy, B.; Smith, S. M.; Wilson, K.; Blachly, B.; Newcomb, A.; Hart, D.; Gardner, K.
2017-12-01
Dam removal has become a cornerstone of environmental restoration practice in the United States. One outcome of dam removal that has received positive attention is restored access to historic habitat for sea-run fisheries, providing a crucial gain in ecosystem resilience. But dams also provide stakeholders with valuable services, and uncertain socio-ecological outcomes can arise if there is not careful consideration of the basin-scale trade-offs caused by dam removal. In addition to fisheries, dam removals can significantly affect landscape nutrient flux, municipal water storage, recreational use of lakes and rivers, property values, hydroelectricity generation, the cultural meaning of dams, and many other river-based ecosystem services. We use a production possibility frontiers approach to explore dam decision scenarios and opportunities for trading between ecosystem services that are positively or negatively affected by dam removal in New England. Scenarios that provide efficient trade-off potentials are identified using a multiobjective genetic algorithm. Our results suggest that for many river systems, there is significant potential to increase the value of fisheries and other ecosystem services with minimal dam removals, and further increases are possible by including decisions related to dam operations and physical modifications. Run-of-river dams located near the head of tide are often found to be optimal for removal due to low hydroelectric capacity and high impact on fisheries. Conversely, dams with large impoundments near a river's headwaters can be less optimal for removal because their value as nitrogen sinks often outweighs their potential value for fisheries. Hydropower capacity is negatively impacted by dam removal, but there are opportunities to meet or exceed lost capacity by upgrading preserved hydropower dams. Improving fish passage facilities for dams that are critical for safety or water storage can also reduce impacts on fisheries. 
Our method is helpful for identifying efficient decision scenarios, but finding the optimal decision requires a deep and mutual understanding of stakeholder preferences. We outline how to interpret these preferences, identify overlaps with the efficient decision scenarios, and estimate the monetary budget required to act on these decisions.
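At its simplest, the production possibility frontier here is the set of non-dominated decision scenarios: no other scenario is at least as good on every ecosystem service and strictly better on one. A minimal sketch of that filtering step follows; the dam scenarios and their scores are hypothetical, loosely echoing the fisheries-versus-hydropower trade-off described above, and the paper's genetic-algorithm search is not reproduced.

```python
def pareto_frontier(scenarios):
    """Return labels of the non-dominated scenarios (maximize every objective).

    scenarios : list of (label, (obj1, obj2, ...)) tuples. A scenario is
    dominated if another is at least as good on all objectives and strictly
    better on at least one.
    """
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [name for name, obj in scenarios
            if not any(dominates(other, obj)
                       for _, other in scenarios if other != obj)]

# Hypothetical scenarios scored on (fish habitat gained,
# hydropower capacity retained); units arbitrary.
scenarios = [
    ("remove none", (0.0, 10.0)),
    ("remove head-of-tide dam", (8.0, 9.5)),
    ("remove all dams", (10.0, 2.0)),
    ("remove headwater dam", (1.0, 6.0)),
]
front = pareto_frontier(scenarios)
```

Here the headwater-dam removal drops off the frontier because the head-of-tide removal gains more habitat while retaining more hydropower, mirroring the paper's finding that head-of-tide run-of-river dams are often the efficient removals.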
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly-developed optimization technique based on coupling of the Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. 
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
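Of the techniques the abstract lists, Latin hypercube sampling is the easiest to sketch: stratify each parameter range into as many bins as samples, draw one point per bin, and shuffle the bin order independently per dimension. This is a generic textbook version, not the Improved Distributed Sampling variant MADS implements.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Basic Latin hypercube sample over a box.

    bounds : list of (lo, hi) per parameter. Each range is split into
    n_samples equal bins; every sample occupies a distinct bin in every
    dimension, giving better marginal coverage than plain Monte Carlo.
    """
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        bins = list(range(n_samples))
        rng.shuffle(bins)  # independent bin permutation per dimension
        for i, b in enumerate(bins):
            u = (b + rng.random()) / n_samples  # uniform draw inside bin b
            samples[i][d] = lo + u * (hi - lo)
    return samples
```

Such samples are the typical input to the Monte Carlo and global sensitivity modes listed above, since each parameter's marginal distribution is covered evenly even for small sample counts.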
Optimization of PSA screening policies: a comparison of the patient and societal perspectives.
Zhang, Jingyu; Denton, Brian T; Balasubramanian, Hari; Shah, Nilay D; Inman, Brant A
2012-01-01
To estimate the benefit of PSA-based screening for prostate cancer from the patient and societal perspectives. A partially observable Markov decision process model was used to optimize PSA screening decisions. Age-specific prostate cancer incidence rates and the mortality rates from prostate cancer and competing causes were considered. The model trades off the potential benefit of early detection with the cost of screening and loss of patient quality of life due to screening and treatment. PSA testing and biopsy decisions are made based on the patient's probability of having prostate cancer. Probabilities are inferred based on the patient's complete PSA history using Bayesian updating. The results of all PSA tests and biopsies done in Olmsted County, Minnesota, from 1993 to 2005 (11,872 men and 50,589 PSA test results). Patients' perspective: to maximize expected quality-adjusted life years (QALYs); societal perspective: to maximize the expected monetary value based on societal willingness to pay for QALYs and the cost of PSA testing, prostate biopsies, and treatment. From the patient perspective, the optimal policy recommends stopping PSA testing and biopsy at age 76. From the societal perspective, the stopping age is 71. The expected incremental benefit of optimal screening over the traditional guideline of annual PSA screening with threshold 4.0 ng/mL for biopsy is estimated to be 0.165 QALYs per person from the patient perspective and 0.161 QALYs per person from the societal perspective. PSA screening based on traditional guidelines is found to be worse than no screening at all. PSA testing done with traditional guidelines underperforms and therefore underestimates the potential benefit of screening. Optimal screening guidelines differ significantly depending on the perspective of the decision maker.
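The Bayesian updating step described above can be sketched as a single likelihood-ratio update of the patient's cancer probability after one PSA test. The sensitivity and specificity values below are illustrative placeholders, not the study's estimates.

```python
def update_cancer_probability(prior, psa_above_threshold,
                              sens=0.21, spec=0.91):
    """Bayes update of P(cancer) after one PSA test result.
    sens/spec for a 4.0 ng/mL threshold are illustrative values only."""
    if psa_above_threshold:
        like_cancer, like_healthy = sens, 1.0 - spec
    else:
        like_cancer, like_healthy = 1.0 - sens, spec
    numer = prior * like_cancer
    return numer / (numer + (1.0 - prior) * like_healthy)

p = 0.10  # prior probability of prostate cancer (hypothetical)
p = update_cancer_probability(p, psa_above_threshold=True)
```

In the POMDP, this posterior (computed over the patient's complete PSA history rather than one test) is the belief state on which the testing and biopsy decisions are made.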
2013-01-01
Background: The main aim of China’s Health Care System Reform was to help the decision maker find the optimal solution to China’s institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China’s health care system, which could efficiently collect from various experts the data needed to determine the optimal solution to this problem. The purpose of this study was therefore to apply an optimal implementation methodology to help the decision maker effectively promote various experts’ views into optimal solutions to this problem with the support of this pilot system. Methods: After the general framework of China’s institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914), conducted through the organization of a pilot health care provider research system for the first time in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the dataset from this survey. Results: The market-oriented health care provider approach was the optimal solution from the doctors’ point of view; the traditional government’s regulation-oriented approach was the optimal solution from the points of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public-private partnership (PPP) approach was the optimal solution from the points of view of nurses, officials in medical insurance agencies, and health care researchers. 
Conclusions The data collected through a pilot health care provider research system in the 2009 to 2010 national expert survey could help the decision maker effectively promote various experts’ views into various optimal solutions to China’s institutional problem of health care provider selection. PMID:23557082
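The ANP methodology used above ultimately derives priority weights from expert pairwise-comparison matrices via the principal eigenvector. A minimal sketch of that priority step follows, with entirely hypothetical judgments rather than the survey's data.

```python
def priority_vector(matrix, iters=50):
    """Principal right eigenvector of a pairwise-comparison matrix by
    power iteration, normalized to sum to 1 (the AHP/ANP priority step)."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# hypothetical reciprocal judgments comparing three provider approaches:
# market-oriented, government regulation-oriented, public-private partnership
comparisons = [
    [1.0, 3.0, 2.0],
    [1.0 / 3.0, 1.0, 1.0 / 2.0],
    [1.0 / 2.0, 2.0, 1.0],
]
weights = priority_vector(comparisons)
```

Each entry a[i][j] encodes how strongly option i is preferred to option j, with a[j][i] = 1/a[i][j]; the normalized eigenvector gives each option's priority.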
Two-Stage Fracturing Wastewater Management in Shale Gas Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaodong; Sun, Alexander Y.; Duncan, Ian J.
2017-01-19
Here, management of shale gas wastewater treatment, disposal, and reuse has become a significant environmental challenge, driven by an ongoing boom in development of U.S. shale gas reservoirs. Systems-analysis-based decision support is helpful for effective management of wastewater and provision of cost-effective decision alternatives from a whole-system perspective. Uncertainties are inherent in many modeling parameters, affecting the generated decisions. To deal effectively with the recourse issue in decision making, a two-stage stochastic fracturing wastewater management model, named TSWM, is developed in this work to provide decision support for wastewater management planning in shale plays. Using the TSWM model, probabilistic and nonprobabilistic uncertainties are effectively handled. The TSWM model provides flexibility in generating shale gas wastewater management strategies, in which the first-stage decision predefined by decision makers before uncertainties unfold is corrected in the second stage to achieve whole-system optimality. Application of the TSWM model to a comprehensive synthetic example demonstrates its practical applicability and feasibility. Optimal results are generated for allowable wastewater quantities, excess wastewater, and capacity expansions of hazardous wastewater treatment plants to achieve the minimized total system cost. The obtained interval solutions encompass both optimistic and conservative decisions. Trade-offs between economic and environmental objectives are made depending on decision makers’ knowledge and judgment, as well as site-specific information. In conclusion, the proposed model is helpful in forming informed decisions for wastewater management associated with shale gas development.
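The first-stage/recourse structure of a two-stage stochastic model like TSWM can be sketched on a toy problem. The scenario probabilities, volumes, and costs below are invented for illustration; TSWM itself is a much richer interval/stochastic programming model.

```python
# scenarios: (probability, wastewater volume in 10^3 m^3); costs are notional
scenarios = [(0.3, 40.0), (0.5, 60.0), (0.2, 90.0)]
build_cost, penalty = 2.0, 5.0  # per-unit treatment capacity vs. excess disposal

def expected_cost(capacity):
    """First-stage cost of building capacity plus the expected second-stage
    (recourse) cost of disposing of wastewater that exceeds it."""
    recourse = sum(p * penalty * max(vol - capacity, 0.0)
                   for p, vol in scenarios)
    return build_cost * capacity + recourse

# the first-stage decision is fixed before uncertainty unfolds;
# the recourse term corrects for each scenario afterwards
best = min(range(0, 101), key=expected_cost)
```

The optimal capacity hedges between scenarios: it covers the likely volumes outright and accepts some penalty cost only in the rare high-volume scenario.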
The effects of aging on the speed-accuracy compromise: Boundary optimality in the diffusion model.
Starns, Jeffrey J; Ratcliff, Roger
2010-06-01
We evaluated age-related differences in the optimality of decision boundary settings in a diffusion model analysis. In the model, the width of the decision boundary represents the amount of evidence that must accumulate in favor of a response alternative before a decision is made. Wide boundaries lead to slow but accurate responding, and narrow boundaries lead to fast but inaccurate responding. There is a single value of boundary separation that produces the most correct answers in a given period of time, and we refer to this value as the reward rate optimal boundary (RROB). We consistently found across a variety of decision tasks that older adults used boundaries that were much wider than the RROB value. Young adults used boundaries that were closer to the RROB value, although age differences in optimality were smaller with instructions emphasizing speed than with instructions emphasizing accuracy. Young adults adjusted their boundary settings to more closely approach the RROB value when they were provided with accuracy feedback and extensive practice. Older participants showed no evidence of making boundary adjustments in response to feedback or task practice, and they consistently used boundary separation values that produced accuracy levels that were near asymptote. Our results suggest that young adults attempt to balance speed and accuracy to achieve the most correct answers per unit time, whereas older adults attempt to minimize errors even if they must respond quite slowly to do so. (c) 2010 APA, all rights reserved
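The reward-rate-optimal boundary (RROB) idea can be sketched with the standard closed-form expressions for accuracy and mean decision time in a symmetric-boundary drift-diffusion model. The drift, non-decision time, and inter-trial interval below are illustrative, not fitted values.

```python
import math

def reward_rate(a, drift=1.0, sigma=1.0, nondecision=0.3, iti=1.0):
    """Correct responses per second at boundary separation +/- a, using the
    standard closed forms for a symmetric drift-diffusion process:
    P(correct) = 1 / (1 + exp(-2*a*drift/sigma^2)) and
    mean decision time = (a/drift) * tanh(a*drift/sigma^2)."""
    k = a * drift / sigma ** 2
    p_correct = 1.0 / (1.0 + math.exp(-2.0 * k))
    mean_dt = (a / drift) * math.tanh(k)
    return p_correct / (mean_dt + nondecision + iti)

# scan boundary separations: narrow boundaries are fast but error-prone,
# wide ones accurate but slow, so the reward-rate optimum is interior
grid = [0.1 * i for i in range(1, 80)]
best_a = max(grid, key=reward_rate)
```

Boundaries much wider than best_a, like those the older adults used, buy accuracy at a steep cost in correct answers per unit time.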
2017-03-01
RECRUITING WITH THE NEW PLANNED RESOURCE OPTIMIZATION MODEL WITH EXPERIMENTAL DESIGN (PROM-WED), by Allison R. Hogarth, March 2017 thesis. ...has historically used a non-linear optimization model, the Planned Resource Optimization (PRO) model, to help inform decisions on the allocation of
Bertsimas, Dimitris; Silberholz, John; Trikalinos, Thomas
2018-03-01
Important decisions related to human health, such as screening strategies for cancer, need to be made without a satisfactory understanding of the underlying biological and other processes. Rather, they are often informed by mathematical models that approximate reality. Often multiple models have been made to study the same phenomenon, which may lead to conflicting decisions. It is natural to seek a decision making process that identifies decisions that all models find to be effective, and we propose such a framework in this work. We apply the framework in prostate cancer screening to identify prostate-specific antigen (PSA)-based strategies that perform well under all considered models. We use heuristic search to identify strategies that trade off between optimizing the average across all models' assessments and being "conservative" by optimizing the most pessimistic model assessment. We identified three recently published mathematical models that can estimate quality-adjusted life expectancy (QALE) of PSA-based screening strategies and identified 64 strategies that trade off between maximizing the average and the most pessimistic model assessments. All prescribe PSA thresholds that increase with age, and 57 involve biennial screening. Strategies with higher assessments with the pessimistic model start screening later, stop screening earlier, and use higher PSA thresholds at earlier ages. The 64 strategies outperform 22 previously published expert-generated strategies. The 41 most "conservative" ones remained better than no screening with all models in extensive sensitivity analyses. We augment current comparative modeling approaches by identifying strategies that perform well under all models, for various degrees of decision makers' conservativeness.
Dynamic remapping decisions in multi-phase parallel computations
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Reynolds, P. F., Jr.
1986-01-01
The effectiveness of any given mapping of workload to processors in a parallel system is dependent on the stochastic behavior of the workload. Program behavior is often characterized by a sequence of phases, with phase changes occurring unpredictably. During a phase, the behavior is fairly stable, but may become quite different during the next phase. Thus a workload assignment generated for one phase may hinder performance during the next phase. We consider the problem of deciding whether to remap a parallel computation in the face of uncertainty in remapping's utility. Fundamentally, it is necessary to balance the expected remapping performance gain against the delay cost of remapping. This paper treats this problem formally by constructing a probabilistic model of a computation with at most two phases. We use stochastic dynamic programming to show that the remapping decision policy which minimizes the expected running time of the computation has an extremely simple structure: the optimal decision at any step is made by comparing the probability of remapping gain against a threshold. This theoretical result stresses the importance of detecting a phase change, and assessing the possibility of gain from remapping. We also empirically study the sensitivity of optimal performance to an imprecise decision threshold. Under a wide range of model parameter values, we find nearly optimal performance if remapping is chosen simply when the gain probability is high. These results strongly suggest that except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change; precise quantification of the decision model parameters is not necessary.
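The threshold structure of the optimal remapping policy can be sketched for a single decision point. The run times and delay below are hypothetical; the paper's model is a multi-step stochastic dynamic program, of which this is the one-shot core.

```python
def expected_remaining(remap, p_bad, t_good, t_bad, delay):
    """Expected remaining run time after a phase change in a two-phase
    model: a poor mapping (probability p_bad) runs in t_bad, a good one
    in t_good; remapping costs a fixed delay but restores t_good."""
    if remap:
        return delay + t_good
    return p_bad * t_bad + (1.0 - p_bad) * t_good

def optimal_decision(p_bad, t_good, t_bad, delay):
    """The optimal policy reduces to a simple threshold on the gain
    probability: remap iff p_bad exceeds delay / (t_bad - t_good)."""
    threshold = delay / (t_bad - t_good)
    return p_bad > threshold

# remapping pays off only when the chance of a poor mapping is high enough
```

Comparing the two branches of expected_remaining shows why the threshold form holds: the expected benefit of remapping grows linearly in p_bad while its delay cost is fixed.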
A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning
NASA Astrophysics Data System (ADS)
Basdekas, L.; Stewart, N.; Triana, E.
2013-12-01
Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, hydrologic scenario tool(s) capable of generating a range of plausible inflows for the planning period of interest are required. This may include paleo-informed or climate-change-informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models does not require parameterization, which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. 
The use of this framework will help CSU evaluate tradeoffs in a continually changing world.
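Moving "out of the inferior decision space" amounts to retaining only nondominated alternatives. A minimal Pareto filter over hypothetical (cost, shortage) outcomes, not CSU's actual portfolios:

```python
def nondominated(solutions):
    """Filter a set of objective vectors (all minimized) down to the Pareto
    nondominated subset: keep a solution unless some other solution is at
    least as good in every objective and strictly better in one."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# e.g. (cost, shortage) outcomes for candidate water-supply portfolios
portfolios = [(100.0, 5.0), (120.0, 2.0), (110.0, 5.0), (130.0, 2.5)]
front = nondominated(portfolios)
```

Dominated portfolios, here (110, 5) and (130, 2.5), are inferior on both objectives to some alternative and can be discarded before stakeholders weigh the remaining trade-offs.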
Ma, Wei Ji; Shen, Shan; Dziugaite, Gintare; van den Berg, Ronald
2015-11-01
In tasks such as visual search and change detection, a key question is how observers integrate noisy measurements from multiple locations to make a decision. Decision rules proposed to model this process have fallen into two categories: Bayes-optimal (ideal observer) rules and ad-hoc rules. Among the latter, the maximum-of-outputs (max) rule has been the most prominent. Reviewing recent work and performing new model comparisons across a range of paradigms, we find that in all cases except for one, the optimal rule describes human data as well as or better than every max rule either previously proposed or newly introduced here. This casts doubt on the utility of the max rule for understanding perceptual decision-making. Copyright © 2015 Elsevier Ltd. All rights reserved.
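The optimal (likelihood-ratio) rule and a max rule can be compared on a simulated yes/no search task. This is a generic sketch with arbitrary parameters and an ad-hoc max criterion, not the authors' model comparison.

```python
import math
import random

def simulate_accuracy(rule, n_loc=4, d=1.5, trials=20000, seed=1):
    """Proportion correct in a yes/no search task: on half the trials a
    target of strength d is added at one random location; all locations
    carry unit Gaussian noise. 'opt' averages likelihood ratios across
    locations; 'max' thresholds the largest measurement (an ad-hoc rule)."""
    rng = random.Random(seed)
    correct = 0
    for t in range(trials):
        present = t % 2 == 0
        x = [rng.gauss(0.0, 1.0) for _ in range(n_loc)]
        if present:
            x[rng.randrange(n_loc)] += d
        if rule == "opt":
            # likelihood ratio P(x|present)/P(x|absent), uniform location prior
            lr = sum(math.exp(d * xi - d * d / 2.0) for xi in x) / n_loc
            say_present = lr > 1.0
        else:
            # ad-hoc criterion on the maximum measurement
            say_present = max(x) > d / 2.0 + math.log(n_loc) / d
        correct += say_present == present
    return correct / trials

acc_opt = simulate_accuracy("opt")
acc_max = simulate_accuracy("max")
```

Because the data are generated exactly by the model the optimal rule assumes, the optimal rule's expected accuracy is an upper bound; the interesting empirical question the abstract addresses is which rule better fits human choices.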
Optimal Resource Allocation in Library Systems
ERIC Educational Resources Information Center
Rouse, William B.
1975-01-01
Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)
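A minimal example in the spirit of the abstract: allocating service capacity across two M/M/1 queues to minimize the total expected time in system, as a stand-in for maximizing the decision-maker's utility. All rates are hypothetical.

```python
def mm1_time_in_system(arrival, service):
    """Expected time in system for an M/M/1 queue: 1/(mu - lambda),
    infinite when the queue is unstable (service <= arrival)."""
    return float("inf") if service <= arrival else 1.0 / (service - arrival)

def best_split(total_capacity, arrivals):
    """Exhaustively split integer service capacity between two queues to
    minimize the summed expected time in system."""
    best = None
    for c in range(total_capacity + 1):
        cost = (mm1_time_in_system(arrivals[0], c)
                + mm1_time_in_system(arrivals[1], total_capacity - c))
        if best is None or cost < best[1]:
            best = (c, cost)
    return best

# arrival rates 2 and 4 requests/hour, 10 units of service capacity to split
alloc, cost = best_split(10, (2.0, 4.0))
```

The optimum gives each queue the same slack over its arrival rate, reflecting the convexity of M/M/1 delay in spare capacity.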
Strategy for optimum acquisition of information
DOT National Transportation Integrated Search
2006-10-01
This note is a brief tutorial on a strategy optimizing the acquisition of information. It is a procedure well known to decision theorists but hardly understood or applied by those making decisions about spending dollars, time and other forms of capita...
Hennin, Holly L; Bêty, Jöel; Legagneux, Pierre; Gilchrist, H Grant; Williams, Tony D; Love, Oliver P
2016-10-01
The influence of variation in individual state on key reproductive decisions impacting fitness is well appreciated in evolutionary ecology. Rowe et al. (1994) developed a condition-dependent individual optimization model predicting that three key factors impact the ability of migratory female birds to individually optimize breeding phenology to maximize fitness in seasonal environments: arrival condition, arrival date, and ability to gain in condition on the breeding grounds. While empirical studies have confirmed that greater arrival body mass and earlier arrival dates result in earlier laying, no study has assessed whether individual variation in energetic management of condition gain affects this key fitness-related decision. Using an 8-year data set from over 350 prebreeding female Arctic common eiders (Somateria mollissima), we tested this component of the model by examining whether individual variation in two physiological traits influencing energetic management (plasma triglycerides: physiological fattening rate; baseline corticosterone: energetic demand) predicted individual variation in breeding phenology after controlling for arrival date and body mass. As predicted by the optimization model, individuals with higher fattening rates and lower energetic demand had the earliest breeding phenology (shortest delays between arrival and laying; earliest laying dates). Our results are the first to empirically determine that individual flexibility in prebreeding energetic management influences key fitness-related reproductive decisions, suggesting that individuals have the capacity to optimally manage reproductive investment.
Clery, Stephane; Cumming, Bruce G; Nienborg, Hendrikje
2017-01-18
Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal "noise" correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. 
Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. Copyright © 2017 the authors 0270-6474/17/370715-11$15.00/0.
Sensitivity-Based Guided Model Calibration
NASA Astrophysics Data System (ADS)
Semnani, M.; Asadzadeh, M.
2017-12-01
A common practice in automatic calibration of hydrologic models is applying sensitivity analysis prior to global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with the sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
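The sensitivity-weighted selection idea can be sketched as a small variant of DDS. This is a schematic reimplementation with invented sensitivity weights and a toy objective, not the authors' code: plain DDS selects each variable with the same shrinking probability, while here that probability is scaled by the variable's sensitivity weight.

```python
import math
import random

def dds_sensitivity(objective, bounds, sensitivity, budget=200, r=0.2, seed=3):
    """Dynamically dimensioned search with sensitivity weighting: each
    iteration perturbs a shrinking subset of decision variables, chosen
    with probability proportional to their sensitivity weights."""
    rng = random.Random(seed)
    n = len(bounds)
    best = [lo + rng.random() * (hi - lo) for lo, hi in bounds]
    best_f = objective(best)
    total = sum(sensitivity)
    for it in range(1, budget + 1):
        p_select = 1.0 - math.log(it) / math.log(budget)  # shrinks toward 0
        cand = best[:]
        # selection probability scaled by sensitivity (implicitly capped at 1)
        chosen = [j for j in range(n)
                  if rng.random() < p_select * n * sensitivity[j] / total]
        if not chosen:  # DDS always perturbs at least one variable
            chosen = [rng.randrange(n)]
        for j in chosen:
            lo, hi = bounds[j]
            cand[j] = min(hi, max(lo, cand[j] + rng.gauss(0.0, r * (hi - lo))))
        f = objective(cand)
        if f < best_f:  # greedy acceptance, as in DDS
            best, best_f = cand, f
    return best, best_f

# sphere test function with hypothetical sensitivity weights per variable
sol, val = dds_sensitivity(lambda x: sum(v * v for v in x),
                           [(-1.0, 1.0)] * 4, [0.4, 0.3, 0.2, 0.1])
```

With uniform weights the scaled probability reduces exactly to the original DDS selection rule, so the variant is a strict generalization.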
Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2014-10-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized robust `operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures, through the comparison and evaluation of alternatives considered and the exhaustive range of trade space explored. A representative optimization of global ECV (essential climate variable) climate monitoring architecture(s) is explored and described in some detail with thoughts on appropriate rule-based valuations. The optimization tool(s) suggest and support global collaboration pathways and hopefully elicit responses from the audience and climate science stakeholders.
The Application of Optimal Defaults to Improve Elementary School Lunch Selections: Proof of Concept
ERIC Educational Resources Information Center
Loeb, Katharine L.; Radnitz, Cynthia; Keller, Kathleen L.; Schwartz, Marlene B.; Zucker, Nancy; Marcus, Sue; Pierson, Richard N.; Shannon, Michael; DeLaurentis, Danielle
2018-01-01
Background: In this study, we applied behavioral economics to optimize elementary school lunch choices via parent-driven decisions. Specifically, this experiment tested an optimal defaults paradigm, examining whether strategically manipulating the health value of a default menu could be co-opted to improve school-based lunch selections. Methods:…
USDA-ARS?s Scientific Manuscript database
An improved ant colony optimization (ACO) formulation for the allocation of crops and water to different irrigation areas is developed. The formulation enables dynamic adjustment of decision variable options and makes use of visibility factors (VFs, the domain knowledge that can be used to identify ...
Spatial optimization of prairie dog colonies for black-footed ferret recovery
Michael Bevers; John G. Hof; Daniel W. Uresk; Gregory L. Schenbeck
1997-01-01
A discrete-time reaction-diffusion model for black-footed ferret release, population growth, and dispersal is combined with ferret carrying capacity constraints based on prairie dog population management decisions to form a spatial optimization model. Spatial arrangement of active prairie dog colonies within a ferret reintroduction area is optimized over time for...
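A discrete-time reaction-diffusion update of the kind named in the abstract can be sketched on a 1-D chain of colonies; all growth rates, dispersal fractions, and carrying capacities below are hypothetical, and the real model optimizes a 2-D spatial arrangement under management constraints.

```python
def step(pop, growth, diffuse, capacity):
    """One discrete-time reaction-diffusion update on a 1-D chain of
    colonies: growth capped by carrying capacity, then a fraction of each
    colony's population disperses equally to its two neighbors (dispersal
    off the ends of the chain is lost)."""
    n = len(pop)
    grown = [min(capacity[i], pop[i] * (1.0 + growth)) for i in range(n)]
    new = [0.0] * n
    for i in range(n):
        new[i] += grown[i] * (1.0 - diffuse)       # residents that stay
        share = grown[i] * diffuse / 2.0           # dispersers per neighbor
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                new[j] += share
    return [min(new[i], capacity[i]) for i in range(n)]

# a release of 10 animals next to one suitable and one unsuitable colony
pop = step([10.0, 0.0, 0.0], growth=0.5, diffuse=0.2,
           capacity=[20.0, 20.0, 0.0])
```

In the optimization model, the capacity vector is itself the decision: prairie dog management sets which colonies can support ferrets, and the arrangement is chosen to maximize the population this dynamic supports.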
Energy-Water Nexus: Balancing the Tradeoffs between Two-Level Decision Makers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaodong; Vesselinov, Velimir Valentinov
2016-09-03
The energy-water nexus has substantially increased in importance in recent years. Synergistic approaches based on systems analysis and mathematical models are critical for helping decision makers better understand the interrelationships and tradeoffs between energy and water. In energy-water nexus management, various decision makers with different goals and preferences, which are often conflicting, are involved. These decision makers may have different controlling power over the management objectives and the decisions. They make decisions sequentially from the upper level to the lower level, challenging decision making in the energy-water nexus. To address such planning issues, a bi-level decision model is developed, which improves upon existing studies by integrating bi-level programming into energy-water nexus management. The developed model represents a methodological contribution to the challenge of sequential decision making in the energy-water nexus through provision of an integrated modeling framework/tool. An interactive fuzzy optimization methodology is introduced to seek a satisfactory solution that meets the overall satisfaction of the two-level decision makers. The tradeoffs between the two-level decision makers in energy-water nexus management are effectively addressed and quantified. Application of the proposed model to a synthetic example problem has demonstrated its applicability in practical energy-water nexus management. Optimal solutions for electricity generation, fuel supply, water supply (including groundwater, surface water, and recycled water), capacity expansion of the power plants, and GHG emission control are generated. In conclusion, these analyses are capable of helping decision makers or stakeholders adjust their tolerances to make informed decisions that achieve the overall satisfaction of energy-water nexus management where a bi-level sequential decision-making process is involved.
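The interactive fuzzy max-min compromise mentioned above can be sketched with linear membership functions. The candidate plans and aspiration ranges below are invented for illustration; the actual model embeds this step inside a bi-level program.

```python
def satisfaction(value, worst, best):
    """Linear membership: 0 at the decision maker's worst acceptable
    outcome, 1 at the best, clipped to [0, 1] in between."""
    if best == worst:
        return 1.0
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def overall_satisfaction(upper_value, lower_value, upper_range, lower_range):
    """Max-min compromise used in interactive fuzzy optimization: overall
    satisfaction is the worse of the two decision makers' levels."""
    return min(satisfaction(upper_value, *upper_range),
               satisfaction(lower_value, *lower_range))

# candidate plans: (upper-level objective, lower-level objective) values
plans = [(80.0, 30.0), (60.0, 55.0), (40.0, 70.0)]
best_plan = max(plans, key=lambda p: overall_satisfaction(
    p[0], p[1], (0.0, 100.0), (0.0, 100.0)))
```

Maximizing the minimum satisfaction steers the choice away from plans that strongly favor one level at the other's expense, which is exactly the tolerance-adjustment loop the abstract describes.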
A stochastic discrete optimization model for designing container terminal facilities
NASA Astrophysics Data System (ADS)
Zukhruf, Febri; Frazila, Russ Bona; Burhani, Jzolanda Tsavalista
2017-11-01
As uncertainty essentially affects the total transportation cost, it remains an important consideration in container terminals, which incorporate several modes and transshipment processes. This paper presents a stochastic discrete optimization model for designing a container terminal, involving decisions on facility improvement actions. The container terminal operation model is constructed by accounting for variation in demand and facility performance. In addition, to illustrate a conflicting issue that arises in practical terminal operation, the model also takes into account the possible incremental delay of facilities due to the increasing amount of equipment, especially container trucks. These variations are expected to reflect the uncertainty in container terminal operation. A Monte Carlo simulation is invoked to propagate the variations by following the observed distributions. The problem is constructed within the framework of combinatorial optimization to investigate the optimal facility improvement decision. A new variant of glowworm swarm optimization (GSO), rarely explored in the transportation field, is proposed for solving the optimization. The model's applicability is tested against the actual characteristics of a container terminal.
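The Monte Carlo evaluation of discrete improvement alternatives can be sketched as follows. The demand distribution, cost coefficients, and candidate options are all invented for illustration, and a fixed seed gives common random numbers across options so their estimates are directly comparable.

```python
import random

def expected_system_cost(improvement_cost, service_rate, n_draws=5000, seed=7):
    """Monte Carlo estimate of expected cost for one improvement option:
    demand and handling performance vary randomly, and cost grows with the
    demand/service imbalance (a stand-in for delay at the terminal)."""
    rng = random.Random(seed)  # common random numbers across options
    total = 0.0
    for _ in range(n_draws):
        demand = rng.gauss(100.0, 15.0)                   # containers per day
        effective = service_rate * rng.uniform(0.8, 1.0)  # equipment variability
        delay_cost = 2.0 * max(demand - effective, 0.0)
        total += improvement_cost + delay_cost
    return total / n_draws

# candidate improvement actions: (capital cost, resulting service rate)
options = {"keep": (0.0, 90.0),
           "add_crane": (25.0, 120.0),
           "expand": (60.0, 150.0)}
best = min(options, key=lambda k: expected_system_cost(*options[k]))
```

In the full model this evaluation sits inside a combinatorial search (the GSO variant); here an exhaustive comparison over three options suffices to show the simulation-optimization structure.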
Woodard, Terri L; Hoffman, Aubri S; Covarrubias, Laura A; Holman, Deborah; Schover, Leslie; Bradford, Andrea; Hoffman, Derek B; Mathur, Aakrati; Thomas, Jerah; Volk, Robert J
2018-02-01
To improve survivors' awareness and knowledge of fertility preservation counseling and treatment options, this study engaged survivors and providers to design, develop, and field-test Pathways: a fertility preservation patient decision aid website for young women with cancer©. Using an adapted user-centered design process, our stakeholder advisory group and research team designed and optimized the Pathways patient decision aid website through four iterative cycles of review and revision with clinicians (n = 21) and survivors (n = 14). Field-testing (n = 20 survivors) assessed post-decision aid scores on the Fertility Preservation Knowledge Scale, feasibility of assessing women's decision-making values while using the website, and website usability/acceptability ratings. Iterative stakeholder engagement optimized the Pathways decision aid website to meet survivors' and providers' needs, including providing patient-friendly information and novel features such as interactive value clarification exercises, testimonials that model shared decision making, financial/referral resources, and a printable personal summary. Survivors scored an average of 8.2 out of 13 (SD 1.6) on the Fertility Preservation Knowledge Scale. They rated genetic screening and having a biological child as strong factors in their decision-making, and 71% indicated a preference for egg freezing. Most women (> 85%) rated Pathways favorably, and all women (100%) said they would recommend it to other women. The Pathways decision aid is a usable and acceptable tool to help women learn about fertility preservation. The Pathways decision aid may help women make well-informed values-based decisions and prevent future infertility-related distress.
Heuristic-based information acquisition and decision making among pilots.
Wiggins, Mark W; Bollwerk, Sandra
2006-01-01
This research was designed to examine the impact of heuristic-based approaches to the acquisition of task-related information on the selection of an optimal alternative during simulated in-flight decision making. The work integrated features of naturalistic and normative decision making and strategies of information acquisition within a computer-based, decision support framework. The study comprised two phases, the first of which involved familiarizing pilots with three different heuristic-based strategies of information acquisition: frequency, elimination by aspects, and majority of confirming decisions. The second stage enabled participants to choose one of the three strategies of information acquisition to resolve a fourth (choice) scenario. The results indicated that task-oriented experience, rather than the information acquisition strategies, predicted the selection of the optimal alternative. It was also evident that of the three strategies available, the elimination by aspects information acquisition strategy was preferred by most participants. It was concluded that task-oriented experience, rather than the process of information acquisition, predicted task accuracy during the decision-making task. It was also concluded that pilots have a preference for one particular approach to information acquisition. Applications of outcomes of this research include the development of decision support systems that adapt to the information-processing capabilities and preferences of users.
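An elimination-by-aspects strategy screens alternatives cue by cue, in decreasing order of cue importance, until one option remains. A minimal sketch with hypothetical diversion-airport data (names, aspects, and thresholds all invented):

```python
# Candidate diversion airports described by aspects (illustrative data only).
alternatives = {
    "ALPHA":   {"weather_ok": True,  "runway_m": 2400, "fuel_ok": True,  "maint": False},
    "BRAVO":   {"weather_ok": True,  "runway_m": 1800, "fuel_ok": True,  "maint": True},
    "CHARLIE": {"weather_ok": False, "runway_m": 3000, "fuel_ok": True,  "maint": True},
    "DELTA":   {"weather_ok": True,  "runway_m": 2200, "fuel_ok": False, "maint": True},
}

# Aspects in descending order of importance; each is a pass/fail predicate.
aspects = [
    ("weather above minima", lambda a: a["weather_ok"]),
    ("runway >= 2000 m",     lambda a: a["runway_m"] >= 2000),
    ("fuel within range",    lambda a: a["fuel_ok"]),
    ("maintenance on field", lambda a: a["maint"]),
]

def eliminate_by_aspects(alts, aspects):
    """Drop alternatives failing each aspect in turn until one survives."""
    remaining = dict(alts)
    for _name, ok in aspects:
        if len(remaining) == 1:
            break
        survivors = {k: v for k, v in remaining.items() if ok(v)}
        if survivors:                 # never eliminate every alternative
            remaining = survivors
    return sorted(remaining)          # alphabetical tie-break

choice = eliminate_by_aspects(alternatives, aspects)[0]
```

The weather cue removes CHARLIE, the runway cue removes BRAVO, and the fuel cue removes DELTA, leaving ALPHA without ever comparing all attributes of all options, which is what makes the heuristic frugal.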
Application of risk analysis in water resources management
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil; Palogos, Ioannis
2017-04-01
A common cost-benefit analysis approach, which is novel in the risk analysis of hydrologic/hydraulic applications, and a Bayesian decision analysis are applied to aid the decision on whether or not to construct a water reservoir for irrigation purposes. The alternative option examined applies a scaled parabolic fine that varies with over-pumping violations, in contrast to common practices that usually consider fixed short-term fines. Such an application, in this level of detail, is novel. The results indicate that the probability uncertainty is the driving issue that determines the optimal decision under each methodology, and depending on how the unknown probability is handled, each methodology may lead to a different optimal decision. Thus, the proposed tool can help decision makers (stakeholders) examine and compare different scenarios using two different approaches before making a decision, considering the cost of a hydrologic/hydraulic project and the varied economic charges that water table limit violations can cause inside an audit interval. In contrast to practices that assess the effect of each proposed action separately, considering only current knowledge of the examined issue, this tool aids decision making by considering prior information and the sampling distribution of future successful audits. The tool is deployed as a web service for easier stakeholder access.
A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.
Leibfried, Felix; Braun, Daniel A
2015-08-01
Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought of as optimizing an objective function that trades off the decision maker's utility, or cumulative reward, against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward.
Analysis and optimization of hybrid electric vehicle thermal management systems
NASA Astrophysics Data System (ADS)
Hamut, H. S.; Dincer, I.; Naterer, G. F.
2014-02-01
In this study, the thermal management system of a hybrid electric vehicle is optimized using single- and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined, and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic, and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate, and environmental impact rate for the baseline system are 0.29, 28 ¢/h, and 77.3 mPts/h, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
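LINMAP-style selection from a Pareto frontier reduces to normalizing the criteria and picking the solution closest to the ideal point. A sketch with illustrative candidate solutions (only the first row loosely echoes the reported baseline values; the other rows and the selection outcome are invented):

```python
import numpy as np

# Pareto-optimal candidates: (exergy efficiency, cost rate ¢/h, env. impact mPts/h).
pareto = np.array([
    [0.290, 28.0, 77.3],     # baseline-like point
    [0.330, 26.6, 88.1],     # cheaper, dirtier (invented)
    [0.325, 29.5, 73.4],     # cleaner, costlier (invented)
])
sense = np.array([+1, -1, -1])            # maximize efficiency, minimize the rest

# Normalize each criterion to [0, 1], then measure distance to the ideal point.
lo, hi = pareto.min(axis=0), pareto.max(axis=0)
norm = (pareto - lo) / (hi - lo)
ideal = np.where(sense > 0, 1.0, 0.0)     # best achievable value per criterion
dist = np.linalg.norm(norm - ideal, axis=1)
selected = int(np.argmin(dist))           # LINMAP pick: closest to ideal
```

With these numbers the second candidate is selected; changing the per-criterion weights (here implicitly equal) shifts which Pareto point ends up nearest the ideal.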
Local Approximation and Hierarchical Methods for Stochastic Optimization
NASA Astrophysics Data System (ADS)
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision processes problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with a reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion.
We test these methods on historical price data from the PJM Interconnect and show that it outperforms the baseline approach used in the industry.
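The low-rank completion idea can be illustrated on a synthetic problem: build an exactly rank-2 "value matrix", observe half its entries, and recover the rest by alternating least squares. The sizes, rank, sampling rate, and regularization below are all invented and not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 30
r = 2
V = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # true rank-2 value matrix
mask = rng.random((m, n)) < 0.5                          # observe ~50% of entries

# Alternating least squares for matrix completion: V ~= A @ B.T
A = rng.normal(size=(m, r))
B = rng.normal(size=(n, r))
lam = 1e-3                                               # small ridge term
for _ in range(50):
    for i in range(m):                                   # update row factors
        idx = np.where(mask[i])[0]
        Bi = B[idx]
        A[i] = np.linalg.solve(Bi.T @ Bi + lam * np.eye(r), Bi.T @ V[i, idx])
    for j in range(n):                                   # update column factors
        idx = np.where(mask[:, j])[0]
        Ai = A[idx]
        B[j] = np.linalg.solve(Ai.T @ Ai + lam * np.eye(r), Ai.T @ V[idx, j])

rel_err = np.linalg.norm(A @ B.T - V) / np.linalg.norm(V)
```

Because the true matrix is exactly low rank and half the entries are observed, the reconstruction error is essentially negligible; in the battery application the payoff is that only a small subset of states needs an exact backward-DP evaluation.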
Reamer, Elyse; Yang, Felix; Holmes-Rovner, Margaret; Liu, Joe; Xu, Jinping
2017-01-01
Optimal treatment for localized prostate cancer (LPC) is controversial. We assessed the effects of personality, specialists seen, and involvement of spouse, family, or friends on treatment decision/decision-making qualities. We surveyed a population-based sample of men ≤ 75 years with newly diagnosed LPC about treatment choice, reasons for the choice, decision-making difficulty, satisfaction, and regret. Of 160 men (71 black, 89 white), with a mean age of 61 (±7.3) years, 59% chose surgery, 31% chose radiation, and 10% chose active surveillance (AS)/watchful waiting (WW). Adjusting for age, race, comorbidity, tumor risk level, and treatment status, men who consulted friends during decision-making were more likely to choose curative treatment (radiation or surgery) than WW/AS (OR = 11.1, p < 0.01; 8.7, p < 0.01). Men who saw a radiation oncologist in addition to a urologist were more likely to choose radiation than surgery (OR = 6.0, p = 0.04). Men who consulted family or friends (OR = 2.6, p < 0.01; 3.7, p < 0.01) experienced greater decision-making difficulty. No personality traits (pessimism, optimism, or faith) were associated with treatment choice/decision-making quality measures. In addition to specialist seen, consulting friends increased men's likelihood of choosing curative treatment. Consulting family or friends increased decision-making difficulty.
Optimal design and use of retry in fault tolerant real-time computer systems
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Shin, K. G.
1983-01-01
A new method to determine an optimal retry policy, and to use retry for fault characterization, is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations to minimize the total task completion time, was derived. A combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, was carried out. Two solution approaches were developed, one based on point estimation and the other on Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull, and gamma distributions. These are standard distributions commonly used for modeling and analyzing faults.
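Under a simplified version of this setup (a fault is transient with probability p and clears after an exponential time; otherwise reconfiguration costs C), the expected completion overhead as a function of retry duration can be minimized numerically and checked against the closed form. The model and all parameter values below are illustrative, not the paper's:

```python
import numpy as np

# Simplified retry model: with probability p the fault is transient and clears
# after an Exp(mu) time; retrying up to duration t either succeeds or falls
# back to reconfiguration with overhead C. Expected overhead:
#   E(t) = p * ( (1/mu) * (1 - exp(-mu t)) + C * exp(-mu t) ) + (1 - p) * (t + C)
p, mu, C = 0.9, 2.0, 5.0

t = np.linspace(0.0, 5.0, 5001)
E = p * ((1 / mu) * (1 - np.exp(-mu * t)) + C * np.exp(-mu * t)) + (1 - p) * (t + C)
t_opt = t[np.argmin(E)]                       # numerically optimal retry duration

# Setting dE/dt = 0 gives the closed form (valid when mu*C > 1):
#   t* = (1/mu) * ln( p * (mu*C - 1) / (1 - p) )
t_star = (1 / mu) * np.log(p * (mu * C - 1) / (1 - p))
```

With these numbers the optimal retry duration is about 2.20 time units: retrying longer mostly wastes time on permanent faults, while giving up earlier forfeits cheap recoveries from transient ones.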
Derivative Trade Optimizing Model Utilizing GP Based on Behavioral Finance Theory
NASA Astrophysics Data System (ADS)
Matsumura, Koki; Kawamoto, Masaru
This paper proposes a new technique that builds strategy trees for derivative (option) trading investment decisions based on behavioral finance theory and optimizes them using evolutionary computation, in order to achieve high profitability. The strategy tree uses technical analysis based on statistical, experience-based techniques for the investment decision. The trading model is represented by various technical indexes, and the strategy tree is optimized by genetic programming (GP), one of the evolutionary computation methods. Moreover, this paper proposes a method that uses prospect theory, from behavioral finance, to set a psychological bias for profit and loss, and attempts to select the appropriate option strike price for higher investment efficiency. This technique produced good results and demonstrated the effectiveness of the trading model through the optimized dealing strategy.
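The prospect-theory bias for profit and loss can be sketched with the standard Tversky-Kahneman value function (parameters alpha = beta = 0.88, lambda = 2.25); the two option prospects below are invented for illustration and are not the paper's trading rules:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains, convex and
    steeper (loss-averse) for losses, relative to a zero reference point."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def prospect_value(outcomes):
    """Subjective value of a prospect given as [(probability, payoff), ...]."""
    return sum(p * value(x) for p, x in outcomes)

# Illustrative comparison: a symmetric large-swing position vs. a small,
# high-probability premium with a rare larger loss (payoffs invented).
hold  = [(0.5, 120.0), (0.5, -100.0)]
write = [(0.9, 15.0), (0.1, -60.0)]
```

Although both prospects have positive expected monetary value, loss aversion makes the symmetric large-swing position subjectively unattractive while the small-premium prospect stays attractive, which is the kind of bias the paper encodes when scoring strategy trees.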
NASA Technical Reports Server (NTRS)
Paudel, Krishna P.; Limaye, Ashutosh; Hatch, Upton; Cruise, James; Musleh, Fuad
2005-01-01
We developed a dynamic model to optimize irrigation application for three major crops (corn, cotton, and peanuts) grown in the Southeast USA. The water supply amount is generated from an engineering model, which is then combined with economic models to find the optimal amount of irrigation water to apply to each crop field during the six critical water-deficit weeks in summer. Results indicate that water is applied to the crop with the highest marginal value product of irrigation. A decision-making tool such as the one developed here would help farmers and policy makers find the most profitable solution when water shortage is a serious concern.
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
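For the simpler discrete-time, discounted special case, the occupation-measure linear program can be written down directly: minimize the primary discounted cost over occupation measures subject to flow-balance constraints and a bound on a secondary discounted cost. The two-state, two-action instance below is invented for illustration (no impulsive controls):

```python
import numpy as np
from scipy.optimize import linprog

gamma, nS, nA = 0.9, 2, 2
alpha = np.array([0.5, 0.5])                     # initial state distribution
P = np.zeros((nS, nA, nS))
P[:, 0, :] = np.eye(nS)                          # action 0: stay put
P[:, 1, :] = 0.5                                 # action 1: uniform jump
c = np.array([[0.0, 1.0], [0.0, 1.0]])           # primary cost rate c(s, a)
d = np.array([[1.0, 0.0], [1.0, 0.0]])           # secondary cost rate d(s, a)
budget = 2.0                                     # bound on discounted secondary cost

# Flow-balance constraints on the occupation measure mu(s, a):
#   sum_a mu(s', a) - gamma * sum_{s, a} P(s' | s, a) mu(s, a) = alpha(s')
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]

res = linprog(c.ravel(), A_ub=[d.ravel()], b_ub=[budget],
              A_eq=A_eq, b_eq=alpha, bounds=(0, None), method="highs")
mu = res.x.reshape(nS, nA)
```

Total occupation mass is 1/(1 - gamma) = 10; the budget forces at most 2 units of mass onto the free action, so the optimal primary cost is 8, attained by a randomized stationary policy recovered from mu(s, a) / sum_a mu(s, a).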
Form and Objective of the Decision Rule in Absolute Identification
NASA Technical Reports Server (NTRS)
Balakrishnan, J. D.
1997-01-01
In several conditions of a line length identification experiment, the subjects' decision making strategies were systematically biased against the responses on the edges of the stimulus range. When the range and number of the stimuli were small, the bias caused the percentage of correct responses to be highest in the center and lowest on the extremes of the range. Two general classes of decision rules that would explain these results are considered. The first class assumes that subjects intend to adopt an optimal decision rule, but systematically misrepresent one or more parameters of the decision making context. The second class assumes that subjects use a different measure of performance than the one assumed by the experimenter: instead of maximizing the chances of a correct response, the subject attempts to minimize the expected size of the response error (a "fidelity criterion"). In a second experiment, extended experience and feedback did not diminish the bias effect, but explicitly penalizing all response errors equally, regardless of their size, did reduce or eliminate it in some subjects. Both results favor the fidelity criterion over the optimal rule.
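The contrast between the optimal rule and the fidelity criterion is easy to make concrete: given a posterior over stimulus identities, the optimal rule picks the posterior mode (maximizing the chance of a correct response), while the fidelity criterion minimizes the expected absolute response error. The posterior below is invented for illustration:

```python
import numpy as np

# Hypothetical posterior over four line-length identities given one observation.
posterior = np.array([0.40, 0.00, 0.35, 0.25])
responses = np.arange(1, 5)

# Optimal rule: maximize P(correct) -> posterior mode.
map_choice = int(responses[np.argmax(posterior)])

# Fidelity criterion: minimize expected |response - true identity|.
exp_abs_err = [float(np.sum(posterior * np.abs(r - responses))) for r in responses]
fidelity_choice = int(responses[np.argmin(exp_abs_err)])
```

Here the optimal rule answers 1 (the mode), while the fidelity criterion answers 3, pulling the response toward the center of the range even though it is not the most probable identity, which is exactly the edge-avoidance bias described above.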
Clinical errors that can occur in the treatment decision-making process in psychotherapy.
Park, Jake; Goode, Jonathan; Tompkins, Kelley A; Swift, Joshua K
2016-09-01
Clinical errors occur in the psychotherapy decision-making process whenever a less-than-optimal treatment or approach is chosen when working with clients. A less-than-optimal approach may be one that a client is unwilling to try or fully invest in based on his/her expectations and preferences, or one that may have little chance of success based on contraindications and/or limited research support. The doctor knows best and the independent choice models are two decision-making models that are frequently used within psychology, but both are associated with an increased likelihood of errors in the treatment decision-making process. In particular, these models fail to integrate all three components of the definition of evidence-based practice in psychology (American Psychological Association, 2006). In this article we describe both models and provide examples of clinical errors that can occur in each. We then introduce the shared decision-making model as an alternative that is less prone to clinical errors. PsycINFO Database Record (c) 2016 APA, all rights reserved
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
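A policy acceptability curve of this kind can be approximated by probabilistic sensitivity analysis: sample the uncertain parameters jointly, and for each willingness-to-pay value record how often the base policy has the higher net benefit. All distributions and values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000

# Uncertain effectiveness (QALY-like) and cost for base policy A vs. comparator B.
eff_A = rng.normal(0.80, 0.05, N)
eff_B = rng.normal(0.70, 0.05, N)
cost_A = rng.normal(1000.0, 100.0, N)
cost_B = rng.normal(500.0, 100.0, N)

# Acceptability of A at each willingness-to-pay (WTP) value:
# fraction of parameter draws where A's incremental net benefit is positive.
wtp_grid = np.linspace(0.0, 10000.0, 21)
acceptability = [float(np.mean(w * (eff_A - eff_B) - (cost_A - cost_B) > 0))
                 for w in wtp_grid]
```

At low willingness to pay the cheaper comparator dominates almost every draw; as the threshold rises, the more effective but costlier base policy becomes acceptable in a growing fraction of draws, tracing out the curve.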
Application of Bayesian and cost benefit risk analysis in water resources management
NASA Astrophysics Data System (ADS)
Varouchakis, E. A.; Palogos, I.; Karatzas, G. P.
2016-03-01
Decision making is a significant tool in water resources management applications. This technical note approaches a decision dilemma that has not yet been considered for the water resources management of a watershed. A common cost-benefit analysis approach, which is novel in the risk analysis of hydrologic/hydraulic applications, and a Bayesian decision analysis are applied to aid the decision on whether or not to construct a water reservoir for irrigation purposes. The alternative option examined applies a scaled parabolic fine that varies with over-pumping violations, in contrast to common practices that usually consider fixed short-term fines. The methodological steps are presented analytically, together with originally developed code. Such an application, in this level of detail, is novel. The results indicate that the probability uncertainty is the driving issue that determines the optimal decision under each methodology, and depending on how the unknown probability is handled, each methodology may lead to a different optimal decision. Thus, the proposed tool can help decision makers examine and compare different scenarios using two different approaches before making a decision, considering the cost of a hydrologic/hydraulic project and the varied economic charges that water table limit violations can cause inside an audit interval. In contrast to practices that assess the effect of each proposed action separately, considering only current knowledge of the examined issue, this tool aids decision making by considering prior information and the sampling distribution of future successful audits.
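The Bayesian side of such an analysis can be sketched as a conjugate Beta-Binomial update followed by an expected-cost comparison under a parabolic (quadratic) fine. Every number below (prior, audit record, horizon, fine coefficient, reservoir cost) is invented and not from the note:

```python
# Prior on the per-audit violation probability theta: Beta(2, 8).
a0, b0 = 2, 8
violations, audits = 6, 10                  # hypothetical observed audit record
a1, b1 = a0 + violations, b0 + (audits - violations)
theta = a1 / (a1 + b1)                      # posterior mean of theta

# Planning horizon of m future audits; parabolic fine coeff * v^2 on the
# number of violations v. Plugging in the posterior mean (a simplification),
# v ~ Binomial(m, theta), so E[v^2] = m*theta*(1-theta) + (m*theta)^2.
m = 20
fine_coeff = 100.0
expected_fine = fine_coeff * (m * theta * (1 - theta) + (m * theta) ** 2)

reservoir_cost = 5000.0                     # fixed cost of building the reservoir
decision = "build reservoir" if reservoir_cost < expected_fine else "pay fines"
```

With these numbers the posterior mean violation probability is 0.4, the expected quadratic fine (6880) exceeds the reservoir cost, and the Bayesian expected-cost criterion favors building; a full analysis would integrate over the Beta posterior rather than plug in its mean.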
Overcoming Indecision by Changing the Decision Boundary
2017-01-01
The dominant theoretical framework for decision making asserts that people make decisions by integrating noisy evidence to a threshold. It has recently been shown that in many ecologically realistic situations, decreasing the decision boundary maximizes the reward available from decisions. However, empirical support for decreasing boundaries in humans is scant. To investigate this problem, we used an ideal observer model to identify the conditions under which participants should change their decision boundaries with time to maximize reward rate. We conducted 6 expanded-judgment experiments that precisely matched the assumptions of this theoretical model. In this paradigm, participants could sample noisy, binary evidence presented sequentially. Blocks of trials were fixed in duration, and each trial was an independent reward opportunity. Participants therefore had to trade off speed (getting as many rewards as possible) against accuracy (sampling more evidence). Having access to the actual evidence samples experienced by participants enabled us to infer the slope of the decision boundary. We found that participants indeed modulated the slope of the decision boundary in the direction predicted by the ideal observer model, although we also observed systematic deviations from optimality. Participants using suboptimal boundaries do so in a robust manner, so that any error in their boundary setting is relatively inexpensive. The use of a normative model provides insight into what variable(s) human decision makers are trying to optimize. Furthermore, this normative model allowed us to choose diagnostic experiments and in doing so we present clear evidence for time-varying boundaries. PMID:28406682
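An ideal-observer decision rule with a collapsing boundary can be sketched as sequential log-likelihood-ratio accumulation against a threshold that shrinks with time. The evidence reliability q, the boundary schedule, and the sample sequence below are invented for illustration:

```python
import math

def decide(samples, q=0.6, b0=1.2, slope=0.15, b_min=0.2):
    """Accumulate the log-likelihood ratio from binary evidence samples and
    respond once |LLR| crosses a linearly collapsing boundary."""
    step = math.log(q / (1 - q))            # LLR increment per sample
    llr = 0.0
    for t, s in enumerate(samples, start=1):
        llr += step if s == 1 else -step
        bound = max(b_min, b0 - slope * t)  # boundary decreases with time
        if abs(llr) >= bound:
            return (1 if llr > 0 else 0), t
    return None, len(samples)               # no commitment within the trial

choice, rt = decide([1, 1, 0, 1, 1, 1])
```

In this trace the accumulated evidence (about 0.81 after four samples) never reaches the initial boundary of 1.2, but the boundary has collapsed to 0.6 by step four, so the observer commits early, trading a little accuracy for more reward opportunities per block.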
The use of decision analysis to examine ethical decision making by critical care nurses.
Hughes, K K; Dvorak, E M
1997-01-01
To examine the extent to which critical care staff nurses make ethical decisions that coincide with those recommended by a decision analytic model. Nonexperimental, ex post facto. Midwestern university-affiliated 500-bed tertiary care medical center. One hundred critical care staff nurses randomly selected from seven critical care units. Complete responses were obtained from 82 nurses (for a final response rate of 82%). The dependent variable--consistent decision making--was measured as staff nurses' abilities to make ethical decisions that coincided with those prescribed by the decision model. Subjects completed two instruments, the Ethical Decision Analytic Model, a computer-administered instrument designed to measure staff nurses' abilities to make consistent decisions about a chemically-impaired colleague; and a Background Inventory. The results indicate marked consensus among nurses when informal methods were used. However, there was little consistency between the nurses' informal decisions and those recommended by the decision analytic model. Although 50% (n = 41) of all nurses chose a course of action that coincided with the model's least optimal alternative, few nurses agreed with the model as to the most optimal course of action. The findings also suggest that consistency was unrelated (p > 0.05) to the nurses' educational background or years of clinical experience; that most subjects reported receiving little or no education in decision making during their basic nursing education programs; but that exposure to decision-making strategies was related to years of nursing experience (p < 0.05). The findings differ from related studies that have found a moderate degree of consistency between nurses and decision analytic models for strictly clinical decision tasks, especially when those tasks were less complex.
However, the findings partially coincide with other findings that decision analysis may not be particularly well-suited to the critical care environment. Additional research is needed to determine whether critical care nurses use the same decision-making methods as do other nurses; and to clarify the effects of decision task (clinical versus ethical) on nurses' decision making. It should not be assumed that methods used to study nurses' clinical decision making are applicable for all nurses or all types of decisions, including ethical decisions.
OPTIMIZING USABILITY OF AN ECONOMIC DECISION SUPPORT TOOL: PROTOTYPE OF THE EQUIPT TOOL.
Cheung, Kei Long; Hiligsmann, Mickaël; Präger, Maximilian; Jones, Teresa; Józwiak-Hagymásy, Judit; Muñoz, Celia; Lester-George, Adam; Pokhrel, Subhash; López-Nicolás, Ángel; Trapero-Bertran, Marta; Evers, Silvia M A A; de Vries, Hein
2018-01-01
Economic decision-support tools can provide valuable information for tobacco control stakeholders, but their usability may impact the adoption of such tools. This study aims to illustrate a mixed-method usability evaluation of an economic decision-support tool for tobacco control, using the EQUIPT ROI tool prototype as a case study. A cross-sectional mixed-methods design was used, including a heuristic evaluation, a thinking-aloud approach, and a questionnaire testing and exploring the usability of the Return on Investment tool. A total of sixty-six users evaluated the tool (thinking aloud) and completed the questionnaire. For the heuristic evaluation, four experts evaluated the interface. In total, twenty-one percent of the respondents perceived good usability. A total of 118 usability problems were identified, of which twenty-six were categorized as most severe, indicating a high priority to fix them before implementation. Combining user-based and expert-based evaluation methods is recommended, as these were shown to identify unique usability problems. The evaluation provides input to optimize usability of a decision-support tool, and may serve as a vantage point for other developers to conduct usability evaluations to refine similar tools before wide-scale implementation. Such studies could reduce implementation gaps by optimizing usability, enhancing in turn the research impact of such interventions.
Managing and learning with multiple models: Objectives and optimization algorithms
Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.
2011-01-01
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
Making the Optimal Decision in Selecting Protective Clothing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, J. Mark
2008-01-15
Protective Clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for the decommissioning or maintenance and repair work on radioactive systems, a number of interrelating factors must be considered. This article discusses these factors as well as surveys of plants regarding their level of usage of single use protective clothing and should help individuals making decisions about protective clothing as it applies to their application. Individuals considering using SUPC should not jump to conclusions. The survey conducted clearly indicates that plants have different drivers. An evaluation should be performed to understand the facility's true drivers for selecting clothing. It is recommended that an interdisciplinary team be formed including representatives from budgets and cost, safety, radwaste, health physics, and key user groups to perform the analysis. The right questions need to be asked and answered by the company providing the clothing to formulate a proper perspective and conclusion. The conclusions and recommendations need to be shared with senior management so that the drivers, expected results, and associated costs are understood and endorsed. In the end, the individual making the recommendation should ask himself/herself: 'Is my decision emotional, or logical and economical?' 'Have I reached the optimal decision for my plant?'.
Structured decision making for managing pneumonia epizootics in bighorn sheep
Sells, Sarah N.; Mitchell, Michael S.; Edwards, Victoria L.; Gude, Justin A.; Anderson, Neil J.
2016-01-01
Good decision-making is essential to conserving wildlife populations. Although there may be multiple ways to address a problem, perfect solutions rarely exist. Managers are therefore tasked with identifying decisions that will best achieve desired outcomes. Structured decision making (SDM) is a method of decision analysis used to identify the most effective, efficient, and realistic decisions while accounting for values and priorities of the decision maker. The stepwise process includes identifying the management problem, defining objectives for solving the problem, developing alternative approaches to achieve the objectives, and formally evaluating which alternative is most likely to accomplish the objectives. The SDM process can be more effective than informal decision-making because it provides a transparent way to quantitatively evaluate decisions for addressing multiple management objectives while incorporating science, uncertainty, and risk tolerance. To illustrate the application of this process to a management need, we present an SDM-based decision tool developed to identify optimal decisions for proactively managing risk of pneumonia epizootics in bighorn sheep (Ovis canadensis) in Montana. Pneumonia epizootics are a major challenge for managers due to long-term impacts to herds, epistemic uncertainty in timing and location of future epizootics, and consequent difficulty knowing how or when to manage risk. The decision tool facilitates analysis of alternative decisions for how to manage herds based on predictions from a risk model, herd-specific objectives, and predicted costs and benefits of each alternative. Decision analyses for 2 example herds revealed that meeting management objectives necessitates specific approaches unique to each herd. The analyses showed how and under what circumstances the alternatives are optimal compared to other approaches and current management. 
Managers can be confident that these decisions are effective, efficient, and realistic because they explicitly account for important considerations managers implicitly weigh when making decisions, including competing management objectives, uncertainty in potential outcomes, and risk tolerance.
OPTIMIZING BMP PLACEMENT AT WATERSHED-SCALE USING SUSTAIN
Watershed and stormwater managers need modeling tools to evaluate alternative plans for environmental quality restoration and protection needs in urban and developing areas. A watershed-scale decision-support system, based on cost optimization, provides an essential tool to suppo...