Strain gage based determination of mixed mode SIFs
NASA Astrophysics Data System (ADS)
Murthy, K. S. R. K.; Sarangi, H.; Chakraborty, D.
2018-05-01
Accurate determination of mixed mode stress intensity factors (SIFs) is essential in understanding and analysis of mixed mode fracture of engineering components. Only a few strain gage based determinations of mixed mode SIFs are reported in the literature, and those do not provide any prescription for the radial locations of strain gages to ensure accuracy of measurement. The present investigation experimentally demonstrates the efficacy of a proposed methodology for the accurate determination of mixed mode I/II SIFs using strain gages. The proposed approach is based on the modified Dally and Berger mixed mode technique. Using the proposed methodology, appropriate gage locations (optimal locations) for a given configuration have also been suggested, ensuring accurate determination of mixed mode SIFs. Experiments have been conducted by locating the gages at optimal and non-optimal locations to study the efficacy of the proposed approach. The experimental results from the present investigation show that highly accurate SIFs (0.064% error) can be determined using the proposed approach if the gages are located at the suggested optimal locations. On the other hand, the results also show that very high errors (212.22%) in measured SIFs are possible if the gages are located at non-optimal locations. The present work thus clearly substantiates the importance of knowing the optimal locations of the strain gages a priori for accurate determination of SIFs.
Surveillance versus Reconnaissance: An Entropy Based Model
2012-03-22
sensor detection since no new information is received. (Berry, Pontecorvo, & Fogg, Optimal Search, Location and Tracking of Surface Maritime Targets by... by Berry, Pontecorvo and Fogg (Berry, Pontecorvo, & Fogg, July 2003) facilitates the optimal solutions to dynamically determining the allocation and... region (Berry, Pontecorvo, & Fogg, July 2003). Phase II: Locate. During the locate phase, the objective was to determine the location of the targets
System and method for bullet tracking and shooter localization
Roberts, Randy S [Livermore, CA; Breitfeller, Eric F [Dublin, CA
2011-06-21
A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares problem for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
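As a rough illustration of the trajectory-estimation step described above, the sketch below runs a constant-velocity Kalman filter on streak centroid measurements. It is not the patented algorithm; the state vector, frame interval, and noise covariances are illustrative assumptions.

```python
import numpy as np

# Minimal constant-velocity Kalman filter sketch (not the patented system):
# state = [x, y, vx, vy]; measurements are streak centroid positions.
dt = 1.0 / 30.0                      # assumed frame interval
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)  # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)  # observe position only
Q = 1e-3 * np.eye(4)                 # assumed process noise
R = 4.0 * np.eye(2)                  # assumed measurement noise (pixels^2)

def kf_step(x, P, z):
    """One predict/update cycle for a new streak measurement z = [u, v]."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0, 5.0, 5.0])   # initial state guess
P = 10.0 * np.eye(4)
for z in [np.array([5.2, 4.8]), np.array([10.1, 9.9]), np.array([14.8, 15.3])]:
    x, P = kf_step(x, P, z)
print("estimated state:", x)
```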
Determination of a temperature sensor location for monitoring weld pool size in GMAW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boo, K.S.; Cho, H.S.
1994-11-01
This paper describes a method of determining the optimal sensor location to measure weldment surface temperature, which has a close correlation with weld pool size in the gas metal arc (GMA) welding process. Due to the inherent complexity and nonlinearity in the GMA welding process, the relationship between the weldment surface temperature and the weld pool size varies with the point of measurement. This necessitates an optimal selection of the measurement point to minimize the process nonlinearity effect in estimating the weld pool size from the measured temperature. To determine the optimal sensor location on the top surface of the weldment, the correlation between the measured temperature and the weld pool size is analyzed. The analysis is done by calculating the correlation function, which is based upon an analytical temperature distribution model. To validate the optimal sensor location, a series of GMA bead-on-plate welds are performed on a medium-carbon steel under various welding conditions. A comparison study is given in detail based upon the simulation and experimental results.
Determining Optimal College Locations
ERIC Educational Resources Information Center
Schofer, J. P.
1975-01-01
Location can be a critical determinant of the success of a college. Central Place Theory, as developed in geographic studies of population distribution patterns, can provide insights into the problem of evaluating college locations. In this way preferences of students can be balanced against economic, academic, and political considerations.…
Optimization of pressure gauge locations for water distribution systems using entropy theory.
Yoo, Do Guen; Chang, Dong Eil; Jun, Hwandon; Kim, Joong Hoon
2012-12-01
It is essential to select the optimal pressure gauge location for effective management and maintenance of water distribution systems. This study proposes an objective and quantified standard for selecting the optimal pressure gauge location by defining the pressure change at other nodes as a result of demand change at a specific node using entropy theory. Two cases are considered in terms of demand change: that in which demand at all nodes shows peak load by using a peak factor and that comprising the demand change of the normal distribution whose average is the base demand. The actual pressure change pattern is determined by using the emitter function of EPANET to reflect the pressure that changes practically at each node. The optimal pressure gauge location is determined by prioritizing the nodes that give the largest amount of information to (giving entropy) and receive the largest amount of information from (receiving entropy) the whole system according to the entropy standard. The suggested model is applied to one virtual and one real pipe network, and the optimal pressure gauge location combination is calculated by implementing a sensitivity analysis based on the study results. These analysis results support the following two conclusions. Firstly, the installation priority of pressure gauges in water distribution networks can be determined with a more objective standard through entropy theory. Secondly, the model can be used as an efficient decision-making guide for gauge installation in water distribution systems.
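A minimal sketch of the entropy bookkeeping described above, assuming a pressure-sensitivity matrix has already been produced by a hydraulic solver such as EPANET; the ranking rule and the synthetic matrix below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def node_entropies(dP):
    """dP[i, j]: pressure change observed at node i when demand changes at node j.
    Returns (receiving, giving) entropy per node by treating normalized pressure
    changes as a probability distribution (an illustrative convention)."""
    A = np.abs(dP)
    # receiving entropy: information node i receives from all demand changes j
    p_recv = A / A.sum(axis=1, keepdims=True)
    H_recv = -np.sum(np.where(p_recv > 0, p_recv * np.log(p_recv), 0.0), axis=1)
    # giving entropy: information a demand change at node j gives to the system
    p_give = A / A.sum(axis=0, keepdims=True)
    H_give = -np.sum(np.where(p_give > 0, p_give * np.log(p_give), 0.0), axis=0)
    return H_recv, H_give

rng = np.random.default_rng(0)
dP = rng.random((6, 6))                    # placeholder for solver-derived pressure changes
H_recv, H_give = node_entropies(dP)
ranking = np.argsort(-(H_recv + H_give))   # candidate gauge nodes, best first
print("gauge priority (node indices):", ranking)
```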
Application of level set method to optimal vibration control of plate structures
NASA Astrophysics Data System (ADS)
Ansari, M.; Khajepour, A.; Esmailzadeh, E.
2013-02-01
Vibration control plays a crucial role in many structures, especially in lightweight ones. One of the most commonly practiced methods to suppress the undesirable vibration of structures is to attach patches of constrained layer damping (CLD) onto the surface of the structure. In order to consider the weight efficiency of a structure, the best shapes and locations of the CLD patches should be determined to achieve the optimum vibration suppression with minimum usage of the CLD patches. This paper proposes a novel topology optimization technique that can determine the best shape and location of the applied CLD patches simultaneously. Passive vibration control is formulated in the context of the level set method, which is a numerical technique to track shapes and locations concurrently. The optimal damping set can be found in a structure, in its fundamental vibration mode, such that the maximum modal loss factor of the system is achieved. Two different plate structures are considered and the damping patches are optimally located on them; at the same time, the best shapes of the damping patches are determined. In one example, the numerical results are compared with those obtained from experimental tests to validate the accuracy of the proposed method. This comparison reveals the effectiveness of the level set approach in finding the optimum shape and location of the CLD patches.
Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S
2016-11-01
In the present paper, the optimal detector design is investigated for both detecting and locating light atoms from high resolution scanning transmission electron microscopy (HR STEM) images. The principles of detection theory are used to quantify the probability of error for the detection of light atoms from HR STEM images. To determine the optimal experiment design for locating light atoms, use is made of the so-called Cramér-Rao Lower Bound (CRLB). It is investigated if a single optimal design can be found for both the detection and location problem of light atoms. Furthermore, the incoming electron dose is optimised for both research goals and it is shown that picometre range precision is feasible for the estimation of the atom positions when using an appropriate incoming electron dose under the optimal detector settings to detect light atoms. Copyright © 2016 Elsevier B.V. All rights reserved.
Dieter, Cheryl A.; Fleck, William B.
2008-01-01
Potentiometric surfaces in the Piney Point-Nanjemoy, Aquia, and Upper Patapsco aquifers have declined from 1950 through 2000 throughout southern Maryland. In the vicinity of Lexington Park, Maryland, the potentiometric surface in the Aquia aquifer in 2000 was as much as 170 feet below sea level, approximately 150 feet lower than estimated pre-pumping levels before 1940. At the present rate, the water levels will have declined to the regulatory allowable maximum of 80 percent of available drawdown in the Aquia aquifer by about 2050. The effect of the withdrawals from these aquifers by the Naval Air Station Patuxent River and surrounding users on the declining potentiometric surface has raised concern for future availability of ground water. Growth at Naval Air Station Patuxent River may increase withdrawals, resulting in further drawdown. A ground-water-flow model, combined with optimization modeling, was used to develop withdrawal scenarios that minimize the effects (drawdown) of hypothetical future withdrawals. A three-dimensional finite-difference ground-water-flow model was developed to simulate the ground-water-flow system in the Piney Point-Nanjemoy, Aquia, and Upper Patapsco aquifers beneath the Naval Air Station Patuxent River. Transient and steady-state conditions were simulated to give water-resource managers additional tools to manage the ground-water resources. The transient simulation, representing 1900 through 2002, showed that the magnitude of withdrawal has increased over that time, causing ground-water flow to change direction in some areas. The steady-state simulation was linked to an optimization model to determine optimal solutions to hypothetical water-management scenarios. Two optimization scenarios were evaluated. The first scenario was designed to determine the optimal pumping rates for wells screened in the Aquia aquifer within three supply groups to meet a 25-percent increase in withdrawal demands, while minimizing the drawdown at a control location. The resulting optimal solution showed that pumping six wells above the rate required for maintenance produced the least amount of drawdown in the local potentiometric surface. The second hypothetical scenario was designed to determine the optimal location for an additional well in the Aquia aquifer in the northeastern part of the main air station. The additional well was needed to meet an increase in withdrawal of 43,000 cubic feet per day. The optimization model determined the optimal location for the new well, out of a possible 10 locations, while minimizing drawdown at control nodes located outside the western boundary of the main air station. The optimal location is about 1,500 feet to the east-northeast of the existing well.
Unsteady flow sensing and optimal sensor placement using machine learning
NASA Astrophysics Data System (ADS)
Semaan, Richard
2016-11-01
Machine learning is used to estimate the flow state and to determine the optimal sensor placement over a two-dimensional (2D) airfoil equipped with a Coanda actuator. The analysis is based on flow field data obtained from 2D unsteady Reynolds-averaged Navier-Stokes (uRANS) simulations with different jet blowing intensities and actuation frequencies, characterizing different flow separation states. This study shows how the "random forests" algorithm can be utilized beyond its typical usage in fluid mechanics (estimating the flow state) to determine the optimal sensor placement. The results are compared against the current de facto standard of maximum modal amplitude location and against a brute force approach that scans all possible sensor combinations. The results show that it is possible to simultaneously infer the state of flow and to determine the optimal sensor location without the need to perform proper orthogonal decomposition. Collaborative Research Center (CRC) 880, DFG.
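A toy sketch of using random-forest feature importances to rank candidate sensor locations, in the spirit of the abstract above. The data, labels, and the use of scikit-learn's impurity-based importances are illustrative assumptions rather than the author's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: rows = flow snapshots, columns = candidate wall-sensor signals,
# labels = discrete separation state. Feature importances then act as a proxy for
# which sensor locations are most informative about the state.
rng = np.random.default_rng(1)
n_snapshots, n_sensors = 400, 50
X = rng.normal(size=(n_snapshots, n_sensors))
y = (X[:, 7] + 0.5 * X[:, 23] > 0).astype(int)   # state driven by two "sensors"

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)

importance = rf.feature_importances_
best = np.argsort(-importance)[:5]
print("top candidate sensor indices:", best)
```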
On singular cases in the design derivative of Green's functional
NASA Technical Reports Server (NTRS)
Reiss, Robert
1987-01-01
The author's prior development of a general abstract representation for the design sensitivities of Green's functional for linear structural systems is extended to the case where the structural stiffness vanishes at an internal location. This situation often occurs in the optimal design of structures. Most optimality criteria require that optimally designed beams be statically determinate. For clamped-pinned beams, for example, this is possible only if the flexural stiffness vanishes at some intermediate location. The Green's function for such structures depends upon the stiffness and the location where it vanishes. A precise representation for Green's function's sensitivity to the location of vanishing stiffness is presented for beams and axisymmetric plates.
Simulating changes to emergency care resources to compare system effectiveness.
Branas, Charles C; Wolff, Catherine S; Williams, Justin; Margolis, Gregg; Carr, Brendan G
2013-08-01
To apply systems optimization methods to simulate and compare the most effective locations for emergency care resources as measured by access to care. This study was an optimization analysis of the locations of trauma centers (TCs), helicopter depots (HDs), and severely injured patients in need of time-critical care in select US states. Access was defined as the percentage of injured patients who could reach a level I/II TC within 45 or 60 minutes. Optimal locations were determined by a search algorithm that considered all candidate sites within a set of existing hospitals and airports in finding the best solutions that maximized access. Across a dozen states, existing access to TCs within 60 minutes ranged from 31.1% to 95.6%, with a mean of 71.5%. Access increased from 0.8% to 35.0% after optimal addition of one or two TCs. Access increased from 1.0% to 15.3% after optimal addition of one or two HDs. Relocation of TCs and HDs (optimal removal followed by optimal addition) produced similar results. Optimal changes to TCs produced greater increases in access to care than optimal changes to HDs although these results varied across states. Systems optimization methods can be used to compare the impacts of different resource configurations and their possible effects on access to care. These methods to determine optimal resource allocation can be applied to many domains, including comparative effectiveness and patient-centered outcomes research. Copyright © 2013 Elsevier Inc. All rights reserved.
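The kind of coverage search described above can be sketched as a greedy add heuristic over candidate sites. The travel-time matrix, the 60-minute threshold, and the greedy rule (used here instead of the study's exhaustive search algorithm) are assumptions for illustration.

```python
import numpy as np

def greedy_add(access_time, existing, candidates, k, threshold=60.0):
    """Pick k candidate sites that maximize the number of patients within
    `threshold` minutes. access_time[p, s]: travel time from patient p to site s."""
    chosen = list(existing)
    for _ in range(k):
        best_site, best_cov = None, -1
        for s in candidates:
            if s in chosen:
                continue
            cov = np.sum(access_time[:, chosen + [s]].min(axis=1) <= threshold)
            if cov > best_cov:
                best_site, best_cov = s, cov
        chosen.append(best_site)
    return chosen, best_cov

rng = np.random.default_rng(2)
times = rng.uniform(10, 120, size=(1000, 12))   # synthetic patient-to-site travel times
sites, covered = greedy_add(times, existing=[0, 1], candidates=list(range(2, 12)), k=2)
print("selected sites:", sites, "patients covered:", covered)
```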
Optimising the location of antenatal classes.
Tomintz, Melanie N; Clarke, Graham P; Rigby, Janette E; Green, Josephine M
2013-01-01
To combine microsimulation and location-allocation techniques to determine antenatal class locations which minimise the distance travelled from home by potential users. Microsimulation modeling and location-allocation modeling. City of Leeds, UK. Potential users of antenatal classes. An individual-level microsimulation model was built to estimate the number of births for small areas by combining data from the UK Census 2001 and the Health Survey for England 2006. Using this model as a proxy for service demand, we then used a location-allocation model to optimize locations. Different scenarios show the advantage of combining these methods to optimize (re)locating antenatal classes and therefore reduce inequalities in accessing services for pregnant women. Use of these techniques should lead to better use of resources by allowing planners to identify optimal locations of antenatal classes which minimise women's travel. These results are especially important for health-care planners tasked with the difficult issue of targeting scarce resources in a cost-efficient, but also effective or accessible, manner. Copyright © 2011 Elsevier Ltd. All rights reserved.
Design and optimization of a self-deploying PV tent array
NASA Astrophysics Data System (ADS)
Colozza, Anthony J.
A study was performed to design a self-deploying tent shaped PV (photovoltaic) array and optimize the design for maximum specific power. Each structural component of the design was analyzed to determine the size necessary to withstand the various forces it would be subjected to. Through this analysis the component weights were determined. An optimization was performed to determine the array dimensions and blanket geometry which produce the maximum specific power for a given PV blanket. This optimization was performed for both Lunar and Martian environmental conditions. The performance specifications for the array at both locations and with various PV blankets were determined.
NASA Astrophysics Data System (ADS)
Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji
2017-12-01
Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterizations, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called a mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations compared to the existing tsunami observation networks.
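A compact sketch of the first stage described above: compute EOF spatial modes of simulated waveforms by SVD and take their extrema as initial gauge candidates. The synthetic data and number of modes are placeholders; the mesh adaptive direct search refinement is only indicated in a comment.

```python
import numpy as np

# X: simulated tsunami waveform "snapshots" (rows = scenarios/time samples,
# columns = candidate offshore grid points). EOF spatial modes come from an SVD
# of the mean-removed data; extrema of the leading modes suggest gauge locations.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 500))                 # placeholder for simulated data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
n_modes = 5
candidates = set()
for mode in Vt[:n_modes]:
    candidates.add(int(np.argmax(mode)))        # positive extremum
    candidates.add(int(np.argmin(mode)))        # negative extremum
print("initial observation points from EOF extrema:", sorted(candidates))
# A derivative-free search (e.g., mesh adaptive direct search) would then prune
# redundant points from this candidate set, as described in the abstract.
```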
NASA Astrophysics Data System (ADS)
Wu, Shanhua; Yang, Zhongzhen
2018-07-01
This paper aims to optimize the locations of manufacturing industries in the context of economic globalization by proposing a bi-level programming model which integrates the location optimization model with the traffic assignment model. In the model, the transport network is divided into subnetworks of raw materials and products, respectively. The upper-level model is used to determine the locations of industries and the OD matrices of raw materials and products. The lower-level model is used to calculate the attributes of traffic flow under given OD matrices. To solve the model, a genetic algorithm is designed. The proposed method is tested using the Chinese steel industry as an example. The results indicate that the proposed method can help decision-makers implement location decisions for manufacturing industries effectively.
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs, rather than minimizing the residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
Kim, Nam-Hoon; Hwang, Jin Hwan; Cho, Jaegab; Kim, Jae Seong
2018-06-04
The characteristics of an estuary are determined by various factors such as tide, waves, and river discharge, which also control the water quality of the estuary. Therefore, detecting changes in these characteristics is critical for managing environmental quality and pollution, and so the monitoring locations should be selected carefully. The present study proposes a framework to deploy monitoring systems based on a graphical method of spatial and temporal optimization. With well-validated numerical simulation results, the monitoring locations are determined to capture the changes of water quality and pollutants depending on the variations of tide, current and freshwater discharge. The deployment strategy to find the appropriate monitoring locations is designed with a constrained optimization method, which finds solutions by constraining the objective function to the feasible regions. The objective and constraint functions are constructed with an interpolation technique such as objective analysis. Even with a smaller number of monitoring locations, the present method performs equivalently to an arbitrarily and evenly deployed monitoring system. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Raei, Ehsan; Nikoo, Mohammad Reza; Pourshahabi, Shokoufeh
2017-08-01
In the present study, a BIOPLUME III simulation model is coupled with a non-dominated sorting genetic algorithm (NSGA-II) based model for optimal design of an in situ groundwater bioremediation system, considering preferences of stakeholders. The Ministry of Energy (MOE), Department of Environment (DOE), and National Disaster Management Organization (NDMO) are three stakeholders in the groundwater bioremediation problem in Iran. Based on the preferences of these stakeholders, the multi-objective optimization model tries to minimize: (1) cost; (2) the sum of contaminant concentrations that violate the standard; (3) contaminant plume fragmentation. The NSGA-II multi-objective optimization method gives Pareto-optimal solutions. A compromised solution is determined using fallback bargaining with impasse to achieve a consensus among the stakeholders. In this study, two different approaches are investigated and compared based on two different domains for locations of injection and extraction wells. In the first approach, a limited number of predefined locations is considered according to previous similar studies. In the second approach, all possible points in the study area are investigated to find the optimal locations, arrangement, and flow rates of injection and extraction wells. Involvement of the stakeholders, investigating all possible points instead of a limited number of locations for wells, and minimizing the contaminant plume fragmentation during bioremediation are new innovations in this research. Besides, the simulation period is divided into smaller time intervals for more efficient optimization. The Image Processing Toolbox in MATLAB® software is utilized for calculation of the third objective function. In comparison with previous studies, cost is reduced using the proposed methodology. Dispersion of the contaminant plume is reduced in both presented approaches using the third objective function. Considering all possible points in the study area for determining the optimal locations of the wells in the second approach leads to more desirable results, i.e. decreasing the contaminant concentrations to a standard level and a 20% to 40% cost reduction.
Optimal segmentation and packaging process
Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.
1999-01-01
A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated, facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model addressed in this study simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and retailers to located DCs, and corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers the carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The aforementioned optimal model was solved using commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory level, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer.
Yuan, Liming; Smith, Alex C
In this study, computational fluid dynamics (CFD) modeling was conducted to optimize gas sampling locations for the early detection of spontaneous heating in longwall gob areas. Initial simulations were carried out to predict carbon monoxide (CO) concentrations at various regulators in the gob using a bleeder ventilation system. Measured CO concentration values at these regulators were then used to calibrate the CFD model. The calibrated CFD model was used to simulate CO concentrations at eight sampling locations in the gob using a bleederless ventilation system to determine the optimal sampling locations for early detection of spontaneous combustion.
Deconvolution methods and systems for the mapping of acoustic sources from phased microphone arrays
NASA Technical Reports Server (NTRS)
Brooks, Thomas F. (Inventor); Humphreys, Jr., William M. (Inventor)
2010-01-01
A method and system for mapping acoustic sources determined from a phased microphone array. A plurality of microphones are arranged in an optimized grid pattern including a plurality of grid locations thereof. A linear configuration of N equations and N unknowns can be formed by accounting for a reciprocal influence of one or more beamforming characteristics thereof at varying grid locations among the plurality of grid locations. A full-rank equation derived from the linear configuration of N equations and N unknowns can then be iteratively determined. Full rank can be attained by the solution requirement of the positivity constraint, equivalent to the physical assumption of statistically independent noise sources at each of the N locations. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with the phased microphone array in order to compile an output presentation thereof, thereby removing the beamforming characteristics from the resulting output presentation.
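The iterative solution with a positivity constraint described above can be illustrated with a Gauss-Seidel-style sweep in the spirit of DAMAS-type deconvolution; this is a sketch, not the patented algorithm, and the synthetic point-spread matrix and source strengths are assumptions.

```python
import numpy as np

def damas_like(A, b, n_sweeps=100):
    """Gauss-Seidel-style iteration for A x = b with x >= 0.
    A[m, n]: beamformer response at grid point m to a unit source at grid point n.
    b[m]: measured beamform map. Returns non-negative source strengths x."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(n):
            r = b[i] - A[i, :] @ x + A[i, i] * x[i]
            x[i] = max(r / A[i, i], 0.0)        # positivity constraint
    return x

rng = np.random.default_rng(4)
n = 30
A = np.eye(n) + 0.1 * rng.random((n, n))        # synthetic array point-spread matrix
x_true = np.zeros(n)
x_true[[5, 17]] = [1.0, 0.5]                    # two assumed sources
b = A @ x_true
print("recovered peaks at grid points:", np.argsort(-damas_like(A, b))[:2])
```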
Modelling optimal location for pre-hospital helicopter emergency medical services.
Schuurman, Nadine; Bell, Nathaniel J; L'Heureux, Randy; Hameed, Syed M
2009-05-09
Increasing the range and scope of early activation/auto launch helicopter emergency medical services (HEMS) may alleviate unnecessary injury mortality that disproportionately affects rural populations. To date, attempts to develop a quantitative framework for the optimal location of HEMS facilities have been absent. Our analysis used five years of critical care data from tertiary health care facilities, spatial data on origin of transport and accurate road travel time catchments for tertiary centres. A location optimization model was developed to identify where the expansion of HEMS would cover the greatest population among those currently underserved. The protocol was developed using geographic information systems (GIS) to measure populations, distances and accessibility to services. Our model determined Royal Inland Hospital (RIH) was the optimal site for an expanded HEMS - based on denominator population, distance to services and historical usage patterns. GIS based protocols for location of emergency medical resources can provide supportive evidence for allocation decisions - especially when resources are limited. In this study, we were able to demonstrate conclusively that a logical choice exists for location of additional HEMS. This protocol could be extended to location analysis for other emergency and health services.
NASA Astrophysics Data System (ADS)
Longting, M.; Ye, S.; Wu, J.
2014-12-01
Identifying and removing DNAPL sources in an aquifer system is vital for rendering remediation successful and lowering the remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search strategy, and determines optimal sampling locations that are selected according to the reduction in overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume is updated using a Kalman filter. The updated plume is then compared to the concentration fields that emanate from each individual potential source using a fuzzy set technique. The comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. Considering our site case, some specific modifications and work have been done as follows. Random hydraulic conductivity (K) fields are generated after fitting the measured K data to a variogram model. The locations of potential sources, which are given initial weights, are targeted based on the field survey, with multiple potential source locations around the workshops and wastewater basin. Considering the short history (1999-2010) of manufacturing optical brightener PF at the site, and the existing sampling data, a preliminary source strength is then estimated, which will be optimized by the simplex method or a GA later. The whole algorithm will then guide optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference: [1] Dokou, Z., and G. F. Pinder. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556. Acknowledgement: Funding supported by the National Natural Science Foundation of China (No. 41030746, 40872155) and DuPont Company is appreciated.
A firefly algorithm for solving competitive location-design problem: a case study
NASA Astrophysics Data System (ADS)
Sadjadi, Seyed Jafar; Ashtiani, Milad Gorji; Ramezanian, Reza; Makui, Ahmad
2016-12-01
This paper aims at determining the optimal number of new facilities, as well as their optimal locations and design levels, under a budget constraint in a competitive environment using a novel hybrid continuous and discrete firefly algorithm. A real-world application of locating new chain stores in the city of Tehran, Iran, is used and the results are analyzed. In addition, several examples have been solved to evaluate the efficiency of the proposed model and algorithm. The results demonstrate that the proposed method provides good-quality solutions for the test problems.
NASA Astrophysics Data System (ADS)
Viswamurthy, S. R.; Ganguli, Ranjan
2007-03-01
This study aims to determine optimal locations of dual trailing-edge flaps to achieve minimum hub vibration levels in a helicopter, while incurring low penalty in terms of required trailing-edge flap control power. An aeroelastic analysis based on finite elements in space and time is used in conjunction with an optimal control algorithm to determine the flap time history for vibration minimization. The reduced hub vibration levels and required flap control power (due to flap motion) are the two objectives considered in this study and the flap locations along the blade are the design variables. It is found that second order polynomial response surfaces based on the central composite design of the theory of design of experiments describe both objectives adequately. Numerical studies for a four-bladed hingeless rotor show that both objectives are more sensitive to outboard flap location compared to the inboard flap location by an order of magnitude. Optimization results show a disjoint Pareto surface between the two objectives. Two interesting design points are obtained. The first design gives 77 percent vibration reduction from baseline conditions (no flap motion) with a 7 percent increase in flap power compared to the initial design. The second design yields 70 percent reduction in hub vibration with a 27 percent reduction in flap power from the initial design.
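A small numpy sketch of fitting a second-order polynomial response surface in two design variables (standing in for the inboard and outboard flap locations) and searching it for a minimum; the sample points and response values are made up for illustration, not taken from the study.

```python
import numpy as np

def quad_design_matrix(x1, x2):
    """Second-order polynomial basis in two design variables
    (here: inboard and outboard flap locations)."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Hypothetical central-composite-style samples of (inboard, outboard) locations
# and a corresponding objective (e.g., a hub vibration index) from an aeroelastic code.
x1 = np.array([0.6, 0.6, 0.8, 0.8, 0.7, 0.7, 0.55, 0.85, 0.7])
x2 = np.array([0.85, 0.95, 0.85, 0.95, 0.9, 0.9, 0.9, 0.9, 0.8])
f  = np.array([1.2, 0.9, 1.0, 0.6, 0.8, 0.8, 1.1, 0.7, 1.3])   # made-up responses

beta, *_ = np.linalg.lstsq(quad_design_matrix(x1, x2), f, rcond=None)

# Evaluate the fitted surface on a grid to locate the approximate minimum.
g1, g2 = np.meshgrid(np.linspace(0.55, 0.85, 61), np.linspace(0.8, 0.95, 31))
surf = quad_design_matrix(g1.ravel(), g2.ravel()) @ beta
k = np.argmin(surf)
print("approximate optimum (inboard, outboard):", g1.ravel()[k], g2.ravel()[k])
```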
The study on the Layout of the Charging Station in Chengdu
NASA Astrophysics Data System (ADS)
Cai, yun; Zhang, wanquan; You, wei; Mao, pan
2018-03-01
In this paper, the factors affecting the layout of electric vehicle charging stations are comprehensively analyzed, considering the principles of charging station layout. Queuing theory from operations research is used to establish a mathematical model, and the number of sites is optimized based on the principles of saving resources and owner convenience. Central place theory is combined with this model to determine the service radius, the gravity method is used to determine the initial locations, and finally the center-of-gravity method is used to locate the charging stations.
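A minimal sketch of the gravity/center-of-gravity step mentioned above: a demand-weighted centroid as the initial location, optionally refined with a Weiszfeld iteration. The demand points and weights are invented for illustration, and the queuing-theory sizing step is not shown.

```python
import numpy as np

def center_of_gravity(xy, demand):
    """Demand-weighted centroid: the classical first guess for a facility location."""
    w = demand / demand.sum()
    return w @ xy

def iterate_weiszfeld(xy, demand, n_iter=50):
    """Refine the centroid toward the demand-weighted median (Weiszfeld iteration),
    one common variant of the gravity method."""
    p = center_of_gravity(xy, demand)
    for _ in range(n_iter):
        d = np.linalg.norm(xy - p, axis=1) + 1e-9   # avoid division by zero
        w = demand / d
        p = (w[:, None] * xy).sum(axis=0) / w.sum()
    return p

xy = np.array([[2.0, 3.0], [8.0, 1.0], [5.0, 7.0], [9.0, 6.0]])   # demand points
demand = np.array([120.0, 60.0, 200.0, 90.0])                     # e.g., EVs per day
print("initial location:", center_of_gravity(xy, demand))
print("refined location:", iterate_weiszfeld(xy, demand))
```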
Site Selection and Resource Allocation of Oil Spill Emergency Base for Offshore Oil Facilities
NASA Astrophysics Data System (ADS)
Li, Yunbin; Liu, Jingxian; Wei, Lei; Wu, Weihuang
2018-02-01
Based on the analysis of historical data on oil spill accidents in the Bohai Sea, this paper discretizes the oil spill sources into a limited number of spill points. According to the probability of oil spill risk, the demand for salvage forces at each oil spill point is evaluated. For the specific locations of the rescue bases around the Bohai Sea, a cost-benefit analysis is conducted to determine the total disaster cost for each rescue base. Based on the relationship between the oil spill points and the rescue sites, a multi-objective optimization location model for oil spill rescue bases in the Bohai Sea region is established. A genetic algorithm is used to solve the optimization problem and to determine the optimal emergency rescue base configuration and the allocation ratio of emergency resources.
Ramamoorthy, Ambika; Ramachandran, Rajeswari
2016-01-01
Power grids are becoming smarter along with technological development. The benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies are made on reconfiguring a conventional network into a smart grid. Among all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving voltage stability within the framework of system operation and security constraints in a transmission system. The locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature-inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In the first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In the second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with the number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and both P and Q) are also analyzed and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557
Optimal segmentation and packaging process
Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.
1999-08-10
A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated, facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.
Optimal experimental design for placement of boreholes
NASA Astrophysics Data System (ADS)
Padalkina, Kateryna; Bücker, H. Martin; Seidler, Ralf; Rath, Volker; Marquart, Gabriele; Niederau, Jan; Herty, Michael
2014-05-01
Drilling for deep resources is an expensive endeavor. Among the many problems, finding the optimal drilling location for boreholes is one of the most challenging questions. We contribute to this discussion by using a simulation based assessment of possible future borehole locations. We study the problem of finding a new borehole location in a given geothermal reservoir in terms of a numerical optimization problem. In a geothermal reservoir the temporal and spatial distribution of temperature and hydraulic pressure may be simulated using the coupled differential equations for heat transport and mass and momentum conservation for Darcy flow. Within this model the permeability and thermal conductivity are dependent on the geological layers present in the subsurface model of the reservoir. In general, those values involve some uncertainty, making it difficult to predict the actual heat source in the ground. Within optimal experimental design, the question is at which location and to which depth to drill the borehole in order to estimate conductivity and permeability with minimal uncertainty. We introduce a measure for computing the uncertainty based on simulations of the coupled differential equations. The measure is based on the Fisher information matrix of temperature data obtained through the simulations. We assume that the temperature data is available within the full borehole. A minimization of the measure representing the uncertainty in the unknown permeability and conductivity parameters is performed to determine the optimal borehole location. We present the theoretical framework as well as numerical results for several 2D subsurface models including up to six geological layers. Also, the effect of unknown layers on the introduced measure is studied. Finally, to obtain a more realistic estimate of optimal borehole locations, we couple the optimization to a cost model for deep drilling problems.
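The Fisher-information-based selection can be sketched as follows: for each candidate borehole, assemble the sensitivity (Jacobian) of the simulated temperature profile to the unknown parameters and pick the candidate that minimizes the trace of the inverse Fisher information (an A-optimality-type measure, assumed here for illustration). The Jacobians and noise level below are synthetic stand-ins for output of the coupled simulator.

```python
import numpy as np

def fisher_information(J, sigma=1.0):
    """J[m, p]: sensitivity of the m-th temperature reading in a candidate borehole
    to the p-th unknown parameter (e.g., a layer permeability or conductivity)."""
    return (J.T @ J) / sigma**2

def a_optimal_location(jacobians, sigma=1.0):
    """Pick the candidate borehole whose data minimize the summed parameter
    variance, i.e. the trace of the inverse Fisher information."""
    scores = []
    for J in jacobians:
        F = fisher_information(J, sigma)
        scores.append(np.trace(np.linalg.inv(F)))
    return int(np.argmin(scores)), scores

rng = np.random.default_rng(5)
# Three hypothetical candidate boreholes, 40 depth samples, 4 unknown parameters.
jacobians = [rng.normal(scale=s, size=(40, 4)) for s in (0.5, 1.0, 2.0)]
best, scores = a_optimal_location(jacobians)
print("uncertainty scores:", np.round(scores, 3), "-> choose borehole", best)
```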
Optimizing Classroom Acoustics Using Computer Model Studies.
ERIC Educational Resources Information Center
Reich, Rebecca; Bradley, John
1998-01-01
Investigates conditions relating to the maximum useful-to-detrimental sound ratios present in classrooms and determining the optimum conditions for speech intelligibility. Reveals that speech intelligibility is more strongly influenced by ambient noise levels and that the optimal location for sound absorbing material is on a classroom's upper…
NASA Astrophysics Data System (ADS)
Abdeh-Kolahchi, A.; Satish, M.; Datta, B.
2004-05-01
A state-of-the-art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with a genetic algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models included as separate modules within the Groundwater Modeling System (GMS) are used to develop three dimensional groundwater flow and contamination transport simulations. The groundwater flow and contamination simulation results are introduced as input to the optimization model, using a genetic algorithm (GA) to identify the optimal groundwater monitoring network design, based on several candidate monitoring locations. The groundwater monitoring network design model uses genetic algorithms with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique, which is capable of finding the optimal solution for many complex problems. In this study, the GA approach, capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions, will be discussed. However, to ensure the efficiency of the solution process and global optimality of the solution obtained using GA, it is necessary that appropriate GA parameter values be specified. A sensitivity analysis of genetic algorithm parameters such as random number, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design.
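A toy binary GA of the kind described above: each chromosome selects a fixed number of candidate wells, and fitness is the fraction of simulated plume realizations detected by at least one selected well. The detection matrix, population size, crossover and mutation rates, and elitism scheme are all illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(6)
n_candidates, n_realizations, n_wells = 20, 100, 4
# detect[r, c] = 1 if plume realization r would be detected at candidate well c
detect = (rng.random((n_realizations, n_candidates)) < 0.15).astype(int)

def fitness(chrom):
    """Fraction of plume realizations detected by at least one selected well,
    with infeasible chromosomes (wrong well count) heavily penalized."""
    if chrom.sum() != n_wells:
        return -1.0
    return detect[:, chrom.astype(bool)].max(axis=1).mean()

def random_chrom():
    c = np.zeros(n_candidates)
    c[rng.choice(n_candidates, n_wells, replace=False)] = 1
    return c

pop = [random_chrom() for _ in range(40)]
for _ in range(60):                                  # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                 # elitism
    children = []
    while len(children) < 30:
        a, b = rng.choice(10, 2, replace=False)
        child = np.where(rng.random(n_candidates) < 0.5, elite[a], elite[b])  # uniform crossover
        flip = rng.random(n_candidates) < 0.05                                 # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = elite + children

best = max(pop, key=fitness)
print("selected wells:", np.flatnonzero(best), "detection probability:", fitness(best))
```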
NASA Astrophysics Data System (ADS)
Mousavi, Monireh Sadat; Ashrafi, Khosro; Motlagh, Majid Shafie Pour; Niksokhan, Mohhamad Hosein; Vosoughifar, HamidReza
2018-02-01
In this study, a coupled method for simulating the flow pattern, based on computational fluid dynamics (CFD), with an optimization technique using genetic algorithms is presented to determine the optimal location and number of sensors in an enclosed residential complex parking garage in Tehran. The main objective of this research is cost reduction and maximum coverage with regard to the distribution of existing concentrations in different scenarios. In this study, considering all the different scenarios for simulating the pollution distribution using CFD has been challenging due to the extent of the parking garage and the number of cars present. To solve this problem, some scenarios have been selected randomly. The maximum concentrations of these scenarios are then chosen for performing the optimization. The CFD simulation outputs are inserted as input in the optimization model using a genetic algorithm. The obtained results give the optimal number and locations of sensors.
NASA Astrophysics Data System (ADS)
Apribowo, Chico Hermanu Brillianto; Ibrahim, Muhammad Hamka; Wicaksono, F. X. Rian
2018-02-01
The growing load burden and the complexity of the power system have had an impact on the need for optimization of power system operation. Optimal power flow (OPF) with optimal placement and rating of a thyristor controlled series capacitor (TCSC) is an effective solution used to determine the economic operating cost of the plant and to regulate the power flow in the power system. The purpose of this study is to minimize the total cost of generation by optimally selecting the location and rating of the TCSC using genetic algorithm-design of experiments techniques (GA-DOE). Simulation on the Java-Bali 500 kV system with five TCSC compensators shows that the proposed method can reduce the generation cost by 0.89% compared to OPF without TCSC.
Trip optimization system and method for a train
Kumar, Ajith Kuttannair; Shaffer, Glenn Robert; Houpt, Paul Kenneth; Movsichoff, Bernardo Adrian; Chan, David So Keung
2017-08-15
A system for operating a train having one or more locomotive consists with each locomotive consist comprising one or more locomotives, the system including a locator element to determine a location of the train, a track characterization element to provide information about a track, a sensor for measuring an operating condition of the locomotive consist, a processor operable to receive information from the locator element, the track characterizing element, and the sensor, and an algorithm embodied within the processor having access to the information to create a trip plan that optimizes performance of the locomotive consist in accordance with one or more operational criteria for the train.
Determining on-fault earthquake magnitude distributions from integer programming
NASA Astrophysics Data System (ADS)
Geist, Eric L.; Parsons, Tom
2018-02-01
Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
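A much-reduced sketch of this kind of binary integer program, using PuLP as a generic MILP front end (an assumed dependency, not the solver used in the study): a toy catalog is assigned to three hypothetical faults so that slip-rate misfits are minimized, with rupture-length feasibility and slip-rate uncertainty bounds as constraints. The magnitude-slip and magnitude-length scalings, fault lengths, and target rates are invented for illustration.

```python
import numpy as np
import pulp  # assumed available; any MILP front end would do

rng = np.random.default_rng(7)
T_years = 4000.0
n_events, n_faults = 120, 3
mags = np.minimum(5.0 + rng.exponential(0.5, n_events), 7.8)   # toy Gutenberg-Richter-like catalog
slip_m = 10 ** (0.5 * mags - 3.4)          # assumed slip-magnitude scaling (m)
rup_km = 10 ** (0.6 * mags - 2.4)          # assumed rupture-length scaling (km)
fault_len_km = np.array([300.0, 120.0, 60.0])
target = np.array([5.0, 3.0, 1.0])         # target slip rates (mm/yr)
tol = 0.8 * target                         # assumed slip-rate uncertainty bounds

prob = pulp.LpProblem("onfault_magnitudes", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(n_events), range(n_faults)), cat="Binary")
d = pulp.LpVariable.dicts("d", range(n_faults), lowBound=0)
prob += pulp.lpSum(d[f] for f in range(n_faults))              # total slip-rate misfit

for e in range(n_events):
    allowed = [f for f in range(n_faults) if fault_len_km[f] >= rup_km[e]]
    prob += pulp.lpSum(x[e][f] for f in allowed) == 1          # one long-enough fault per event
    for f in range(n_faults):
        if f not in allowed:
            prob += x[e][f] == 0

for f in range(n_faults):
    rate = pulp.lpSum((1000.0 * slip_m[e] / T_years) * x[e][f] for e in range(n_events))  # mm/yr
    prob += rate - target[f] <= d[f]
    prob += target[f] - rate <= d[f]
    prob += rate <= target[f] + tol[f]     # explicit upper bound from slip-rate uncertainty
    prob += rate >= target[f] - tol[f]     # explicit lower bound

prob.solve(pulp.PULP_CBC_CMD(msg=0))
counts = [sum(round(pulp.value(x[e][f])) for e in range(n_events)) for f in range(n_faults)]
print("status:", pulp.LpStatus[prob.status], "events per fault:", counts)
```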
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Mather, Barry A; Cho, Gyu-Jung
Capacitor banks have been generally installed and utilized to support distribution voltage during periods of higher load or on longer, higher impedance feeders. Installations of distributed energy resources in distribution systems are rapidly increasing, and many of these generation resources have variable and uncertain power output. These generators can significantly change the voltage profile across a feeder, and therefore when a new capacitor bank is needed, analysis of the optimal capacity and location of the capacitor bank is required. In this paper, we model a particular distribution system including essential equipment. An optimization method is adopted to determine the best capacity and location sets of the newly installed capacitor banks, in the presence of distributed solar power generation. Finally we analyze the optimal capacitor bank configuration through the optimization and simulation results.
Deconvolution Methods and Systems for the Mapping of Acoustic Sources from Phased Microphone Arrays
NASA Technical Reports Server (NTRS)
Humphreys, Jr., William M. (Inventor); Brooks, Thomas F. (Inventor)
2012-01-01
Mapping coherent/incoherent acoustic sources as determined from a phased microphone array. A linear configuration of equations and unknowns is formed by accounting for a reciprocal influence of one or more cross-beamforming characteristics thereof at varying grid locations among the plurality of grid locations. An equation derived from the linear configuration of equations and unknowns can then be iteratively determined. The equation can be attained by the solution requirement of a constraint equivalent to the physical assumption that the coherent sources have only in-phase coherence. The size of the problem may then be reduced using zoning methods. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with a phased microphone array (microphones arranged in an optimized grid pattern including a plurality of grid locations) in order to compile an output presentation thereof, thereby removing beamforming characteristics from the resulting output presentation.
Optimal Sensor/Actuator Locations for Active Structural Acoustic Control
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Palumbo, Daniel L.; Kincaid, Rex K.
1998-01-01
Researchers at NASA Langley Research Center have extensive experience using active structural acoustic control (ASAC) for aircraft interior noise reduction. One aspect of ASAC involves the selection of optimum locations for microphone sensors and force actuators. This paper explains the importance of sensor/actuator selection, reviews optimization techniques, and summarizes experimental and numerical results. Three combinatorial optimization problems are described. Two involve the determination of the number and position of piezoelectric actuators, and the other involves the determination of the number and location of the sensors. For each case, a solution method is suggested, and typical results are examined. The first case, a simplified problem with simulated data, is used to illustrate the method. The second and third cases are more representative of the potential of the method and use measured data. The three case studies and laboratory test results establish the usefulness of the numerical methods.
Application of particle swarm optimization in path planning of mobile robot
NASA Astrophysics Data System (ADS)
Wang, Yong; Cai, Feng; Wang, Ying
2017-08-01
In order to realize the optimal path planning of a mobile robot in an unknown environment, a particle swarm optimization algorithm using path length as the fitness function is proposed. The location of the global optimal particle is determined by the minimum fitness value, and the robot moves along the points of the optimal particles to the target position. The process of moving to the target point is simulated in MATLAB R2014a. Compared with the standard particle swarm optimization algorithm, the simulation results show that this method can effectively avoid all obstacles and obtain the optimal path.
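A generic PSO path-planning sketch in the spirit of the abstract above (not the authors' MATLAB implementation): particles encode a few intermediate waypoints, and the fitness is path length plus an obstacle-proximity penalty evaluated at the waypoints for brevity. Start, goal, obstacles, and all PSO coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = np.array([[4.0, 4.5], [6.5, 7.0]])   # assumed circular obstacles
radius = 1.2

def fitness(waypoints):
    """Path length through intermediate waypoints plus a penalty for waypoints
    that pass too close to any obstacle (a common PSO path-planning objective)."""
    pts = np.vstack([start, waypoints.reshape(-1, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    d = np.linalg.norm(pts[:, None, :] - obstacles[None, :, :], axis=2)
    penalty = np.sum(np.maximum(radius - d, 0.0)) * 50.0
    return length + penalty

n_particles, dims = 30, 6                    # 3 intermediate waypoints in 2-D
x = rng.uniform(0, 10, (n_particles, dims))  # particle positions
v = np.zeros_like(x)                         # particle velocities
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
g = pbest[np.argmin(pbest_f)].copy()         # global best

w, c1, c2 = 0.7, 1.5, 1.5                    # standard PSO coefficients
for _ in range(200):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    f = np.array([fitness(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()

print("best path cost:", fitness(g))
print("waypoints:", g.reshape(-1, 2))
```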
Longin, C Friedrich H; Utz, H Friedrich; Reif, Jochen C; Schipprack, Wolfgang; Melchinger, Albrecht E
2006-03-01
Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation for the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) regarding two optimization criteria, the selection gain deltaG(k) and the probability P(k) of identifying superior genotypes, (2) compare both optimization criteria including their standard deviations (SDs), and (3) investigate the influence of production costs of DHs on the optimum allocation. For different budgets, number of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined by using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines in a small number of test locations in the first year and (2) a small number of the selected superior lines in a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P(k) of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P(k) > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P(k)(5%) was similar to that for deltaG(k), the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on values of the optimization criteria.
Determination of mixed mode (I/II) SIFs of cracked orthotropic materials
NASA Astrophysics Data System (ADS)
Chakraborty, D.; Chakraborty, Debaleena; Murthy, K. S. R. K.
2018-05-01
Strain gage techniques have been successfully but sparsely used for the determination of stress intensity factors (SIFs) of orthotropic materials. For mode I cases, few works have been reported on the strain gage based determination of mode I SIF of orthotropic materials. However, for mixed mode (I/II) cases, neither a theoretical development of a strain gage based technique nor any recommended guidelines for minimum number of strain gages and their locations were reported in the literature for determination of mixed mode SIFs. The authors for the first time came up with a theoretical proposition to successfully use strain gages for determination of mixed mode SIFs of orthotropic materials [1]. Based on these formulations, the present paper discusses a finite element (FE) based numerical simulation of the proposed strain gage technique employing [902/0]10S carbon-epoxy laminates with a slant edge crack. An FE based procedure has also been presented for determination of the optimal radial locations of the strain gages apriori to actual experiments. To substantiate the efficacy of the proposed technique, numerical simulations for strain gage based determination of mixed mode SIFs have been conducted. Results show that it is possible to accurately determine the mixed mode SIFs of orthotropic laminates when the strain gages are placed within the optimal radial locations estimated using the present formulation.
Constituents of Quality of Life and Urban Size
ERIC Educational Resources Information Center
Royuela, Vicente; Surinach, Jordi
2005-01-01
Do cities have an optimal size? In seeking to answer this question, various theories, including Optimal City Size Theory, the supply-oriented dynamic approach and the city network paradigm, have been forwarded that considered a city's population size as a determinant of location costs and benefits. However, the generalised growth in wealth that…
Vogel, Michael W; Vegh, Viktor; Reutens, David C
2013-05-01
This paper investigates optimal placement of a localized single-axis magnetometer for ultralow field (ULF) relaxometry in view of various sample shapes and sizes. The authors used finite element method for the numerical analysis to determine the sample magnetic field environment and evaluate the optimal location of the single-axis magnetometer. Given the different samples, the authors analysed the magnetic field distribution around the sample and determined the optimal orientation and possible positions of the sensor to maximize signal strength, that is, the power of the free induction decay. The authors demonstrate that a glass vial with flat bottom and 10 ml volume is the best structure to achieve the highest signal out of samples studied. This paper demonstrates the importance of taking into account the combined effects of sensor configuration and sample parameters for signal generation prior to designing and constructing ULF systems with a single-axis magnetometer. Through numerical simulations the authors were able to optimize structural parameters, such as sample shape and size, sensor orientation and location, to maximize the measured signal in ultralow field relaxometry.
Gwak, Jae Ha; Lee, Bo Kyeong; Lee, Won Kyung; Sohn, So Young
2017-03-15
This study proposes a new framework for the selection of optimal locations for green roofs to achieve a sustainable urban ecosystem. The proposed framework selects building sites that can maximize the benefits of green roofs, based not only on the socio-economic and environmental benefits to urban residents, but also on the provision of urban foraging sites for honeybees. The framework comprises three steps. First, building candidates for green roofs are selected considering the building type. Second, the selected building candidates are ranked in terms of their expected socio-economic and environmental effects. The benefits of green roofs are improved energy efficiency and air quality, reduction of urban flood risk and infrastructure improvement costs, reuse of storm water, and creation of space for education and leisure. Furthermore, the estimated cost of installing green roofs is also considered. We employ spatial data to determine the expected effects of green roofs on each building unit, because the benefits and costs may vary depending on the location of the building. This is due to the heterogeneous spatial conditions. In the third step, the final building sites are proposed by solving the maximal covering location problem (MCLP) to determine the optimal locations for green roofs as urban honeybee foraging sites. As an illustrative example, we apply the proposed framework in Seoul, Korea. This new framework is expected to contribute to sustainable urban ecosystems. Copyright © 2016 Elsevier Ltd. All rights reserved.
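As a rough illustration of the third step, the sketch below solves a toy maximal covering location problem (MCLP) by brute force: choose p rooftops so that the demand weight covered within a fixed radius is maximal. The candidate sites, demand points, weights, radius, and p are made-up assumptions; the study's actual formulation is solved over real spatial data.

```python
# Toy maximal covering location problem (MCLP): pick p rooftop sites so that
# the total demand (e.g., foraging benefit) within a coverage radius is maximal.
# Coordinates, weights, radius, and p are illustrative assumptions.
from itertools import combinations
import math

candidates = {"A": (0, 0), "B": (4, 1), "C": (2, 5), "D": (6, 6)}   # candidate roofs
demand = {(1, 1): 10, (3, 2): 7, (2, 4): 5, (5, 5): 8, (7, 7): 4}   # point: weight
radius, p = 3.0, 2

def covered(site, point):
    (sx, sy), (px, py) = candidates[site], point
    return math.hypot(sx - px, sy - py) <= radius

best_cover, best_sites = -1, None
for sites in combinations(candidates, p):
    total = sum(w for pt, w in demand.items() if any(covered(s, pt) for s in sites))
    if total > best_cover:
        best_cover, best_sites = total, sites

print("optimal roofs:", best_sites, "covered demand:", best_cover)
```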
Operations research applications in nuclear energy
NASA Astrophysics Data System (ADS)
Johnson, Benjamin Lloyd
This dissertation consists of three papers; the first is published in Annals of Operations Research, the second is nearing submission to INFORMS Journal on Computing, and the third is the predecessor of a paper nearing submission to Progress in Nuclear Energy. We apply operations research techniques to nuclear waste disposal and nuclear safeguards. Although these fields are different, they allow us to showcase some benefits of using operations research techniques to enhance nuclear energy applications. The first paper, "Optimizing High-Level Nuclear Waste Disposal within a Deep Geologic Repository," presents a mixed-integer programming model that determines where to place high-level nuclear waste packages in a deep geologic repository to minimize heat load concentration. We develop a heuristic that increases the size of solvable model instances. The second paper, "Optimally Configuring a Measurement System to Detect Diversions from a Nuclear Fuel Cycle," introduces a simulation-optimization algorithm and an integer-programming model to find the best, or near-best, resource-limited nuclear fuel cycle measurement system with a high degree of confidence. Given location-dependent measurement method precisions, we (i) optimize the configuration of n methods at n locations of a hypothetical nuclear fuel cycle facility, (ii) find the most important location at which to improve method precision, and (iii) determine the effect of measurement frequency on near-optimal configurations and objective values. Our results correspond to existing outcomes but we obtain them at least an order of magnitude faster. The third paper, "Optimizing Nuclear Material Control and Accountability Measurement Systems," extends the integer program from the second paper to locate measurement methods in a larger, hypothetical nuclear fuel cycle scenario given fixed purchase and utilization budgets. This paper also presents two mixed-integer quadratic programming models to increase the precision of existing methods given a fixed improvement budget and to reduce the measurement uncertainty in the system while limiting improvement costs. We quickly obtain similar or better solutions compared to several intuitive analyses that take much longer to perform.
Cho, Hakyung; Lee, Joo Hyeon
2015-09-01
Smart clothing is a type of wearable device used for ubiquitous health monitoring. It provides comfort and efficiency in vital sign measurements and has been studied and developed in various types of monitoring platforms such as T-shirts and sports bras. However, despite these previous approaches, smart clothing for electrocardiography (ECG) monitoring has encountered a serious shortcoming related to motion artifacts caused by wearer movement. In effect, motion artifacts are one of the major problems in practical implementation of most wearable health-monitoring devices. In the ECG measurements collected by a garment, motion artifacts are usually caused by improper location of the electrode, leading to lack of contact between the electrode and skin with body motion. The aim of this study was to suggest a design for ECG-monitoring clothing contributing to reduction of motion artifacts. Based on clothing science theory, it was assumed in this study that the stability of the electrode in a dynamic state differed depending on the electrode location in an ECG-monitoring garment. Founded on this assumption, effects of 56 electrode positions were determined by sectioning the surface of the garment into grids with 6 cm intervals in the front and back of the bodice. In order to determine the optimal locations of the ECG electrodes from the 56 positions, ECG measurements were collected from 10 participants at every electrode position in the garment while the wearer was in motion. The electrode locations indicating both an ECG measurement rate higher than 80.0 % and a large amplitude during motion were selected as the optimal electrode locations. The results of this analysis show four electrode locations with consistently higher ECG measurement rates and larger amplitudes amongst the 56 locations. These four locations were found to be the least affected by wearer movement in this research. Based on this result, a design of the garment-formed ECG monitoring platform reflecting the optimal positions of the electrode was suggested.
Liao, Zhipeng; Chen, Junning; Li, Wei; Darendeliler, M Ali; Swain, Michael; Li, Qing
2016-06-01
This paper aimed to precisely locate centres of resistance (CRe) of maxillary teeth and investigate optimal orthodontic force by identifying the effective zones of orthodontic tooth movement (OTM) from hydrostatic stress thresholds in the periodontal ligament (PDL). We applied distally-directed tipping and bodily forces ranging from 0.075 N to 3 N (7.5 g to 300 g) onto human maxillary teeth. The hydrostatic stress was quantified from nonlinear finite element analysis (FEA) and compared with normal capillary and systolic blood pressure for driving the tissue remodelling. Two biomechanical stimuli featuring localised and volume-averaged hydrostatic stresses were introduced to describe OTM. Locations of CRe were determined through iterative FEA simulation. Accurate locations of CRes of teeth and ranges of optimal orthodontic forces were obtained. Comparison with clinical results in the literature showed that the volume average of hydrostatic stress in the PDL is a more indicative descriptor of OTM. The optimal orthodontic forces obtained from the in-silico modelling study echoed the clinical results in vivo. A universal moment to force (M/F) ratio is not recommended due to the variation in patients and loading points. Accurate computational determination of CRe location can be applied in practice to facilitate orthodontic treatment. Global measurement of hydrostatic pressure in the PDL better characterised OTM, implying that OTM occurs only when the majority of PDL volume is critically stressed. The FEA results provide new insights into relevant orthodontic biomechanics and help establish optimal orthodontic force for a specific patient. Copyright © 2016 Elsevier Ltd. All rights reserved.
Determining on-fault earthquake magnitude distributions from integer programming
Geist, Eric L.; Parsons, Thomas E.
2018-01-01
Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >106 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
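A hedged sketch of the catalog-construction step: magnitudes are sampled from a truncated Gutenberg-Richter distribution and per-event slip is scaled from magnitude via seismic moment. The b-value, magnitude range, rigidity, and magnitude-area scaling are illustrative assumptions, and the binary location assignment itself is not reproduced here.

```python
# Sketch of building a synthetic Gutenberg-Richter catalogue and scaling slip
# from magnitude, as a stand-in for the catalogue used by the integer program.
# The b-value, magnitude range, rigidity, and area scaling are illustrative.
import numpy as np

rng = np.random.default_rng(1)
b, m_min, m_max, n_events = 1.0, 6.0, 8.0, 5000

# Sample magnitudes from a truncated Gutenberg-Richter (exponential) distribution.
u = rng.random(n_events)
beta = b * np.log(10.0)
m = m_min - np.log(1.0 - u * (1.0 - np.exp(-beta * (m_max - m_min)))) / beta

# Scale seismic moment and mean slip from magnitude (Hanks-Kanamori moment,
# rupture area from a simple magnitude-area relation; constants are assumptions).
M0 = 10.0 ** (1.5 * m + 9.05)             # seismic moment, N*m
area = 10.0 ** (m - 4.0) * 1.0e6          # rupture area, m^2 (illustrative scaling)
mu = 3.0e10                               # rigidity, Pa
slip = M0 / (mu * area)                   # mean slip per event, m

print("events:", n_events, "mean magnitude:", m.mean().round(2),
      "mean slip (m):", slip.mean().round(2))
```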
Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.
1988-01-01
The fusion of probability finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
Alania, M; De Backer, A; Lobato, I; Krause, F F; Van Dyck, D; Rosenauer, A; Van Aert, S
2017-10-01
In this paper, we investigate how precisely the atoms of a small nanocluster can ultimately be located in three dimensions (3D) from a tilt series of images acquired using annular dark field (ADF) scanning transmission electron microscopy (STEM). To this end, we derive an expression for the statistical precision with which the 3D atomic position coordinates can be estimated in a quantitative analysis. Evaluating this statistical precision as a function of the microscope settings also allows us to derive the optimal experimental design. In this manner, the optimal angular tilt range, required electron dose, optimal detector angles, and number of projection images can be determined. Copyright © 2016 Elsevier B.V. All rights reserved.
Li, Kaiming; Guo, Lei; Zhu, Dajiang; Hu, Xintao; Han, Junwei; Liu, Tianming
2013-01-01
Studying connectivities among functional brain regions and the functional dynamics on brain networks has drawn increasing interest. A fundamental issue that affects functional connectivity and dynamics studies is how to determine the best possible functional brain regions or ROIs (regions of interest) for a group of individuals, since the connectivity measurements are heavily dependent on ROI locations. Essentially, identification of accurate, reliable and consistent corresponding ROIs is challenging due to the unclear boundaries between brain regions, variability across individuals, and nonlinearity of the ROIs. In response to these challenges, this paper presents a novel methodology to computationally optimize ROI locations derived from task-based fMRI data for individuals so that the optimized ROIs are more consistent, reproducible and predictable across brains. Our computational strategy is to formulate the individual ROI location optimization as a group variance minimization problem, in which group-wise consistencies in functional/structural connectivity patterns and anatomic profiles are defined as optimization constraints. Our experimental results from multimodal fMRI and DTI data show that the optimized ROIs have significantly improved consistency in structural and functional profiles across individuals. These improved functional ROIs with better consistency could contribute to further study of functional interaction and dynamics in the human brain. PMID:22281931
Design of dry-friction dampers for turbine blades
NASA Technical Reports Server (NTRS)
Ancona, W.; Dowell, E. H.
1983-01-01
A study is conducted of turbine blade forced response, where the blade has been modeled as a cantilever beam with a general dry friction damper attached, and where the minimization of blade root strain as the excitation frequency is varied over a given range is the criterion for the evaluation of the effectiveness of the dry friction damper. Attempts are made to determine the damper location best satisfying the design criterion, together with the best damping force (assuming that the damper location has been fixed). Results suggest that there need not be an optimal value for the damping force, or an optimal location for the dry friction damper, although there is a range of values which should be avoided.
Optimizing the location of ambulances in Tijuana, Mexico.
Dibene, Juan Carlos; Maldonado, Yazmin; Vera, Carlos; de Oliveira, Mauricio; Trujillo, Leonardo; Schütze, Oliver
2017-01-01
In this work we report on modeling the demand for Emergency Medical Services (EMS) in Tijuana, Baja California, Mexico, followed by the optimization of the location of the ambulances for the Red Cross of Tijuana (RCT), which is by far the largest provider of EMS services in the region. We used data from more than 10,000 emergency calls surveyed during the year 2013 to model and classify the demand for EMS in different scenarios that provide different perspectives on the demand throughout the city, considering such factors as the time of day, work and off-days. A modification of the Double Standard Model (DSM) is proposed and solved to determine a common robust solution to the ambulance location problem that simultaneously satisfies all specified constraints in all demand scenarios selecting from a set of almost 1000 possible base locations. The resulting optimization problems are solved using integer linear programming and the solutions are compared with the locations currently used by the Red Cross. Results show that demand coverage and response times can be substantially improved by relocating the current bases without the need for additional resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
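The sketch below illustrates, on a toy instance, the double-coverage idea behind the Double Standard Model: every demand point must be reachable within a relaxed radius, while the demand covered at least twice within a tighter radius is maximized. The base locations, demand weights, radii, and number of bases are assumptions; the paper's modified DSM is solved as an integer linear program over almost 1000 candidate bases.

```python
# Toy version of the double-coverage idea behind the Double Standard Model:
# choose p bases so every demand point is covered within r2, while maximizing
# the demand covered by at least two bases within r1. All data are assumptions.
from itertools import combinations
import math

bases = {"B1": (0, 0), "B2": (3, 4), "B3": (6, 1), "B4": (8, 6)}
demand = {(1, 1): 12, (4, 3): 9, (7, 2): 6, (7, 6): 10}
r1, r2, p = 3.5, 6.0, 2

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

best = None
for chosen in combinations(bases, p):
    # feasibility: every demand point within r2 of at least one chosen base
    if not all(any(dist(bases[s], d) <= r2 for s in chosen) for d in demand):
        continue
    twice = sum(w for d, w in demand.items()
                if sum(dist(bases[s], d) <= r1 for s in chosen) >= 2)
    if best is None or twice > best[0]:
        best = (twice, chosen)

print("chosen bases:", best[1], "demand double-covered within r1:", best[0])
```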
Evaluating scenarios of locations and capacities for vaccine storage in Nigeria.
Hirsh Bar Gai, Dor; Graybill, Zachary; Voevodsky, Paule; Shittu, Ekundayo
2018-06-07
Many developing countries still face the prevalence of preventable childhood diseases because their vaccine supply chain systems are inadequate by design or structure to meet the needs of their populations. Currently, Nigeria is evaluating options in the redesign of the country's vaccine supply chain. Using Nigeria as a case study, the objective is to evaluate different regional supply chain scenarios to identify the cost minimizing optimal hub locations and storage capacities for doses of different vaccines to achieve a 100% fill rate. First, we employ a shortest-path optimization routine to determine hub locations. Second, we develop a total cost minimizing routine based on stochastic optimization to determine the optimal capacities at the hubs. This model uses vaccine supply data between 2011 and 2014 provided by Nigeria's National Primary Health Care Development Agency (NPHCDA) on Tuberculosis, Polio, Yellow Fever, Tetanus Toxoid, and Hepatitis B. We find that a two-regional system with no central hub (NC2) cuts costs by 23% to achieve a 100% fill rate when compared to optimizing the existing chain of six regions with a central hub (EC6). While the government's leading redesign alternative - no central three-hub system (Gov NC3) - reduces costs by 21% compared with the current EC6, it is more expensive than our NC2 system by 3%. In terms of capacity increases, optimizing the current system requires 42% more capacity than our NC2 system. Although the proposed Gov NC3 system requires the least increase in storage capacity, it requires the greatest travel distance to achieve 100% coverage, about 15% more than our NC2. Overall, we find that improving the current system with a central hub, and all its variants, even with optimal regional hub locations, requires more storage capacity and is costlier than systems without a central hub. While this analysis prescribes the no central hub with two regions (NC2) as the least cost scenario, it is imperative to note that other configurations have benefits and comparative tradeoffs. Our approach and results offer some guidance for future vaccine supply chain redesigns in countries with similar layouts to Nigeria's. Copyright © 2018 Elsevier Ltd. All rights reserved.
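A minimal sketch of the first (shortest-path) step, under assumed data: Dijkstra's algorithm computes distances from each candidate hub over a toy road graph, and the hub minimizing demand-weighted distance is selected. The graph, demands, and candidates are illustrative, and the stochastic capacity-sizing stage is omitted.

```python
# Sketch of the shortest-path step for hub selection: run Dijkstra from each
# candidate hub over a toy road network and choose the hub that minimizes the
# demand-weighted travel distance. The graph, demands, and candidate hubs are
# illustrative assumptions; the stochastic capacity-sizing stage is omitted.
import heapq

graph = {  # node: {neighbour: distance}
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "C": 1, "D": 5},
    "C": {"A": 2, "B": 1, "D": 8},
    "D": {"B": 5, "C": 8},
}
demand = {"A": 120, "B": 300, "C": 80, "D": 150}  # doses required per node
candidate_hubs = ["B", "C"]

def dijkstra(source):
    dist = {n: float("inf") for n in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist[nbr]:
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

best_hub = min(candidate_hubs,
               key=lambda h: sum(demand[n] * d for n, d in dijkstra(h).items()))
print("hub minimizing demand-weighted distance:", best_hub)
```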
Design Optimization of Vena Cava Filters: An application to dual filtration devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, M A; Wang, S L; Diachin, D P
Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.
Probabilistic determination of probe locations from distance data
Xu, Xiao-Ping; Slaughter, Brian D.; Volkmann, Niels
2013-01-01
Distance constraints, in principle, can be employed to determine information about the location of probes within a three-dimensional volume. Traditional methods for locating probes from distance constraints involve optimization of scoring functions that measure how well the probe location fits the distance data, exploring only a small subset of the scoring function landscape in the process. These methods are not guaranteed to find the global optimum and provide no means to relate the identified optimum to all other optima in scoring space. Here, we introduce a method for the location of probes from distance information that is based on probability calculus. This method allows exploration of the entire scoring space by directly combining probability functions representing the distance data and information about attachment sites. The approach is guaranteed to identify the global optimum and enables the derivation of confidence intervals for the probe location as well as statistical quantification of ambiguities. We apply the method to determine the location of a fluorescence probe using distances derived by FRET and show that the resulting location matches that independently derived by electron microscopy. PMID:23770585
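A hedged sketch of the probabilistic idea: Gaussian likelihoods for each measured distance are multiplied on a 3-D grid and normalized, so the full scoring space is explored and the global maximum and confidence regions can be read off directly. The anchor positions, measured distances, and noise level are made-up assumptions.

```python
# Sketch of combining distance constraints probabilistically on a 3-D grid:
# each anchor contributes a Gaussian likelihood around its measured distance,
# and the normalized product gives a posterior over probe positions.
# Anchor positions, measured distances, and noise sigma are assumptions.
import numpy as np

anchors = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
measured = np.array([7.1, 7.2, 7.0])   # measured anchor-to-probe distances
sigma = 0.5                            # distance uncertainty

axis = np.linspace(0.0, 10.0, 51)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
grid = np.stack([X, Y, Z], axis=-1)    # shape (51, 51, 51, 3)

log_post = np.zeros(X.shape)
for anchor, d_meas in zip(anchors, measured):
    d = np.linalg.norm(grid - anchor, axis=-1)
    log_post += -0.5 * ((d - d_meas) / sigma) ** 2

post = np.exp(log_post - log_post.max())
post /= post.sum()

imax = np.unravel_index(np.argmax(post), post.shape)
print("most probable probe location:", grid[imax])
print("probability mass in top 1% of voxels:",
      np.sort(post.ravel())[::-1][: post.size // 100].sum().round(3))
```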
NASA Astrophysics Data System (ADS)
Rofooei, Fayaz Rahimzadeh; Mohammadzadeh, Sahar
2016-03-01
The optimal distribution of fluid viscous dampers (FVD) in controlling the seismic response of eccentric, single-storey, moment resisting concrete structures is investigated using the previously defined center of damping constant (CDC). For this purpose, a number of structural models with different one-way stiffness and strength eccentricities are considered. Extensive nonlinear time history analyses are carried out for various arrangements of FVDs. It is shown that the arrangement of FVDs for controlling the torsional behavior due to asymmetry in the concrete structures is strongly dependent on the intensity of the peak ground acceleration (PGA) and the extent of the structural stiffness and strength eccentricities. The results indicate that, in the linear range of structural behavior, the stiffness eccentricity es, which is the main parameter in determining the location of the optimal CDC, is found to be smaller in magnitude than the optimal damping constant eccentricity e*d, i.e., |e*d| > |es|. But, in the nonlinear range of structural behavior, where the strength eccentricity er is the dominant factor in determining the location of the optimal CDC, |e*d| > |er|. It is also concluded that for the majority of the plan-asymmetric, concrete structures considered in this study with er ≠ 0, the optimal CDC approaches the center of mass as er decreases.
A review of distributed parameter groundwater management modeling methods
Gorelick, Steven M.
1983-01-01
Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programing, are powerful aquifer management tools. Groundwater management models fall in two general categories: hydraulics or policy evaluation and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown source parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitor network design and identification of unknown contaminant sources. [Figure captions: contours of the expected information gain, with the optimal observing location at the maximum value; posterior marginal probability densities of the unknown parameters for the designed location versus randomly chosen locations, with true values marked by vertical lines.]
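For reference, the relative-entropy design criterion described above is commonly written as the Kullback-Leibler divergence from prior to posterior, averaged over the data predicted at a candidate sampling location x; this is a standard formulation assumed here, not an equation quoted from the paper.

```latex
% Expected information gain for a candidate sampling location x:
% KL divergence from prior p(\theta) to posterior p(\theta \mid d, x),
% averaged over the data d predicted at x (a standard Bayesian design
% criterion consistent with the relative-entropy measure described above).
\begin{align}
D_{\mathrm{KL}}\!\left[p(\theta \mid d, x)\,\|\,p(\theta)\right]
  &= \int p(\theta \mid d, x)\,
     \log\frac{p(\theta \mid d, x)}{p(\theta)}\,\mathrm{d}\theta, \\
U(x) &= \int p(d \mid x)\,
        D_{\mathrm{KL}}\!\left[p(\theta \mid d, x)\,\|\,p(\theta)\right]
        \mathrm{d}d,
\qquad x^{*} = \arg\max_{x} U(x).
\end{align}
```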
Automated geometric optimization for robotic HIFU treatment of liver tumors.
Williamson, Tom; Everitt, Scott; Chauhan, Sunita
2018-05-01
High intensity focused ultrasound (HIFU) represents a non-invasive method for the destruction of cancerous tissue within the body. Heating of targeted tissue by focused ultrasound transducers results in the creation of ellipsoidal lesions at the target site, the locations of which can have a significant impact on treatment outcomes. Towards this end, this work describes a method for the optimization of lesion positions within arbitrary tumors, with specific anatomical constraints. A force-based optimization framework was extended to the case of arbitrary tumor position and constrained orientation. Analysis of the approximate reachable treatment volume for the specific case of treatment of liver tumors was performed based on four transducer configurations and constraint conditions derived. Evaluation was completed utilizing simplified spherical and ellipsoidal tumor models and randomly generated tumor volumes. The total volume treated, lesion overlap and healthy tissue ablated was evaluated. Two evaluation scenarios were defined and optimized treatment plans assessed. The optimization framework resulted in improvements of up to 10% in tumor volume treated, and reductions of up to 20% in healthy tissue ablated as compared to the standard lesion rastering approach. Generation of optimized plans proved feasible for both sub- and intercostally located tumors. This work describes an optimized method for the planning of lesion positions during HIFU treatment of liver tumors. The approach allows the determination of optimal lesion locations and orientations, and can be applied to arbitrary tumor shapes and sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimal experimental designs for the estimation of thermal properties of composite materials
NASA Technical Reports Server (NTRS)
Scott, Elaine P.; Moncman, Deborah A.
1994-01-01
Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
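Maximizing the temperature derivatives with respect to all unknown properties is commonly formalized as a D-optimality criterion on the sensitivity matrix; the sketch below states that standard criterion under assumed notation and is not necessarily the exact objective used in this report.

```latex
% A common way to formalize "maximize the temperature derivatives with respect
% to the unknown properties": D-optimality on the sensitivity matrix, which in
% turn minimizes the confidence region of the estimates. This is a sketch of
% the standard criterion, not necessarily the exact one used in the report.
\begin{equation}
X_{ij}(\boldsymbol{p}) =
  \frac{\partial T(t_i;\boldsymbol{\beta},\boldsymbol{p})}{\partial \beta_j},
\qquad
\boldsymbol{p}^{*} = \arg\max_{\boldsymbol{p}} \det\!\left(X^{\mathsf T} X\right),
\end{equation}
% where \beta = (k, \rho c_p) collects the thermal conductivity and volumetric
% heat capacity, t_i are the measurement times, and p collects the design
% variables (heating time, sensor location, heat-flux location, duration).
```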
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scioletti, Michael S.; Newman, Alexandra M.; Goodman, Johanna K.
Renewable energy technologies, specifically, solar photovoltaic cells, combined with battery storage and diesel generators, form a hybrid system capable of independently powering remote locations, i.e., those isolated from larger grids. If sized correctly, hybrid systems reduce fuel consumption compared to diesel generator-only alternatives. We present an optimization model for establishing a hybrid power design and dispatch strategy for remote locations, such as a military forward operating base, that models the acquisition of different power technologies as integer variables and their operation using nonlinear expressions. Our cost-minimizing, nonconvex, mixed-integer, nonlinear program contains a detailed battery model. Due to its complexities, we present linearizations, which include exact and convex under-estimation techniques, and a heuristic, which determines an initial feasible solution to serve as a "warm start" for the solver. We determine, in a few hours at most, solutions within 5% of optimality for a candidate set of technologies; these solutions closely resemble those from the nonlinear model. Lastly, our instances contain real data spanning a yearly horizon at hour fidelity and demonstrate that a hybrid system could reduce fuel consumption by as much as 50% compared to a generator-only solution.
Optimal Real-time Dispatch for Integrated Energy Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Firestone, Ryan Michael
This report describes the development and application of a dispatch optimization algorithm for integrated energy systems (IES) comprised of on-site cogeneration of heat and electricity, energy storage devices, and demand response opportunities. This work is intended to aid commercial and industrial sites in making use of modern computing power and optimization algorithms to make informed, near-optimal decisions under significant uncertainty and complex objective functions. The optimization algorithm uses a finite set of randomly generated future scenarios to approximate the true, stochastic future; constraints are included that prevent solutions to this approximate problem from deviating from solutions to the actual problem. The algorithm is then expressed as a mixed integer linear program, to which a powerful commercial solver is applied. A case study of United States Postal Service Processing and Distribution Centers (P&DC) in four cities and under three different electricity tariff structures is conducted to (1) determine the added value of optimal control to a cogeneration system over current, heuristic control strategies; (2) determine the value of limited electric load curtailment opportunities, with and without cogeneration; and (3) determine the trade-off between least-cost and least-carbon operations of a cogeneration system. Key results for the P&DC sites studied include (1) in locations where the average electricity and natural gas prices suggest a marginally profitable cogeneration system, optimal control can add up to 67% to the value of the cogeneration system; optimal control adds less value in locations where cogeneration is more clearly profitable; (2) optimal control under real-time pricing is (a) more complicated than under typical time-of-use tariffs and (b) at times necessary to make cogeneration economic at all; (3) limited electric load curtailment opportunities can be more valuable as a complement to the cogeneration system than alone; and (4) most of the trade-off between least-cost and least-carbon IES is determined during the system design stage; for the IES system considered, there is little difference between least-cost control and least-carbon control.
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-01-01
Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
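For reference, the MLE (precision-weighted) benchmark against which the bimodal data were compared is conventionally written as follows; this is the standard cue-combination model, stated here under assumed notation rather than quoted from the paper.

```latex
% Standard maximum-likelihood (precision-weighted) cue-combination model used
% as the benchmark for optimal audiovisual integration: the bimodal estimate is
% a weighted average of the unimodal estimates, and its variance is lower than
% either unimodal variance.
\begin{align}
\hat{S}_{AV} &= w_V \hat{S}_V + w_A \hat{S}_A,
\qquad
w_V = \frac{\sigma_A^{2}}{\sigma_A^{2} + \sigma_V^{2}},\quad
w_A = 1 - w_V, \\
\sigma_{AV}^{2} &= \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}}
\;\le\; \min\!\left(\sigma_A^{2},\,\sigma_V^{2}\right).
\end{align}
```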
NASA Astrophysics Data System (ADS)
Zhang, Yongqin; Iman, Kory
2018-05-01
Fuel-based transportation is one of the major contributors to poor air quality in the United States. The Electric Vehicle (EV) is potentially the cleanest transportation technology for the environment. This research developed a spatial suitability model to identify optimal geographic locations for installing EV charging stations for the travelling public. The model takes into account a variety of positive and negative factors to identify prime locations for installing EV charging stations in the Wasatch Front, Utah, where automobile emissions cause severe air pollution due to atmospheric inversion conditions near the valley floor. A walkable factor grid was created to store index scores from input factor layers to determine prime locations. 27 input factors, including land use, demographics, and employment centers, were analyzed. Each factor layer was analyzed to produce a summary statistic table to determine the site suitability. Potential locations that exhibit high EV charging usage were identified and scored. A hot spot map was created to demonstrate high, moderate, and low suitability areas for installing EV charging stations. A spatially well distributed EV charging system was then developed, aiming to reduce "range anxiety" among the traveling public. This spatial methodology addresses the complex problem of locating and establishing a robust EV charging station infrastructure for decision makers to build a clean transportation infrastructure, and eventually reduce environmental pollution.
Optimization of multi-element airfoils for maximum lift
NASA Technical Reports Server (NTRS)
Olsen, L. E.
1979-01-01
Two theoretical methods are presented for optimizing multi-element airfoils to obtain maximum lift. The analyses assume that the shapes of the various high lift elements are fixed. The objective of the design procedures is then to determine the optimum location and/or deflection of the leading and trailing edge devices. The first analysis determines the optimum horizontal and vertical location and the deflection of a leading edge slat. The structure of the flow field is calculated by iteratively coupling potential flow and boundary layer analysis. This design procedure does not require that flow separation effects be modeled. The second analysis determines the slat and flap deflection required to maximize the lift of a three element airfoil. This approach requires that the effects of flow separation from one or more of the airfoil elements be taken into account. The theoretical results are in good agreement with results of a wind tunnel test used to corroborate the predicted optimum slat and flap positions.
Filgueira, Ramon; Grant, Jon; Strand, Øivind
2014-06-01
Shellfish carrying capacity is determined by the interaction of a cultured species with its ecosystem, which is strongly influenced by hydrodynamics. Water circulation controls the exchange of matter between farms and the adjacent areas, which in turn establishes the nutrient supply that supports phytoplankton populations. The complexity of water circulation makes necessary the use of hydrodynamic models with detailed spatial resolution in carrying capacity estimations. This detailed spatial resolution also allows for the study of processes that depend on specific spatial arrangements, e.g., the most suitable location to place farms, which is crucial for marine spatial planning, and consequently for decision support systems. In the present study, a fully spatial physical-biogeochemical model has been combined with scenario building and optimization techniques as a proof of concept of the use of ecosystem modeling as an objective tool to inform marine spatial planning. The object of this exercise was to generate objective knowledge based on an ecosystem approach to establish new mussel aquaculture areas in a Norwegian fjord. Scenario building was used to determine the best location of a pump that can be used to bring nutrient-rich deep waters to the euphotic layer, increasing primary production, and consequently, carrying capacity for mussel cultivation. In addition, an optimization tool, parameter estimation (PEST), was applied to the optimal location and mussel standing stock biomass that maximize production, according to a preestablished carrying capacity criterion. Optimization tools allow us to make rational and transparent decisions to solve a well-defined question, decisions that are essential for policy makers. The outcomes of combining ecosystem models with scenario building and optimization facilitate planning based on an ecosystem approach, highlighting the capabilities of ecosystem modeling as a tool for marine spatial planning.
NASA Astrophysics Data System (ADS)
Tsao, Yu-Chung
2016-02-01
This study models a joint location, inventory and preservation decision-making problem for non-instantaneous deteriorating items under delay in payments. An outside supplier provides a credit period to the wholesaler, which has a distribution system with distribution centres (DCs). Non-instantaneous deterioration means that no deterioration occurs in the earlier stage, which is very useful for items such as fresh food and fruits. This paper also considers that the deteriorating rate will decrease and the preservation cost will increase as the preservation effort increases. Therefore, how much preservation effort should be made is a crucial decision. The objective of this paper is to determine the optimal locations and number of DCs, the optimal replenishment cycle time at DCs, and the optimal preservation effort simultaneously such that the total network profit is maximised. The problem is formulated as piecewise nonlinear functions and has three different cases. Algorithms based on piecewise nonlinear optimisation are provided to solve the joint location and inventory problem for all cases. Computational analysis illustrates the solution procedures and the impacts of the related parameters on decisions and profits. The results of this study can serve as references for business managers or administrators.
Robust Flight Path Determination for Mars Precision Landing Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kohen, Hamid
1997-01-01
This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.
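A minimal sketch of a real-coded genetic algorithm applied to a robust objective of this kind: fitness is the RMS miss distance of a toy landing model averaged over Monte Carlo perturbations. The toy model, GA operators, and parameter values are illustrative assumptions, not the entry dynamics or GA settings from the paper.

```python
# Minimal real-coded genetic algorithm for a robust objective: fitness is the
# RMS miss distance of a toy "landing" model averaged over sampled perturbations.
# The toy model, GA operators, and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
target = np.array([40.0, -15.0])
perturbations = rng.normal(0.0, 0.05, size=(200, 2))   # sampled model errors

def rms_miss(theta):
    # toy landing model: nominal landing point scales with uncertain parameters
    landings = theta * (1.0 + perturbations)            # (200, 2) landing points
    return np.sqrt(np.mean(np.sum((landings - target) ** 2, axis=1)))

pop_size, n_gen, bounds = 40, 100, (-100.0, 100.0)
pop = rng.uniform(*bounds, size=(pop_size, 2))

for _ in range(n_gen):
    fit = np.array([rms_miss(ind) for ind in pop])
    # tournament selection
    idx = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # blend crossover and occasional Gaussian mutation
    partners = parents[rng.permutation(pop_size)]
    alpha = rng.random((pop_size, 1))
    children = alpha * parents + (1.0 - alpha) * partners
    children += rng.normal(0.0, 1.0, children.shape) * (rng.random((pop_size, 1)) < 0.2)
    # elitism: keep the best individual found so far
    children[0] = pop[np.argmin(fit)]
    pop = np.clip(children, *bounds)

best = pop[np.argmin([rms_miss(ind) for ind in pop])]
print("robust control parameters:", best.round(2), "RMS miss:", rms_miss(best).round(3))
```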
Determination system for solar cell layout in traffic light network using dominating set
NASA Astrophysics Data System (ADS)
Eka Yulia Retnani, Windi; Fambudi, Brelyanes Z.; Slamin
2018-04-01
Graph Theory is one of the fields in Mathematics that solves discrete problems. In daily life, the applications of Graph Theory are used to solve various problems. One of the topics in Graph Theory used to solve such problems is the dominating set. The concept of a dominating set is used, for example, to locate objects systematically. In this study, the dominating set is used to determine the dominating points for solar panels, where a vertex represents a traffic light and an edge represents the connection between traffic lights. The greedy algorithm is used to search for the dominating points and thereby determine the locations of the solar panels. This research produced an application that determines the locations of solar panels with optimal results, that is, a minimum set of dominating points.
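A minimal sketch of the greedy dominating-set step on a small assumed traffic-light graph: the vertex covering the most not-yet-dominated vertices is chosen repeatedly until all vertices are dominated.

```python
# Greedy dominating-set sketch for choosing solar-panel intersections: repeatedly
# pick the traffic-light vertex that dominates (covers) the most not-yet-covered
# vertices, until every vertex is covered. The small road graph is an assumption.
graph = {  # vertex: set of adjacent traffic-light vertices
    1: {2, 3}, 2: {1, 4}, 3: {1, 4, 5}, 4: {2, 3, 6}, 5: {3, 6}, 6: {4, 5},
}

uncovered = set(graph)
dominating = []
while uncovered:
    # a vertex covers itself and its neighbours
    best = max(graph, key=lambda v: len(({v} | graph[v]) & uncovered))
    dominating.append(best)
    uncovered -= {best} | graph[best]

print("solar panels installed at traffic lights:", sorted(dominating))
```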
Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Lu, Ping
2016-01-01
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.
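A simplified sketch of the convex powered-descent subproblem for a fixed flight time, using the cvxpy package under uniform gravity: minimize a propellant proxy subject to discretized double-integrator dynamics, a thrust bound, and a pinpoint terminal state. The higher-fidelity asteroid gravity model and the outer search over flight time are not reproduced; all numerical values are illustrative assumptions.

```python
# Simplified convex powered-descent sketch (fixed flight time, uniform gravity):
# minimize total thrust magnitude subject to double-integrator dynamics, a thrust
# bound, and a pinpoint terminal state. The irregular-asteroid gravity model and
# the outer search over flight time from the paper are not included; all numbers
# are illustrative assumptions. Requires the cvxpy package.
import cvxpy as cp
import numpy as np

N, dt = 60, 1.0                         # discretization steps and step size
g = np.array([0.0, 0.0, -0.005])        # uniform gravity stand-in (toy value)
r0, v0 = np.array([1.0, 0.5, 2.0]), np.array([-0.01, 0.0, -0.02])
rf = np.zeros(3)                        # desired landing site
T_max = 0.02                            # thrust acceleration bound

r = cp.Variable((3, N + 1))             # position
v = cp.Variable((3, N + 1))             # velocity
u = cp.Variable((3, N))                 # thrust acceleration

constraints = [r[:, 0] == r0, v[:, 0] == v0,
               r[:, N] == rf, v[:, N] == np.zeros(3)]
for k in range(N):
    constraints += [
        v[:, k + 1] == v[:, k] + (u[:, k] + g) * dt,
        r[:, k + 1] == r[:, k] + v[:, k] * dt + 0.5 * (u[:, k] + g) * dt ** 2,
        cp.norm(u[:, k]) <= T_max,
    ]

objective = cp.Minimize(cp.sum(cp.norm(u, axis=0)) * dt)   # propellant proxy
problem = cp.Problem(objective, constraints)
problem.solve()
print("status:", problem.status, "propellant proxy:", round(problem.value, 4))
```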
Schiuma, D; Brianza, S; Tami, A E
2011-03-01
A method was developed to improve the design of locking implants by finding the optimal paths for the anchoring elements, based on a high resolution pQCT assessment of local bone mineral density (BMD) distribution and bone micro-architecture (BMA). The method consists of three steps: (1) partial fixation of the implant to the bone and creation of a reference system, (2) implant removal and pQCT scan of the bone, and (3) determination of BMD and BMA of all implant-anchoring locations along the actual and alternative directions. Using a PHILOS plate, the method uncertainty was tested on an artificial humerus bone model. A cadaveric humerus was used to quantify how the uncertainty of the method affects the assessment of bone parameters. BMD and BMA were determined along four possible alternative screw paths as possible criteria for implant optimization. The method is biased by a 0.87 ± 0.12 mm systematic uncertainty and by a 0.44 ± 0.09 mm random uncertainty in locating the virtual screw position. This study shows that this method can be used to find alternative directions for the anchoring elements, which may possess better bone properties. This modification will thus produce an optimized implant design. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
KAFFASH-CHARANDABI, Neda; SADEGHI-NIARAKI, Abolghasem; PARK, Dong-Kyun
2015-01-01
Background: Cardiac arrest is a condition in which the heart is completely stopped and is not pumping any blood. Although most cardiac arrest cases are reported from homes or hospitals, about 20% occur in public areas. Therefore, these areas need to be investigated in terms of cardiac arrest incidence so that places of high incidence can be identified and cardiac rehabilitation defibrillators installed there. Methods: In order to investigate a study area in Petersburg, Pennsylvania State, and to determine appropriate places for installing defibrillators with 5-year period data, swarm intelligence algorithms were used. Moreover, the location of the defibrillators was determined based on the following five evaluation criteria: land use, altitude of the area, economic conditions, distance from hospitals and approximate areas of reported cases of cardiac arrest for public places that were created in a geospatial information system (GIS). Results: The A-P HADEL algorithm results were about 27.36% more precise. The validation results indicated a wider coverage of real values and the verification results confirmed the faster and more exact optimization of the cost function in the PSO method. Conclusion: The study findings emphasize the necessity of applying optimal optimization methods along with GIS and precise selection of criteria in the selection of optimal locations for installing medical facilities because the selected algorithm and criteria dramatically affect the final responses. Meanwhile, providing land suitability maps for installing facilities across hot and risky spots has the potential to save many lives. PMID:26587471
Multiple indicator cokriging with application to optimal sampling for environmental monitoring
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.
2005-02-01
A probabilistic solution to the problem of spatial interpolation of a variable at an unsampled location consists of estimating the local cumulative distribution function (cdf) of the variable at that location from values measured at neighbouring locations. As this distribution is conditional to the data available at neighbouring locations it incorporates the uncertainty of the value of the variable at the unsampled location. Geostatistics provides a non-parametric solution to such problems via the various forms of indicator kriging. In a least squares sense indicator cokriging is theoretically the best estimator but in practice its use has been inhibited by problems such as an increased number of violations of order relations constraints when compared with simpler forms of indicator kriging. In this paper, we describe a methodology and an accompanying computer program for estimating a vector of indicators by simple indicator cokriging, i.e. simultaneous estimation of the cdf for K different thresholds, {F(u,zk), k=1,…,K}, by solving a unique cokriging system for each location at which an estimate is required. This approach produces a variance-covariance matrix of the estimated vector of indicators which is used to fit a model to the estimated local cdf by logistic regression. This model is used to correct any violations of order relations and automatically ensures that all order relations are satisfied, i.e. the estimated cumulative distribution function, F^(u,zk), is such that F^(u,zk) ∈ [0,1] for all zk, and F^(u,zk) ⩽ F^(u,zk') for zk ⩽ zk'.
Wireless Sensor Networks - Node Localization for Various Industry Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derr, Kurt; Manic, Milos
Fast, effective monitoring following airborne releases of toxic substances is critical to mitigate risks to threatened population areas. Wireless sensor nodes at fixed predetermined locations may monitor such airborne releases and provide early warnings to the public. A challenging algorithmic problem is determining the locations to place these sensor nodes while meeting several criteria: 1) provide complete coverage of the domain, and 2) create a topology with problem dependent node densities, while 3) minimizing the number of sensor nodes. This manuscript presents a novel approach to determining optimal sensor placement, Advancing Front mEsh generation with Constrained dElaunay Triangulation and Smoothing (AFECETS), that addresses these criteria. A unique aspect of AFECETS is the ability to determine wireless sensor node locations for areas of high interest (hospitals, schools, high population density areas) that require a higher density of nodes for monitoring environmental conditions, a feature that is difficult to find in other research work. The AFECETS algorithm was tested on several arbitrarily shaped domains. AFECETS simulation results show that the algorithm 1) provides a significant reduction in the number of nodes, in some cases over 40%, compared to an advancing front mesh generation algorithm, 2) maintains and improves optimal spacing between nodes, and 3) produces simulation run times suitable for real-time applications.
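The AFECETS pipeline itself is not specified in this abstract, but the underlying idea of meshing a domain and smoothing node positions can be illustrated with standard tools. The sketch below is a simplified, hypothetical stand-in: it triangulates candidate sensor sites with an unconstrained Delaunay triangulation from SciPy and applies a few Laplacian smoothing passes to interior nodes; it does not implement the constrained triangulation or the density control of AFECETS.

```python
# Simplified sketch: Delaunay triangulation of candidate sensor sites plus
# Laplacian smoothing of interior nodes. Hypothetical stand-in for the
# mesh-based placement idea; not the AFECETS algorithm itself.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(60, 2))      # candidate sensor locations
tri = Delaunay(pts)

# Nodes on the convex hull are treated as fixed boundary nodes.
boundary = np.unique(tri.convex_hull)
interior = np.setdiff1d(np.arange(len(pts)), boundary)

# Build node adjacency from triangle edges.
neighbors = {i: set() for i in range(len(pts))}
for a, b, c in tri.simplices:
    for u, v in ((a, b), (b, c), (c, a)):
        neighbors[u].add(v)
        neighbors[v].add(u)

# A few Laplacian smoothing passes: move each interior node toward the
# centroid of its neighbors to even out node spacing.
for _ in range(10):
    new_pts = pts.copy()
    for i in interior:
        nbrs = list(neighbors[i])
        new_pts[i] = pts[nbrs].mean(axis=0)
    pts = new_pts

print("smoothed sensor coordinates (first five):", pts[:5])
```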
Method of automatic measurement and focus of an electron beam and apparatus therefor
Giedt, W.H.; Campiotti, R.
1996-01-09
An electron beam focusing system, including a plural slit-type Faraday beam trap, for measuring the diameter of an electron beam and automatically focusing the beam for welding is disclosed. Beam size is determined from profiles of the current measured as the beam is swept over at least two narrow slits of the beam trap. An automated procedure changes the focus coil current until the focal point location is just below a workpiece surface. A parabolic equation is fitted to the calculated beam sizes from which optimal focus coil current and optimal beam diameter are determined. 12 figs.
Method of automatic measurement and focus of an electron beam and apparatus therefor
Giedt, Warren H.; Campiotti, Richard
1996-01-01
An electron beam focusing system, including a plural slit-type Faraday beam trap, for measuring the diameter of an electron beam and automatically focusing the beam for welding. Beam size is determined from profiles of the current measured as the beam is swept over at least two narrow slits of the beam trap. An automated procedure changes the focus coil current until the focal point location is just below a workpiece surface. A parabolic equation is fitted to the calculated beam sizes from which optimal focus coil current and optimal beam diameter are determined.
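The parabolic-fit step described in this patent abstract can be sketched with a least-squares fit: beam size is measured at several focus coil currents, a quadratic is fitted, and its vertex gives the estimated optimal current and minimum beam diameter. The currents and sizes below are illustrative values, not data from the patent.

```python
# Sketch of the parabolic-fit focusing step: fit beam size vs. focus coil
# current with a quadratic and take the vertex as the optimal setting.
# The measurement values are illustrative only.
import numpy as np

coil_current = np.array([4.0, 4.2, 4.4, 4.6, 4.8, 5.0])     # amperes (hypothetical)
beam_size = np.array([0.92, 0.71, 0.58, 0.55, 0.63, 0.84])  # mm (hypothetical)

a, b, c = np.polyfit(coil_current, beam_size, deg=2)  # size ~ a*I**2 + b*I + c
best_current = -b / (2.0 * a)                         # vertex of the parabola
best_size = np.polyval([a, b, c], best_current)

print(f"optimal coil current ~ {best_current:.3f} A, "
      f"minimum beam diameter ~ {best_size:.3f} mm")
```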
Moving target tracking through distributed clustering in directional sensor networks.
Enayet, Asma; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Almogren, Ahmad; Alamri, Atif
2014-12-18
The problem of moving target tracking in directional sensor networks (DSNs) introduces new research challenges, including optimal selection of sensing and communication sectors of the directional sensor nodes, determination of the precise location of the target and an energy-efficient data collection mechanism. Existing solutions allow individual sensor nodes to detect the target's location through collaboration among neighboring nodes, where most of the sensors are activated and communicate with the sink. Therefore, they incur much overhead, loss of energy and reduced target tracking accuracy. In this paper, we have proposed a clustering algorithm, where distributed cluster heads coordinate their member nodes in optimizing the active sensing and communication directions of the nodes, precisely determining the target location by aggregating reported sensing data from multiple nodes and transferring the resultant location information to the sink. Thus, the proposed target tracking mechanism minimizes the sensing redundancy and maximizes the number of sleeping nodes in the network. We have also investigated the dynamic approach of activating sleeping nodes on-demand so that the moving target tracking accuracy can be enhanced while maximizing the network lifetime. We have carried out our extensive simulations in ns-3, and the results show that the proposed mechanism achieves higher performance compared to the state-of-the-art works.
Moving Target Tracking through Distributed Clustering in Directional Sensor Networks
Enayet, Asma; Razzaque, Md. Abdur; Hassan, Mohammad Mehedi; Almogren, Ahmad; Alamri, Atif
2014-01-01
The problem of moving target tracking in directional sensor networks (DSNs) introduces new research challenges, including optimal selection of sensing and communication sectors of the directional sensor nodes, determination of the precise location of the target and an energy-efficient data collection mechanism. Existing solutions allow individual sensor nodes to detect the target's location through collaboration among neighboring nodes, where most of the sensors are activated and communicate with the sink. Therefore, they incur much overhead, loss of energy and reduced target tracking accuracy. In this paper, we have proposed a clustering algorithm, where distributed cluster heads coordinate their member nodes in optimizing the active sensing and communication directions of the nodes, precisely determining the target location by aggregating reported sensing data from multiple nodes and transferring the resultant location information to the sink. Thus, the proposed target tracking mechanism minimizes the sensing redundancy and maximizes the number of sleeping nodes in the network. We have also investigated the dynamic approach of activating sleeping nodes on-demand so that the moving target tracking accuracy can be enhanced while maximizing the network lifetime. We have carried out our extensive simulations in ns-3, and the results show that the proposed mechanism achieves higher performance compared to the state-of-the-art works. PMID:25529205
Huang, Chung-Yuan; Wen, Tzai-Hung
2014-01-01
Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMSs) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei.
Liang, Lijun; Hu, Yao; Liu, Hao; Li, Xiaojiu; Li, Jin; He, Yin
2017-04-01
In order to effectively reduce the mortality rate of cardiovascular disease patients, improve the accuracy of electrocardiogram (ECG) signal acquisition, and reduce the motion artifacts caused by electrodes placed at inappropriate locations in clothing for ECG measurement, this article presents research on the optimal placement of ECG electrodes in male clothing using a three-lead monitoring method. Test points were selected in the three-lead ECG monitoring clothing for men. By comparing the ECG traces and the power spectra of the signals acquired at each group of points, we determined the best locations for the ECG electrodes in the male monitoring clothing. Motion artifacts caused by improper electrode placement were significantly reduced when the electrodes were placed at the best positions in the clothing. The position of the electrodes is crucial for ECG monitoring clothing, and the stability of the acquired ECG signal can be improved significantly when the electrodes are placed at optimal locations.
Finding the optimal lengths for three branches at a junction.
Woldenberg, M J; Horsfield, K
1983-09-21
This paper presents an exact analytical solution to the problem of locating the junction point between three branches so that the sum of the total costs of the branches is minimized. When the cost per unit length of each branch is known the angles between each pair of branches can be deduced following reasoning first introduced to biology by Murray. Assuming the outer ends of each branch are fixed, the location of the junction and the length of each branch are then deduced using plane geometry and trigonometry. The model has applications in determining the optimal cost of a branch or branches at a junction. Comparing the optimal to the actual cost of a junction is a new way to compare cost models for goodness of fit to actual junction geometry. It is an unambiguous measure and is superior to comparing observed and optimal angles between each daughter and the parent branch. We present data for 199 junctions in the pulmonary arteries of two human lungs. For the branches at each junction we calculated the best fitting value of x from the relationship flow ∝ (radius)^x. We found that the value of x determined whether a junction was best fitted by a surface, volume, drag or power minimization model. While economy of explanation casts doubt that four models operate simultaneously, we found that optimality may still operate, since the angle to the major daughter is less than the angle to the minor daughter. Perhaps optimality combined with a space filling branching pattern governs the branching geometry of the pulmonary artery.
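The geometric core of the problem (three fixed endpoints and a junction placed to minimize the total weighted branch length) can also be solved numerically. The sketch below, with made-up endpoint coordinates and per-unit-length costs, minimizes the weighted sum of branch lengths directly; the paper's analytical solution via branch angles is the exact counterpart of this numerical approach.

```python
# Numerical counterpart of the junction-location problem: place the junction
# so that the sum of (cost per unit length) * (branch length) is minimal.
# Endpoints and costs are hypothetical illustration values.
import numpy as np
from scipy.optimize import minimize

endpoints = np.array([[0.0, 0.0],    # parent branch end
                      [4.0, 3.0],    # major daughter end
                      [3.0, -2.0]])  # minor daughter end
costs = np.array([2.0, 1.4, 1.0])    # cost per unit length of each branch

def total_cost(junction):
    lengths = np.linalg.norm(endpoints - junction, axis=1)
    return np.dot(costs, lengths)

res = minimize(total_cost, x0=endpoints.mean(axis=0), method="Nelder-Mead")
print("optimal junction location:", res.x, "minimum total cost:", res.fun)
```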
A trust region-based approach to optimize triple response systems
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Fan, Chihhao; Huang, Chia-Fen
2014-05-01
This article presents a new computing procedure for the global optimization of the triple response system (TRS) where the response functions are non-convex quadratics and the input factors satisfy a radial constrained region of interest. The TRS arising from response surface modelling can be approximated using a nonlinear mathematical program that considers one primary objective function and two secondary constraint functions. An optimization algorithm named the triple response surface algorithm (TRSALG) is proposed to determine the global optimum for the non-degenerate TRS. In TRSALG, the Lagrange multipliers of the secondary functions are determined using the Hooke-Jeeves search method and the Lagrange multiplier of the radial constraint is located using the trust region method within the global optimality space. The proposed algorithm is illustrated in terms of three examples appearing in the quality-control literature. The results of TRSALG compared to a gradient-based method are also presented.
TDRS orbit determination by radio interferometry
NASA Technical Reports Server (NTRS)
Pavloff, Michael S.
1994-01-01
In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometry tracking accuracy. The Orbit Determination Accuracy Estimator (ODAE) models the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with the group and phase delay measurements from radio interferometry. ODAE models the statistical properties of tracking error sources, including inherent observable imprecision, atmospheric delays, clock offsets, station location uncertainty, and measurement biases, and through Monte Carlo simulation, ODAE calculates the statistical properties of errors in the predicted satellite state vector. This paper presents results from ODAE application to orbit determination of the Tracking and Data Relay Satellite (TDRS) by radio interferometry. Conclusions about optimal ground station locations for interferometric tracking of TDRS are presented, along with a discussion of operational advantages of radio interferometry.
Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Burt, Adam O.; Hull, Patrick V.
2014-01-01
This paper presents a design automation process using optimization via a genetic algorithm to design the conceptual structure of a Lunar Pallet Lander. The goal is to determine a design that has its primary natural frequencies at or above a target value while minimizing the total mass. Several iterations of the process are presented. First, a concept optimization is performed to determine what class of structure would produce suitable candidate designs. From this, a stiffened sheet-metal approach was selected, leading to optimization of beam placement by generating a two-dimensional mesh and varying the physical locations of the reinforcing beams. Finally, the design space is reformulated as a binary problem using one-dimensional beam elements, truncating the design space to allow faster convergence and allowing additional mechanical failure criteria to be included in the optimization responses. Results are presented for each design space configuration. The final flight design was derived from these results.
NASA Technical Reports Server (NTRS)
Mennell, R. C.; Soard, T.
1974-01-01
Experimental aerodynamic investigations were conducted on a 0.0405 scale representation of the -89B space shuttle orbiter in the 7.75 x 11.00 foot low speed wind tunnel during the time period September 4 - 14, 1973. The primary test objective was to optimize the air breathing propulsion system nacelle cowl-inlet design and to determine the aerodynamic effects of this design on the orbiter stability and control characteristics. Nacelle cowl-inlet optimization was determined from total pressure - static pressure measurements obtained from pressure rakes located in the left hand nacelle pod at the engine face station. After the optimum cowl-inlet design, consisting of a 7 deg cowl lip angle, short cowl, 7 deg short diverter, and a nacelle toe-in angle of 5 deg, was selected, the aerodynamic effects of various locations of this design were investigated. The 3 pod - 6 nacelle configuration was tested both underwing and overwing in three different longitudinal locations. Orbiter control effectiveness, both with and without nacelles, was investigated at elevon deflections of 0 deg, -10 deg and +15 deg and at aileron deflections of 0 deg and +10 deg about 0 deg elevon.
Ambush frequency should increase over time during optimal predator search for prey
Alpern, Steve; Fokkink, Robbert; Timmer, Marco; Casas, Jérôme
2011-01-01
We advance and apply the mathematical theory of search games to model the problem faced by a predator searching for prey. Two search modes are available: ambush and cruising search. Some species can adopt either mode, with their choice at a given time traditionally explained in terms of varying habitat and physiological conditions. We present an additional explanation of the observed predator alternation between these search modes, which is based on the dynamical nature of the search game they are playing: the possibility of ambush decreases the propensity of the prey to frequently change locations and thereby renders it more susceptible to the systematic cruising search portion of the strategy. This heuristic explanation is supported by showing that in a new idealized search game where the predator is allowed to ambush or search at any time, and the prey can change locations at intermittent times, optimal predator play requires an alternation (or mixture) over time of ambush and cruise search. Thus, our game is an extension of the well-studied ‘Princess and Monster’ search game. Search games are zero sum games, where the pay-off is the capture time and neither the Searcher nor the Hider knows the location of the other. We are able to determine the optimal mixture of the search modes when the predator uses a mixture which is constant over time, and also to determine how the mode mixture changes over time when dynamic strategies are allowed (the ambush probability increases over time). In particular, we establish the ‘square root law of search predation’: the optimal proportion of active search equals the square root of the fraction of the region that has not yet been explored. PMID:21571944
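The 'square root law of search predation' quoted above has a direct numerical reading: if u(t) is the fraction of the region not yet explored, the optimal proportion of effort spent in cruising search is the square root of u(t), so ambush becomes more likely as the search progresses. A minimal illustration:

```python
# Illustration of the square-root law: the optimal share of active (cruise)
# search equals the square root of the unexplored fraction of the region.
import numpy as np

unexplored = np.linspace(1.0, 0.0, 6)     # fraction of region not yet explored
cruise_share = np.sqrt(unexplored)        # optimal proportion of cruising search
ambush_share = 1.0 - cruise_share         # remaining effort goes to ambush

for u, c, a in zip(unexplored, cruise_share, ambush_share):
    print(f"unexplored={u:.1f}  cruise={c:.2f}  ambush={a:.2f}")
```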
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, the proposed model selects the optimal supplier and calculates the optimal product volume to purchase from that supplier so that the inventory level stays as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
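The paper's full model is not reproduced here, but the backbone of a stochastic dynamic program for joint supplier selection and order sizing can be sketched as follows. The sketch assumes discretized inventory levels and order quantities, a small set of demand scenarios with known probabilities, known expected unit prices per supplier, and a quadratic penalty on deviation from the reference level; all numbers and the two supplier names are hypothetical.

```python
# Minimal sketch of a stochastic dynamic program for joint supplier selection
# and order sizing with reference-level tracking. All data are hypothetical.
import numpy as np

T = 4                                   # planning periods
levels = np.arange(0, 21)               # admissible inventory levels
orders = np.arange(0, 11)               # admissible order quantities
prices = {"supplier_A": 3.0, "supplier_B": 3.4}   # expected unit prices
demand_scen = np.array([3, 5, 8])       # demand scenarios
demand_prob = np.array([0.3, 0.5, 0.2])
reference = 10                          # desired inventory level
track_w = 0.5                           # weight on tracking error

value = np.zeros(len(levels))           # terminal value function
policy = []

# Backward induction over the planning horizon.
for t in reversed(range(T)):
    new_value = np.full(len(levels), np.inf)
    best = [None] * len(levels)
    for i, inv in enumerate(levels):
        for name, price in prices.items():
            for q in orders:
                # next inventory under each demand scenario (no backorders)
                nxt = np.clip(inv + q - demand_scen, levels[0], levels[-1])
                stage = price * q + track_w * (inv + q - reference) ** 2
                expected = stage + np.dot(demand_prob, value[nxt])
                if expected < new_value[i]:
                    new_value[i] = expected
                    best[i] = (name, q)
    value = new_value
    policy.append(best)

policy.reverse()
print("period 0, inventory 6 ->", policy[0][6])   # (optimal supplier, order qty)
```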
A new method to estimate average hourly global solar radiation on the horizontal surface
NASA Astrophysics Data System (ADS)
Pandey, Pramod K.; Soupir, Michelle L.
2012-10-01
A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on the horizontal surface (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), latitude, and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model to predict average hourly global solar radiation at four other locations in the United States (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA). The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivity of the parameters to the predictions was also estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly predictions, error percentages (i.e. MABE and RMSE) were less than 20%. The approach proposed here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface at different locations, using readily available data (i.e. the latitude and longitude of the location) as inputs.
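The agreement statistics quoted here (r, MABE, RMSE, R2) are standard and easy to reproduce. The snippet below computes them for a pair of hypothetical observed and predicted hourly radiation series; expressing MABE and RMSE as percentages of the mean observed value is an assumption made for the illustration.

```python
# Compute the agreement statistics used above (r, MABE, RMSE, R^2)
# for hypothetical observed vs. predicted hourly radiation values.
import numpy as np

observed = np.array([120.0, 340.0, 560.0, 610.0, 450.0, 210.0])   # W/m^2
predicted = np.array([135.0, 320.0, 580.0, 590.0, 470.0, 200.0])  # W/m^2

r = np.corrcoef(observed, predicted)[0, 1]
mabe = np.mean(np.abs(predicted - observed)) / np.mean(observed) * 100.0          # %
rmse = np.sqrt(np.mean((predicted - observed) ** 2)) / np.mean(observed) * 100.0  # %
ss_res = np.sum((observed - predicted) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"r={r:.3f}  MABE={mabe:.1f}%  RMSE={rmse:.1f}%  R^2={r2:.3f}")
```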
Foo, Cheryl P Z; Ahghari, Mahvareh; MacDonald, Russell D
2010-01-01
Traumatic injury is a leading cause of morbidity and mortality, but these can be minimized by timely transport to definitive care. Helicopter emergency medical services (HEMS) provide timely transport and can influence survival. However, accident analyses indicate that landing at an unsecured landing zone (LZ), particularly at night, increases the risk of aviation accidents. To ensure safety, some HEMS operations land only at designated, secured LZs. This study utilized geographic information systems (GISs) to compare the locations of scene call requests and secured LZs. The goal was to determine the optimal placement of new helipads as a strategy to improve access while mitigating the risk of aviation accidents. Call request data from a large air medical transport service were used to determine the geographic locations of all requests for scene responses in 2006. Request locations were compared with the locations of existing helipads, and straight-line distances between scene and helipad were determined using the GIS application. The application was then used to determine potential locations for new helipads. During the study period, there were 748 scene call requests and 269 available helipads. There were 476 (52.4%) requests at least 10 kilometers from a helipad and 356 (36.6%) requests at least 15 kilometers from a helipad. One particular region, Southwestern Ontario, was identified as having the highest number of requests more than 15 kilometers from the closest helipad. GISs can be used to determine potential locations for new helipad construction using historical call request data. This evidence-based approach can improve HEMS access while mitigating operational risk.
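The core GIS query in this study, the distance from each scene call to its nearest secured helipad, can be approximated without a GIS package using great-circle distances. The coordinates below are placeholders, not data from the study, and the haversine formula stands in for the straight-line distances computed in the GIS.

```python
# Great-circle (haversine) distance from each scene call to its nearest
# helipad, and a count of calls farther than 10 km. Coordinates are placeholders.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

scene_calls = [(43.45, -80.49), (44.23, -81.10), (42.98, -81.25)]   # (lat, lon)
helipads = [(43.65, -79.38), (43.00, -81.27), (45.42, -75.70)]

far_calls = 0
for lat, lon in scene_calls:
    nearest = min(haversine_km(lat, lon, hlat, hlon) for hlat, hlon in helipads)
    if nearest > 10.0:
        far_calls += 1
    print(f"call at ({lat:.2f}, {lon:.2f}): nearest helipad {nearest:.1f} km")

print(f"{far_calls} of {len(scene_calls)} calls are >10 km from a helipad")
```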
Image processing occupancy sensor
Brackney, Larry J.
2016-09-27
A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.
Ayvaz, M Tamer
2010-09-20
This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. An implicit solution procedure is also proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples for simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identification results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Optimal multi-type sensor placement for response and excitation reconstruction
NASA Astrophysics Data System (ADS)
Zhang, C. D.; Xu, Y. L.
2016-01-01
The need to perform dynamic response reconstruction always arises as the measurement of structural response is often limited to a few locations, especially for a large civil structure. Besides, it is usually very difficult, if not impossible, to measure external excitations under the operation condition of a structure. This study presents an algorithm for optimal placement of multi-type sensors, including strain gauges, displacement transducers and accelerometers, for the best reconstruction of responses of key structural components where there are no sensors installed and the best estimation of external excitations acting on the structure at the same time. The algorithm is developed in the framework of Kalman filter with unknown excitation, in which minimum-variance unbiased estimates of the generalized state of the structure and the external excitations are obtained by virtue of limited sensor measurements. The structural responses of key locations without sensors can then be reconstructed with the estimated generalized state and excitation. The asymptotic stability feature of the filter is utilized for optimal sensor placement. The number and spatial location of the multi-type sensors are determined by adding the optimal sensor which gains the maximal reduction of the estimation error of reconstructed responses. For the given mode number in response reconstruction and the given locations of external excitations, the optimal multi-sensor placement achieved by the proposed method is independent of the type and time evolution of external excitation. A simply-supported overhanging steel beam under multiple types of excitation is numerically studied to demonstrate the feasibility and superiority of the proposed method, and the experimental work is then carried out to testify the effectiveness of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekmekcioglu, Mehmet, E-mail: meceng3584@yahoo.co; Kaya, Tolga; Kahraman, Cengiz
The use of fuzzy multiple criteria analysis (MCA) in solid waste management has the advantage of rendering subjective and implicit decision making more objective and analytical, with its ability to accommodate both quantitative and qualitative data. In this paper, a modified fuzzy TOPSIS methodology is proposed for the selection of an appropriate disposal method and site for municipal solid waste (MSW). Our method is superior to existing methods since it has the capability of representing vague qualitative data and presenting all possible results with different degrees of membership. In the first stage of the proposed methodology, a set of criteria comprising cost, reliability, feasibility, pollution and emission levels, and waste and energy recovery is optimized to determine the best MSW disposal method. Landfilling, composting, conventional incineration, and refuse-derived fuel (RDF) combustion are the alternatives considered. The weights of the selection criteria are determined by fuzzy pairwise comparison matrices of the Analytic Hierarchy Process (AHP). It is found that RDF combustion is the best disposal method alternative for Istanbul. In the second stage, the same methodology is used to determine the optimum RDF combustion plant location using adjacent land use, climate, road access and cost as the criteria. The results of this study illustrate the importance of the weights on the various factors in deciding the optimized location, with the best site located in Catalca. A sensitivity analysis is also conducted to monitor how sensitive our model is to changes in the various criteria weights.
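The paper uses a modified fuzzy TOPSIS with AHP-derived weights; as a simplified illustration, the snippet below runs ordinary (crisp) TOPSIS on a small hypothetical decision matrix for the four disposal alternatives, with made-up criterion scores and weights. The fuzzy extension replaces the crisp scores with membership functions.

```python
# Simplified (crisp) TOPSIS ranking of the four disposal alternatives.
# Scores and weights are hypothetical; the paper uses a fuzzy variant.
import numpy as np

alternatives = ["landfilling", "composting", "incineration", "RDF combustion"]
# columns: cost, reliability, feasibility, pollution level, energy recovery
scores = np.array([[7.0, 5.0, 8.0, 6.0, 2.0],
                   [6.0, 6.0, 6.0, 4.0, 3.0],
                   [4.0, 7.0, 5.0, 5.0, 7.0],
                   [5.0, 8.0, 6.0, 3.0, 8.0]])
weights = np.array([0.25, 0.20, 0.15, 0.20, 0.20])
benefit = np.array([False, True, True, False, True])   # cost and pollution are cost criteria

# 1) vector-normalize and weight the decision matrix
norm = scores / np.linalg.norm(scores, axis=0)
weighted = norm * weights

# 2) ideal and anti-ideal solutions per criterion
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

# 3) closeness coefficient: distance to anti-ideal / total distance
d_plus = np.linalg.norm(weighted - ideal, axis=1)
d_minus = np.linalg.norm(weighted - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

for name, c in sorted(zip(alternatives, closeness), key=lambda x: -x[1]):
    print(f"{name:15s} closeness = {c:.3f}")
```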
Reliability Constrained Priority Load Shedding for Aerospace Power System Automation
NASA Technical Reports Server (NTRS)
Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)
2000-01-01
The need for improving load shedding on board the space station is one of the goals of aerospace power system automation. To accelerate the optimum load-shedding functions, several constraints must be considered. These constraints include the congestion margin determined by weighted probability contingency, the component/system reliability index, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is determined based on the priority, value and location of the loads. An optimization strategy capable of handling discrete decision making, such as Everett optimization, is proposed. We extended the Everett method to handle the expected congestion margin and the reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is incorporated in the optimization method. It assists in selecting which feeder load to shed, based on the location, value and priority of the load, and a cost-benefit analysis of the load profile is included in the scheme. The scheme is tested using a benchmark NASA system consisting of generators, loads and a network.
3D Building Evacuation Route Modelling and Visualization
NASA Astrophysics Data System (ADS)
Chan, W.; Armenakis, C.
2014-11-01
The most common building evacuation approach currently applied is to have evacuation routes planned prior to emergency events. These routes are usually the shortest and most practical path from each building room to the closest exit. The problem with this approach is that it is not adaptive. It is not responsively configurable relative to the type, intensity, or location of the emergency risk. Moreover, it does not provide any information to the affected persons or to the emergency responders, nor does it allow for the review of simulated hazard scenarios and alternative evacuation routes. In this paper we address two main tasks. The first is the modelling of the spatial risk caused by a hazardous event, leading to the choice of the optimal evacuation route from a set of options. The second is to generate a 3D visual representation of the model output. A multicriteria decision making (MCDM) approach is used to model the risk, aiming at finding the optimal evacuation route. This is achieved by using the analytical hierarchy process (AHP) on the criteria describing the different alternative evacuation routes. The best route is then chosen to be the alternative with the least cost. The 3D visual representation of the model displays the building, the surrounding environment, the evacuee's location, the hazard location, the risk areas and the optimal evacuation pathway to the target safety location. The work has been performed using ESRI's ArcGIS. Using the developed models, the user can input the location of the hazard and the location of the evacuee. The system then determines the optimum evacuation route and displays it in 3D.
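The AHP step used here to weight the route criteria can be sketched directly: build a pairwise comparison matrix, take its principal eigenvector as the weight vector, and check the consistency ratio. The three criteria and all comparison values below are hypothetical, not those of the paper.

```python
# AHP priority weights from a pairwise comparison matrix (hypothetical values),
# with a standard consistency check.
import numpy as np

# criteria: distance to exit, proximity to hazard, route congestion
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
print("weights:", np.round(weights, 3), " consistency ratio:", round(ci / ri, 3))
```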
Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth
NASA Astrophysics Data System (ADS)
Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.
2017-09-01
With the rapid development of LBS (Location-Based Services), the demand for commercial indoor location has been increasing, but the technology is not yet mature. Currently, the accuracy of indoor location, the complexity of the algorithms, and the cost of positioning are hard to satisfy simultaneously, which still restricts the selection and application of a mainstream positioning technology. Therefore, this paper proposes a method for knowledge-based optimization of indoor location based on low energy Bluetooth. The main steps include: 1) the establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of the signal source; 3) elimination of positioning gross errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning on the simulation data. The proposed scheme is a dynamic knowledge-accumulation process rather than a single positioning step. The scheme uses cheap equipment and provides a new idea for the theory and methods of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value for commercial deployment.
Cuckoo search via Levy flights applied to uncapacitated facility location problem
NASA Astrophysics Data System (ADS)
Mesa, Armacheska; Castromayor, Kris; Garillos-Manliguez, Cinmayii; Calag, Vicente
2017-11-01
Facility location problem (FLP) is a mathematical way to optimally locate facilities within a set of candidates to satisfy the requirements of a given set of clients. This study addressed the uncapacitated FLP, in which no capacity limits are imposed on the selected facilities. Thus, even if the demand is not known, which is often the case in reality, organizations may still be able to take strategic decisions such as locating the facilities. There are different approaches relevant to the uncapacitated FLP. Here, cuckoo search via Lévy flights (CS-LF) was used to solve the problem. Though hybrid methods produce better results, this study employed CS-LF to first determine its potential in finding solutions for the problem, particularly when applied to a real-world problem. The method was applied to a data set obtained from a department store in Davao City, Philippines. Results showed that applying CS-LF yielded better facility locations compared to particle swarm optimization and other existing algorithms. Although these results show that CS-LF is a promising method for solving this particular problem, further studies on other FLPs are recommended to establish a strong foundation for the capability of CS-LF in solving FLP.
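The Lévy-flight step at the heart of cuckoo search can be generated with Mantegna's algorithm; the snippet below produces Lévy-distributed step lengths and applies one move to a toy population of candidate facility locations. It is a generic sketch of the operator with made-up parameters, not the authors' full CS-LF implementation.

```python
# Lévy-flight steps (Mantegna's algorithm) as used in cuckoo search,
# applied to a toy population of 2-D candidate facility locations.
import math
import numpy as np

rng = np.random.default_rng(1)

def levy_step(beta, size):
    """Draw Lévy-distributed steps with exponent beta (typically ~1.5)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

nests = rng.uniform(0.0, 100.0, size=(15, 2))   # candidate facility sites
best = nests[0].copy()                           # placeholder current best
alpha = 0.5                                      # step-size scale

# One cuckoo-search move: perturb each nest around the current best via Lévy flights.
steps = levy_step(1.5, nests.shape)
new_nests = nests + alpha * steps * (nests - best)
print(new_nests[:3])
```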
Optimal power distribution for minimizing pupil walk in a 7.5X afocal zoom lens
NASA Astrophysics Data System (ADS)
Song, Wanyue; Zhao, Yang; Berman, Rebecca; Bodell, S. Yvonne; Fennig, Eryn; Ni, Yunhui; Papa, Jonathan C.; Yang, Tianyi; Yee, Anthony J.; Moore, Duncan T.; Bentley, Julie L.
2017-11-01
An extensive design study was conducted to find the best optimal power distribution and stop location for a 7.5x afocal zoom lens that controls the pupil walk and pupil location through zoom. This afocal zoom lens is one of the three components in a VIS-SWIR high-resolution microscope for inspection of photonic chips. The microscope consists of an afocal zoom, a nine-element objective and a tube lens and has diffraction limited performance with zero vignetting. In this case, the required change in object (sample) size and resolution is achieved by the magnification change of the afocal component. This creates strict requirements for both the entrance and exit pupil locations of the afocal zoom to couple the two sides successfully. The first phase of the design study looked at conventional four group zoom lenses with positive groups in the front and back and the stop at a fixed location outside the lens but resulted in significant pupil walk. The second phase of the design study focused on several promising unconventional four-group power distribution designs with moving stops that minimized pupil walk and had an acceptable pupil location (as determined by the objective and tube lens).
Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo
2017-01-01
In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human planner intervention. A comparison of the results with the optimized solution obtained using a similar optimization model but with human planner intervention revealed that the proposed algorithm produced optimized plans superior to those developed with manual intervention. The proposed algorithm can generate admissible solutions within reasonable computational times and can be used to develop fully automated IMRT treatment planning methods, thus reducing human planners' workloads during iterative processes. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
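Because the key machinery above is a particle swarm over objective weights, a generic PSO loop is worth sketching. The version below minimizes a cheap test function in place of the much more expensive plan-quality evaluation used in the paper; the inertia and acceleration constants are typical textbook values, not the authors'.

```python
# Generic particle swarm optimization loop; a cheap test function stands in
# for the expensive plan-quality evaluation used in the paper.
import numpy as np

rng = np.random.default_rng(2)

def evaluate(x):                     # stand-in objective (sphere function)
    return np.sum(x ** 2, axis=1)

n_particles, dim, iters = 20, 4, 100
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = evaluate(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = evaluate(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best weights found:", np.round(gbest, 4), "objective:", pbest_val.min())
```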
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
Lightning location system supervising Swedish power transmission network
NASA Technical Reports Server (NTRS)
Melin, Stefan A.
1991-01-01
For electric utilities, the ability to prevent or minimize lightning damage to personnel and power systems is of great importance. Therefore, the Swedish State Power Board has been using data since 1983 from a nationwide lightning location system (LLS) for accurately locating lightning ground strikes. Lightning data are distributed and presented on color graphic displays at regional power network control centers as well as at the national power system control center for optimal data use. The main objectives for the use of LLS data are: supervising the power system for optimal and safe use of the transmission and generating capacity during periods of thunderstorms; a warning service to maintenance and service crews at power lines and substations so that operations that are hazardous during lightning can be halted; rapid positioning of emergency crews to locate network damage in areas of detected lightning; and post-analysis of power outages and transmission faults in relation to lightning, using archived lightning data to determine appropriate design and insulation levels of equipment. Staff have found LLS data useful and economically justified, since the availability of the power system has increased as well as the level of personnel safety.
An Optimized Configuration for the Brazilian Decimetric Array
NASA Astrophysics Data System (ADS)
Sawant, Hanumant; Faria, Claudio; Stephany, Stephan
The Brazilian Decimetric Array (BDA) is a radio interferometer designed to operate in the frequency ranges of 1.2-1.7, 2.8 and 5.6 GHz and to obtain images of radio sources with high dynamic range. A 5-antenna configuration, implemented in BDA phase I, is already operational. Phase II will provide a 26-antenna configuration forming a compact T-array, whereas phase III will add a further 12 antennas. However, the BDA site has topographic constraints that preclude the placement of these antennas along the lines defined by the three arms of the T-array. Therefore, some antennas must be displaced in a direction slightly transverse to these lines. This work presents the investigation of possible optimized configurations for all 38 antennas spread over an area of 2.5 x 1.25 km; in particular, the optimal positions of the last 12 antennas had to be determined. A new optimization strategy was proposed to obtain the optimal array configuration. It is based on the entropy of the distribution of the sampled points in the Fourier plane. A stochastic model, Ant Colony Optimization, uses the entropy of this distribution to iteratively refine the candidate solutions. The proposed strategy can be used to determine antenna locations for free-shape arrays so as to provide uniform u-v coverage with minimum redundancy of sampled points in the u-v plane, making the array less susceptible to errors due to unmeasured Fourier components. A different distribution could also be chosen for the coverage. The strategy also allows the topographical constraints of the available site to be taken into account. Furthermore, it provides an optimal configuration even considering the predetermined placement of the 26 antennas that compose the central T-array; in this case, the optimal locations of the last 12 antennas were determined. Performance results corresponding to the Fourier plane coverage, synthesized beam and sidelobe levels are shown for this optimized BDA configuration and are compared to the results of the standard T-array configuration that cannot be implemented due to site constraints.
Recourse-based facility-location problems in hybrid uncertain environment.
Wang, Shuming; Watada, Junzo; Pedrycz, Witold
2010-08-01
The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and costs are assumed to be fuzzy-random variables. The bounds of the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving an infinite number of second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the use of techniques of classical mathematical programming. In order to solve location problems of this nature, we first develop a technique of fuzzy-random simulation to compute the recourse function. The convergence of such simulation scenarios is discussed. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than those obtained using other discrete metaheuristic algorithms, such as binary particle-swarm optimization, genetic algorithm, and tabu search.
Connectivity, Coverage and Placement in Wireless Sensor Networks
Li, Ji; Andrew, Lachlan L.H.; Foh, Chuan Heng; Zukerman, Moshe; Chen, Hsiao-Hwa
2009-01-01
Wireless communication between sensors allows the formation of flexible sensor networks, which can be deployed rapidly over wide or inaccessible areas. However, the need to gather data from all sensors in the network imposes constraints on the distances between sensors. This survey describes the state of the art in techniques for determining the minimum density and optimal locations of relay nodes and ordinary sensors to ensure connectivity, subject to various degrees of uncertainty in the locations of the nodes. PMID:22408474
A Linear Programming Approach for Determining Travel Cost Minimizing ECSS Training Locations
2010-03-01
Candidate training locations, each able to accommodate the simultaneous training of many personnel, include Gulfport CRTC (Mississippi), Savannah CRTC (Georgia), Alpena CRTC (Michigan), Volk Field CRTC, Hill ALC, Hanscom ALC, Tinker ALC, and Robins ALC. [Table and figure residue omitted; the figure reported the frequency of each location within the optimal solutions across all phases.]
Optimizing correlation techniques for improved earthquake location
Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.
2004-01-01
Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as ~70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
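The measurement step described here, cross-correlating two waveforms and refining the correlation peak to subsample precision, can be sketched with a normalized cross-correlation followed by a parabolic fit around the peak. The synthetic signals below are placeholders for real seismograms.

```python
# Relative arrival-time measurement by normalized cross-correlation with
# parabolic (subsample) refinement of the peak. Signals are synthetic.
import numpy as np

dt = 0.01                                   # sample interval, s
t = np.arange(0, 2.0, dt)
pulse = np.exp(-((t - 0.8) / 0.05) ** 2) * np.sin(2 * np.pi * 8 * t)
true_shift = 0.123                          # s
shifted = np.interp(t - true_shift, t, pulse, left=0.0, right=0.0)

a = pulse - pulse.mean()
b = shifted - shifted.mean()
cc = np.correlate(b, a, mode="full") / (np.linalg.norm(a) * np.linalg.norm(b))
lags = np.arange(-len(a) + 1, len(a))       # lag of b relative to a, in samples

k = np.argmax(cc)
# Parabolic interpolation around the discrete peak for subsample precision.
y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
delay = (lags[k] + frac) * dt

print(f"peak correlation {y1:.3f}, measured delay {delay:.4f} s "
      f"(true {true_shift} s)")
```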
Developing a discrete event simulation model for university student shuttle buses
NASA Astrophysics Data System (ADS)
Zulkepli, Jafri; Khalid, Ruzelan; Nawawi, Mohd Kamal Mohd; Hamid, Muhammad Hafizan
2017-11-01
Providing shuttle buses for university students to attend their classes is crucial, especially when the number of students is large and the distances between their classes and residential halls are long. These factors, in addition to the non-optimal current bus services, typically require the students to wait longer, which eventually leads to complaints. To considerably reduce the waiting time, it is important to provide the optimal number of buses to transport students from location to location and effective route schedules that fulfil the students' demand in the relevant time ranges. The optimal bus number and schedules are to be determined and tested using a flexible decision platform. This paper thus models the current student shuttle bus services at a university using a Discrete Event Simulation approach. The model can flexibly simulate any changes configured to the current system and report their effects on the performance measures. How the model was conceptualized and formulated for future system configurations is the main interest of this paper.
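A minimal discrete event model of such a shuttle service can be written with the SimPy library: students arrive at a stop as a Poisson stream, a bus of fixed capacity departs at a fixed headway, and the average waiting time is recorded. The arrival rate, headway and capacity below are illustrative assumptions, not the university's actual figures, and the sketch is far simpler than the full model in the paper.

```python
# Minimal discrete event model of a shuttle stop: Poisson student arrivals,
# a capacity-limited bus departing at a fixed headway, average wait recorded.
# All parameters are illustrative. Requires the simpy package.
import random
import simpy

ARRIVAL_RATE = 1.5      # students per minute
HEADWAY = 15.0          # minutes between bus departures
CAPACITY = 40           # seats per bus
SIM_TIME = 8 * 60.0     # simulate one 8-hour day (minutes)

random.seed(0)
waits = []
queue = []              # arrival times of students waiting at the stop

def student_arrivals(env):
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))
        queue.append(env.now)

def bus_service(env):
    while True:
        yield env.timeout(HEADWAY)
        boarding = queue[:CAPACITY]          # first-come first-served boarding
        del queue[:CAPACITY]
        waits.extend(env.now - t for t in boarding)

env = simpy.Environment()
env.process(student_arrivals(env))
env.process(bus_service(env))
env.run(until=SIM_TIME)

print(f"students served: {len(waits)}, still waiting: {len(queue)}")
print(f"average wait: {sum(waits) / len(waits):.1f} minutes")
```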
The km3 Mediterranean neutrino observatory - the NEMO.RD project
NASA Astrophysics Data System (ADS)
De Marzo, C. N.
2001-05-01
The NEMO.RD Project is a feasibility study of a km3 underwater telescope for high energy astrophysical neutrinos to be located in the Mediterranean Sea. Results are presented on various issues of this project: i) a Monte Carlo simulation study of the capabilities of various arrays of phototubes in order to determine the detector geometry that can optimize performance and cost; ii) an oceanographic survey of various sites in search of the optimal one; iii) a feasibility study of the mechanics, deployment, connections and maintenance of such a detector. Parameters of a site near Capo Passero, Sicily, where depth, transparency and other water parameters appear optimal, are shown.
Optimization of hydrometric monitoring network in urban drainage systems using information theory.
Yazdi, J
2017-10-01
Regular and continuous monitoring of urban runoff in both quality and quantity aspects is of great importance for controlling and managing surface runoff. Due to the considerable costs of establishing new gauges, optimization of the monitoring network is essential. This research proposes an approach for site selection of new discharge stations in urban areas, based on entropy theory in conjunction with multi-objective optimization tools and numerical models. The modeling framework provides an optimal trade-off between the maximum possible information content and the minimum shared information among stations. This approach was applied to the main surface-water collection system in Tehran to determine new optimal monitoring points under the cost considerations. Experimental results on this drainage network show that the obtained cost-effective designs noticeably outperform the consulting engineers' proposal in terms of both information contents and shared information. The research also determined the highly frequent sites at the Pareto front which might be important for decision makers to give a priority for gauge installation on those locations of the network.
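The entropy objectives used above (maximize joint information content, minimize shared information among stations) can be computed from discretized discharge records. The sketch below quantizes synthetic series and then computes marginal entropies and pairwise mutual information with simple histogram estimators; it does not include the multi-objective search itself, and the series, bin count and station count are hypothetical.

```python
# Histogram-based marginal entropy and pairwise mutual information for
# candidate gauging stations, using synthetic discharge series.
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_stations, n_bins = 2000, 4, 8
base = rng.gamma(2.0, 1.0, n_steps)
series = np.stack([base + 0.5 * rng.normal(size=n_steps) * (i + 1)
                   for i in range(n_stations)], axis=1)

def entropy(x):
    p, _ = np.histogram(x, bins=n_bins)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(x, y):
    pxy, _, _ = np.histogram2d(x, y, bins=n_bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

for i in range(n_stations):
    print(f"station {i}: H = {entropy(series[:, i]):.2f} bits")
for i in range(n_stations):
    for j in range(i + 1, n_stations):
        print(f"I({i},{j}) = {mutual_info(series[:, i], series[:, j]):.2f} bits")
```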
Geospatial Analytics in Retail Site Selection and Sales Prediction.
Ting, Choo-Yee; Ho, Chiung Ching; Yee, Hui Jia; Matsah, Wan Razali
2018-03-01
Studies have shown that certain features from geography, demography, trade area, and environment can play a vital role in retail site selection, largely due to the impact they exert on retail performance. Although the relevant features could be elicited by domain experts, determining the optimal feature set can be an intractable and labor-intensive exercise. The challenges center on (1) how to determine the features that are important to a particular retail business and (2) how to estimate retail sales performance at a new location. The challenges become apparent when the features vary across time. In this light, this study proposed a nonintervening approach employing feature selection algorithms followed by sales prediction through similarity-based methods. The results of the prediction were validated by domain experts. In this study, data sets from different sources were transformed and aggregated before an analytics data set ready for analysis purposes could be obtained. The data sets included data about feature location, population count, property type, education status, and monthly sales from 96 branches of a telecommunication company in Malaysia. The findings suggest that (1) optimal retail performance can only be achieved through fulfillment of specific location features together with the surrounding trade area characteristics and (2) similarity-based methods can provide a solution to retail sales prediction.
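The similarity-based prediction step can be sketched as a k-nearest-neighbour regression over location features: a new site's sales estimate is the distance-weighted average of the sales of its most similar existing branches. The feature columns, feature values and sales figures below are entirely hypothetical, not the Malaysian data set used in the study.

```python
# Similarity-based sales estimate for a candidate site: average the sales of
# the k most similar existing branches in (standardized) feature space.
# All feature and sales values are hypothetical.
import numpy as np

# columns: population count, competitor count, mean property value, footfall
branch_features = np.array([[12000, 3, 450, 800],
                            [30000, 7, 620, 2100],
                            [8000,  1, 380, 500],
                            [22000, 5, 540, 1500],
                            [27000, 6, 600, 1900]], dtype=float)
branch_sales = np.array([85.0, 210.0, 60.0, 150.0, 190.0])   # monthly sales (thousands)

candidate = np.array([25000, 6, 580, 1700], dtype=float)

# Standardize features so each contributes comparably to the distance.
mu, sigma = branch_features.mean(axis=0), branch_features.std(axis=0)
z = (branch_features - mu) / sigma
zc = (candidate - mu) / sigma

k = 3
dist = np.linalg.norm(z - zc, axis=1)
nearest = np.argsort(dist)[:k]
weights = 1.0 / (dist[nearest] + 1e-9)
estimate = np.dot(weights, branch_sales[nearest]) / weights.sum()

print("nearest branches:", nearest, " estimated monthly sales:", round(estimate, 1))
```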
NASA Astrophysics Data System (ADS)
Hoskins, Aaron B.
Forest fires cause a significant amount of damage and destruction each year. Optimally dispatching resources reduces the amount of damage a forest fire can cause. Models predict the fire spread to provide the data required to optimally dispatch resources. However, the models are only as accurate as the data used to build them. Satellites are one valuable tool in the collection of data for the forest fire models. Satellites provide data on the types of vegetation, the wind speed and direction, the soil moisture content, etc. The current operating paradigm is to passively collect data when possible. However, images from directly overhead provide better resolution and are easier to process. Maneuvering a constellation of satellites to fly directly over the forest fire provides higher quality data than is achieved with the current operating paradigm. Before launch, the location of the forest fire is unknown. Therefore, it is impossible to optimize the initial orbits for the satellites. Instead, the expected cost of maneuvering to observe the forest fire determines the optimal initial orbits. A two-stage stochastic programming approach is well suited for this class of problem, where initial decisions are made under an uncertain future and subsequent decisions are made once a scenario is realized. A repeat ground track orbit provides a non-maneuvering, natural solution providing a daily flyover of the forest fire. However, additional maneuvers provide a second daily flyover of the forest fire. The additional maneuvering comes at a significant cost in terms of additional fuel, but provides more data collection opportunities. After data are collected, ground stations receive the data for processing. Optimally selecting the ground station locations reduces the number of ground stations that must be built and reduces data fusion issues. However, the location of the forest fire alters the optimal ground station sites. A two-stage stochastic programming approach optimizes the selection of ground stations to maximize the expected amount of data downloaded from a satellite. The approaches for selecting initial orbits and ground station locations under uncertainty provide a robust system to reduce the amount of damage caused by forest fires.
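A minimal numeric sketch of the two-stage idea, assuming a small discrete set of candidate initial orbits and fire-location scenarios with made-up costs and probabilities: the stage-1 design is chosen to minimize setup cost plus expected stage-2 maneuver (recourse) cost.

```python
# Minimal two-stage stochastic sketch (illustrative, not the author's model):
# stage 1 picks an initial orbit design before the fire location is known;
# stage 2 pays a scenario-dependent maneuver (recourse) cost.
import numpy as np

designs = ["RGT-A", "RGT-B", "RGT-C"]          # candidate repeat ground tracks
setup_cost = np.array([1.0, 1.2, 0.9])         # stage-1 cost per design
p = np.array([0.5, 0.3, 0.2])                  # fire-location scenario probabilities
recourse = np.array([[0.4, 2.0, 3.0],          # maneuver fuel cost:
                     [1.5, 0.3, 1.8],          # rows = design,
                     [2.5, 2.2, 0.5]])         # cols = scenario

expected_total = setup_cost + recourse @ p     # E[cost] for each design
best = int(np.argmin(expected_total))
print(designs[best], expected_total[best])
```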
Frimodt-Møller, Jakob; Charbon, Godefroid; Krogfelt, Karen A; Løbner-Olesen, Anders
2017-09-11
The optimal chromosomal position(s) of a given DNA element was/were determined by transposon-mediated random insertion followed by fitness selection. In bacteria, the impact of the genetic context on the function of a genetic element can be difficult to assess. Several mechanisms, including topological effects, transcriptional interference from neighboring genes, and/or replication-associated gene dosage, may affect the function of a given genetic element. Here, we describe a method that permits the random integration of a DNA element into the chromosome of Escherichia coli and selects the most favorable locations using a simple growth competition experiment. The method takes advantage of a well-described transposon-based system of random insertion, coupled with a selection of the fittest clone(s) by growth advantage, a procedure that is easily adjustable to experimental needs. The nature of the fittest clone(s) can be determined by whole-genome sequencing on a complex multi-clonal population or by easy gene walking for the rapid identification of selected clones. Here, the non-coding DNA region DARS2, which controls the initiation of chromosome replication in E. coli, was used as an example. The function of DARS2 is known to be affected by replication-associated gene dosage; the closer DARS2 gets to the origin of DNA replication, the more active it becomes. DARS2 was randomly inserted into the chromosome of a DARS2-deleted strain. The resultant clones containing individual insertions were pooled and competed against one another for hundreds of generations. Finally, the fittest clones were characterized and found to contain DARS2 inserted in close proximity to the original DARS2 location.
Designing optimal greenhouse gas monitoring networks for Australia
NASA Astrophysics Data System (ADS)
Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.
2016-01-01
Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
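The Bayesian network-design step described above can be sketched as follows: for each candidate station, compute the posterior flux covariance that would result from adding its observation footprint, and greedily keep the station that most reduces total posterior uncertainty. The Jacobians, prior covariance and data uncertainty below are synthetic stand-ins, not the study's LPDM output.

```python
# Hedged sketch of Bayesian network design: greedily add the station whose
# sensitivity (Jacobian) row most reduces total posterior flux uncertainty.
import numpy as np

def posterior_cov(H, P0, r):
    """Posterior flux covariance for observation operator H (n_obs x n_flux)."""
    Rinv = np.eye(H.shape[0]) / r**2
    return np.linalg.inv(H.T @ Rinv @ H + np.linalg.inv(P0))

rng = np.random.default_rng(2)
n_flux, n_cand = 30, 10
P0 = np.eye(n_flux)                           # prior flux covariance
H_base = rng.normal(size=(2, n_flux))         # two existing base stations
H_cand = rng.normal(size=(n_cand, n_flux))    # candidate station footprints

chosen, H = [], H_base
for _ in range(5):                            # add five stations greedily
    idx = [i for i in range(n_cand) if i not in chosen]
    scores = [np.trace(posterior_cov(np.vstack([H, H_cand[i]]), P0, r=1.0))
              for i in idx]
    best = idx[int(np.argmin(scores))]
    chosen.append(best)
    H = np.vstack([H, H_cand[best]])
print("selected candidate stations:", chosen)
```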
Fuzzy multicriteria disposal method and site selection for municipal solid waste.
Ekmekçioğlu, Mehmet; Kaya, Tolga; Kahraman, Cengiz
2010-01-01
The use of fuzzy multiple criteria analysis (MCA) in solid waste management has the advantage of rendering subjective and implicit decision making more objective and analytical, with its ability to accommodate both quantitative and qualitative data. In this paper, a modified fuzzy TOPSIS methodology is proposed for the selection of an appropriate disposal method and site for municipal solid waste (MSW). Our method is superior to existing methods since it has the capability of representing vague qualitative data and presenting all possible results with different degrees of membership. In the first stage of the proposed methodology, a set of criteria (cost, reliability, feasibility, pollution and emission levels, and waste and energy recovery) is used to determine the best MSW disposal method. Landfilling, composting, conventional incineration, and refuse-derived fuel (RDF) combustion are the alternatives considered. The weights of the selection criteria are determined by fuzzy pairwise comparison matrices of the Analytic Hierarchy Process (AHP). It is found that RDF combustion is the best disposal method alternative for Istanbul. In the second stage, the same methodology is used to determine the optimum RDF combustion plant location using adjacent land use, climate, road access and cost as the criteria. The results of this study illustrate the importance of the weights on the various factors in deciding the optimal location, with the best site located in Catalca. A sensitivity analysis is also conducted to monitor how sensitive our model is to changes in the various criteria weights. © 2010 Elsevier Ltd. All rights reserved.
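For orientation, the ranking step can be illustrated with a crisp (non-fuzzy) TOPSIS closeness coefficient; the paper's fuzzy variant operates on fuzzy numbers instead. The decision matrix, AHP-style weights and criterion senses below are illustrative only.

```python
# Simplified crisp TOPSIS sketch (the paper uses a fuzzy variant).
import numpy as np

def topsis(D, w, benefit):
    """Rank alternatives: D (m x n) decision matrix, w weights,
    benefit[j] = True if criterion j is to be maximised."""
    R = D / np.linalg.norm(D, axis=0)          # vector-normalise columns
    V = R * w                                  # weighted normalised matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti  = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness coefficient

# Alternatives: landfilling, composting, incineration, RDF combustion.
D = np.array([[3, 2, 4, 5, 2],
              [4, 3, 3, 3, 3],
              [2, 4, 2, 2, 4],
              [4, 4, 3, 2, 5]], dtype=float)
w = np.array([0.3, 0.2, 0.2, 0.15, 0.15])      # e.g. weights from AHP comparisons
benefit = np.array([False, True, True, False, True])
print(topsis(D, w, benefit))                   # higher = better alternative
```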
Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong
2018-05-01
Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and the j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis, a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI, located further from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than imaging task II, since the major frequency component of task I was perpendicular to imaging task II, and because imaging task III did not have strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work.
A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. © 2018 American Association of Physicists in Medicine.
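Since d' has no closed form in the LSC parameters, the selection reduces to picking the maximizer of an experimentally measured d' map on a sampled grid, as in the sketch below; the grid, array shapes and random values stand in for the measured maps and are not from the study.

```python
# Sketch of the parameter-selection step: given a measured detectability map
# dprime[i, j, a, b] over a sampled parameter grid, pick the operating point
# (a, b) that maximises d' for a chosen task i and ROI j.
import numpy as np

p1_grid = np.linspace(0.0, 1.0, 21)        # e.g. low-signal threshold p_l
p2_grid = np.linspace(0.0, 1.0, 21)        # e.g. high-signal threshold p_h
rng = np.random.default_rng(3)
dprime = rng.random((3, 2, 21, 21))        # 3 tasks x 2 ROIs x parameter grid

task, roi = 1, 0                           # task II, posterior ROI
a, b = np.unravel_index(np.argmax(dprime[task, roi]), (21, 21))
print("optimal (p_l, p_h):", p1_grid[a], p2_grid[b])
```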
Sasaki, Satoshi; Comber, Alexis J; Suzuki, Hiroshi; Brunsdon, Chris
2010-01-28
Ambulance response time is a crucial factor in patient survival. The number of emergency cases (EMS cases) requiring an ambulance is increasing due to changes in population demographics. This is increasing ambulance response times to the emergency scene. This paper predicts EMS cases for 5-year intervals from 2020 to 2050 by correlating current EMS cases with demographic factors at the level of the census area and predicted population changes. It then applies a modified grouping genetic algorithm to compare current and future optimal locations and numbers of ambulances. Sets of potential locations were evaluated in terms of the (current and predicted) EMS case distances to those locations. Future EMS demands were predicted to increase by 2030 using the model (R2 = 0.71). The optimal locations of ambulances based on future EMS cases were compared with current locations and with optimal locations modelled on current EMS case data. Optimising the locations of ambulance stations reduced the average response time by 57 seconds. Current and predicted future EMS demand at modelled locations was calculated and compared. The reallocation of ambulances to optimal locations improved response times and could contribute to higher survival rates from life-threatening medical events. Modelling EMS case 'demand' over census areas allows the data to be correlated to population characteristics and optimal 'supply' locations to be identified. Comparing current and future optimal scenarios allows more nuanced planning decisions to be made. This is a generic methodology that could be used to provide evidence in support of public health planning and decision making.
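The fitness evaluated for each candidate set of ambulance locations in such a genetic algorithm is essentially the mean (or total) case-to-nearest-station distance; the sketch below shows that evaluation step only, with synthetic coordinates in place of the census-derived EMS demand used in the study.

```python
# Illustrative fitness evaluation for a candidate set of ambulance locations:
# each EMS case is served by its nearest station and the mean distance is
# the quantity to be minimised.
import numpy as np

def mean_response_distance(stations, cases):
    """stations: (k, 2) coords; cases: (n, 2) coords of EMS demand points."""
    d = np.linalg.norm(cases[:, None, :] - stations[None, :, :], axis=2)
    return d.min(axis=1).mean()              # nearest-station distance per case

rng = np.random.default_rng(4)
cases = rng.uniform(0, 10, size=(500, 2))    # predicted EMS case locations
candidate = rng.uniform(0, 10, size=(8, 2))  # one chromosome of the GA
print(mean_response_distance(candidate, cases))
```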
Optimal placement of tuning masses for vibration reduction in helicopter rotor blades
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1988-01-01
Described are methods for reducing vibration in helicopter rotor blades by determining optimum sizes and locations of tuning masses through formal mathematical optimization techniques. An optimization procedure is developed which employs the tuning masses and corresponding locations as design variables which are systematically changed to achieve low values of shear without a large mass penalty. The finite-element structural analysis of the blade and the optimization formulation require development of discretized expressions for two performance parameters: modal shaping parameter and modal shear amplitude. Matrix expressions for both quantities and their sensitivity derivatives are developed. Three optimization strategies are developed and tested. The first is based on minimizing the modal shaping parameter which indirectly reduces the modal shear amplitudes corresponding to each harmonic of airload. The second strategy reduces these amplitudes directly, and the third strategy reduces the shear as a function of time during a revolution of the blade. The first strategy works well for reducing the shear for one mode responding to a single harmonic of the airload, but has been found in some cases to be ineffective for more than one mode. The second and third strategies give similar results and show excellent reduction of the shear with a low mass penalty.
Location-allocation models and new solution methodologies in telecommunication networks
NASA Astrophysics Data System (ADS)
Dinu, S.; Ciucur, V.
2016-08-01
When designing a telecommunications network topology, three types of interdependent decisions are combined: location, allocation and routing, which are expressed by the following design considerations: how many interconnection devices (consolidation points/concentrators) should be used and where should they be located; how to allocate terminal nodes to concentrators; how should the voice, video or data traffic be routed and what transmission links (capacitated or not) should be built into the network. Including these three components of the decision into a single model generates a problem whose complexity makes it difficult to solve. A first method to address the overall problem is the sequential one, whereby the first step deals with the location-allocation problem and, based on this solution, the subsequent sub-problem (routing the network traffic) is solved. The issue of location and allocation in a telecommunications network, called "the capacitated concentrator location-allocation (CCLA) problem", is based on one of the general location models on a network in which clients/demand nodes are the terminals and facilities are the concentrators. As in a location model, each client node has a demand traffic, which must be served, and the facilities can serve these demands within their capacity limit. In this study, the CCLA problem is modeled as a single-source capacitated location-allocation model whose optimization objective is to determine the minimum network cost consisting of fixed costs for establishing the locations of concentrators, costs for operating concentrators and costs for allocating terminals to concentrators. The problem is known as a difficult combinatorial optimization problem for which powerful algorithms are required. Our approach proposes a Fuzzy Genetic Algorithm combined with a local search procedure to calculate the optimal values of the location and allocation variables. To confirm the efficiency of the proposed algorithm with respect to the quality of solutions, test problems of significant size were considered: up to 100 terminal nodes and 50 concentrators on a 100 × 100 square grid. The performance of this hybrid intelligent algorithm was evaluated by measuring the quality of its solutions with respect to the following statistics: the standard deviation and the ratio of the best solution obtained.
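The single-source capacitated model described above can be written compactly in a standard form (concentrator operating costs can be folded into the fixed charges); the notation below is assumed rather than taken from the paper:

$$
\min \sum_{j} f_j\,y_j + \sum_{i}\sum_{j} c_{ij}\,x_{ij}
\quad\text{s.t.}\quad
\sum_{j} x_{ij} = 1 \;\;\forall i,\qquad
\sum_{i} d_i\,x_{ij} \le Q_j\,y_j \;\;\forall j,\qquad
x_{ij},\,y_j \in \{0,1\},
$$

where $y_j$ indicates opening a concentrator at candidate site $j$, $x_{ij}$ assigns terminal $i$ to concentrator $j$, $f_j$ and $c_{ij}$ are the fixed and allocation costs, $d_i$ is the traffic demand of terminal $i$, and $Q_j$ is the capacity of concentrator $j$.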
Robust optimization-based DC optimal power flow for managing wind generation uncertainty
NASA Astrophysics Data System (ADS)
Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn
2012-11-01
Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Primary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.
Locations of Sampling Stations for Water Quality Monitoring in Water Distribution Networks.
Rathi, Shweta; Gupta, Rajesh
2014-04-01
Water quality must be monitored in water distribution networks (WDNs) at salient locations to assure the safe quality of water supplied to consumers. Such monitoring stations (MSs) provide warning against accidental contamination. Various objectives, such as demand coverage, time to detection, volume of water contaminated before detection, extent of contamination, expected population affected prior to detection, and detection likelihood, have been considered independently or jointly in determining the optimal number and locations of MSs in WDNs. "Demand coverage," defined as the percentage of network demand monitored by a particular monitoring station, is a simple measure for locating MSs. Several methods based on the formulation of a coverage matrix using a pre-specified coverage criterion and optimization have been suggested. The coverage criterion is defined as the minimum percentage of the total flow received at a monitoring station that must have passed through an upstream node for that node to be counted as covered by the station. The number of monitoring stations increases with the value of the coverage criterion, which makes the design of monitoring stations subjective. A simple methodology is proposed herein which iteratively selects MSs in priority order to achieve a targeted demand coverage. The proposed methodology provided the same number and locations of MSs for an illustrative network as an optimization method did. Further, the proposed method is simple and avoids the subjectivity that could arise from the choice of coverage criterion. The application of the methodology is also shown on the WDN of the Dharampeth zone (Nagpur city WDN in Maharashtra, India), which has 285 nodes and 367 pipes.
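A priority-wise, iterative selection of the kind described can be sketched as a greedy coverage loop: at each step pick the station that adds the most not-yet-covered demand, stopping once the target coverage is reached. The toy coverage matrix and demands below are assumptions for illustration.

```python
# Hedged sketch of priority-wise station selection: given a node-coverage
# matrix and nodal demands, greedily pick monitoring stations until a target
# fraction of total demand is covered.
import numpy as np

def select_stations(covers, demand, target=0.9):
    """covers[s, n] = True if station s monitors water that passed node n."""
    covered = np.zeros(covers.shape[1], dtype=bool)
    chosen, total = [], demand.sum()
    while demand[covered].sum() < target * total:
        remaining = [s for s in range(covers.shape[0]) if s not in chosen]
        if not remaining:
            break                              # every candidate already used
        gain, s = max((demand[covers[s] & ~covered].sum(), s) for s in remaining)
        if gain == 0:
            break                              # no further coverage possible
        chosen.append(s)
        covered |= covers[s]
    return chosen

covers = np.array([[1, 1, 0, 0, 1],
                   [0, 1, 1, 0, 0],
                   [0, 0, 1, 1, 1]], dtype=bool)
demand = np.array([10.0, 20.0, 15.0, 5.0, 8.0])
print(select_stations(covers, demand, target=0.9))
```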
OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS
The Optimal Well Locator (OWL) program was designed and developed by USEPA to be a screening tool to evaluate and optimize the placement of wells in long-term monitoring networks at small sites. The first objective of the OWL program is to allow the user to visualize the change ...
Multi-objective trajectory optimization for the space exploration vehicle
NASA Astrophysics Data System (ADS)
Qin, Xiaoli; Xiao, Zhen
2016-07-01
The research determines temperature-constrained optimal trajectories for the space exploration vehicle by developing an optimal control formulation and solving it using a variable-order quadrature collocation method with a Non-linear Programming (NLP) solver. The vehicle is assumed to be a space reconnaissance aircraft that has specified takeoff/landing locations, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom aircraft model is adapted from previous work and includes flight dynamics and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and exploration of space targets. In addition, the vehicle models include the environmental models (gravity and atmosphere). How these models are appropriately employed is key to gaining confidence in the results and conclusions of the research. Optimal trajectories are developed using several performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum distance. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for large-scale space exploration.
An Application of Multivariate Generalizability in Selection of Mathematically Gifted Students
ERIC Educational Resources Information Center
Kim, Sungyeun; Berebitsky, Dan
2016-01-01
This study investigates error sources and the effects of each error source to determine optimal weights of the composite score of teacher recommendation letters and self-introduction letters using multivariate generalizability theory. Data were collected from the science education institute for the gifted attached to the university located within…
ERIC Educational Resources Information Center
Murphy, P. Karen; Firetto, Carla M.; Wei, Liwei; Li, Mengyi; Croninger, Rachel M. V.
2016-01-01
Many American students struggle to perform even basic comprehension of text, such as locating information, determining the main idea, or supporting details of a story. Even more students are inadequately prepared to complete more complex tasks, such as critically or analytically interpreting information in text or making reasoned decisions from…
Childhood Brain and Spinal Cord Tumors Treatment Overview (PDQ®)—Health Professional Version
Treatment for children with brain and spinal cord tumors is based on histology and location within the brain. For most of these tumors, an optimal regimen has not been determined, and enrollment onto clinical trials is encouraged. Get detailed information about these tumors in this clinician summary.
ERIC Educational Resources Information Center
Barker, Lauren N.; Ziino, Carlo
2010-01-01
This study aimed to produce indicators and guidelines for clinician use in determining whether individual therapy sessions for community rehabilitation services should be delivered in a home/community-based setting or centre-based setting within a flexible service delivery model. Concept mapping techniques as described by Trochim and Kane (2005)…
Determination of As in tree-rings of poplar (Populus alba L.) by U-shaped DC arc.
Marković, D M; Novović, I; Vilotić, D; Ignjatović, Lj
2009-04-01
An argon-stabilized U-shaped DC arc with a system for aerosol introduction was used for determination of As in poplar (Populus alba L.) tree-rings. After optimization of the operating parameters and selection of the most appropriate signal integration time (30 s), the limit of detection for As was reduced to 15.0 ng/mL. This detection limit obtained with the optimal integration time was compared with those for other methods: inductively coupled plasma-atomic emission spectrometry (ICP-AES), direct coupled plasma-atomic emission spectrometry (DCP-AES), microwave induced plasma-atomic emission spectrometry (MIP-AES) and improved thermospray flame furnace atomic absorption spectrometry (TS-FF-AAS). Arsenic is a toxic trace element that can adversely affect plant, animal and human health. As an indicator of environmental pollution, we collected poplar tree-rings from two locations. The first area was close to the "Nikola Tesla" (TENT-A) power plant, Obrenovac, while the other was in the urban area of Novi Sad. In all cases elevated average concentrations of As were registered in poplar tree-rings from the Obrenovac location.
NASA Astrophysics Data System (ADS)
Kumar, Rakesh; Chandrawat, Rajesh Kumar; Garg, B. P.; Joshi, Varun
2017-07-01
Opening a new firm or branch with the desired performance is closely related to the facility location problem. When locating new ambulances and firehouses, for example, the government seeks to minimize the average emergency response time for all residents of a city. Finding the best location is therefore a key practical challenge. Such problems are known as facility location problems, and many algorithms have been developed to handle them. In this paper, we review five algorithms that have been applied to facility location problems. The significance of clustering in facility location problems is also presented. First, we compare the fuzzy c-means clustering (FCM) algorithm with the alternating heuristic (AH) algorithm, and then with particle swarm optimization (PSO) algorithms using different types of distance functions. The data are clustered with the help of FCM, and the median model and the min-max problem model are then applied to the clustered data. After finding optimized locations using these algorithms, we compute the distance from each optimized location to the demand points using different distance measures and compare the results. Finally, we design a general example to validate the feasibility of the five algorithms for facility location optimization and to assess their advantages and drawbacks.
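As an illustration of the clustering step, the sketch below runs a minimal fuzzy c-means on synthetic demand points and evaluates the resulting centres as candidate facility sites by the mean distance to the nearest centre; the number of clusters, fuzzifier and data are assumptions.

```python
# Minimal fuzzy c-means sketch; cluster centres are then treated as candidate
# facility locations and scored by distance to the demand points.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                               # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True) # cluster centres
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=0)
    return V, U

rng = np.random.default_rng(5)
demand = rng.uniform(0, 100, size=(200, 2))          # demand point coordinates
centres, _ = fuzzy_c_means(demand, c=4)
nearest = np.linalg.norm(demand[:, None] - centres[None], axis=2).min(axis=1)
print("mean distance to nearest facility:", nearest.mean())
```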
Using geostatistics to evaluate cleanup goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcon, M.F.; Hopkins, L.P.
1995-12-01
Geostatistical analysis is a powerful predictive tool typically used to define spatial variability in environmental data. The information from a geostatistical analysis using kriging, a geostatistical tool, can be taken a step further to optimize sampling location and frequency and help quantify sampling uncertainty in both the remedial investigation and remedial design at a hazardous waste site. Geostatistics were used to quantify sampling uncertainty in attainment of a risk-based cleanup goal and determine the optimal sampling frequency necessary to delineate the horizontal extent of impacted soils at a Gulf Coast waste site.
NASA Astrophysics Data System (ADS)
Li, Min; Yuan, Yunbin; Zhang, Baocheng; Wang, Ningbo; Li, Zishen; Liu, Xifeng; Zhang, Xiao
2018-02-01
The ionosphere effective height (IEH) is a very important parameter in total electron content (TEC) measurements under the widely used single-layer model assumption. To overcome the requirement of a large amount of simultaneous vertical and slant ionospheric observations or dense "coinciding" pierce points data, a new approach comparing the converted vertical TEC (VTEC) value using mapping function based on a given IEH with the "ground truth" VTEC value provided by the combined International GNSS Service Global Ionospheric Maps is proposed for the determination of the optimal IEH. The optimal IEH in the Chinese region is determined using three different methods based on GNSS data. Based on the ionosonde data from three different locations in China, the altitude variation of the peak electron density (hmF2) is found to have clear diurnal, seasonal and latitudinal dependences, and the diurnal variation of hmF2 varies from approximately 210 to 520 km in Hainan. The determination of the optimal IEH employing the inverse method suggested by Birch et al. (Radio Sci 37, 2002. doi: 10.1029/2000rs002601) did not yield a consistent altitude in the Chinese region. Tests of the method minimizing the mapping function errors suggested by Nava et al. (Adv Space Res 39:1292-1297, 2007) indicate that the optimal IEH ranges from 400 to 600 km, and the height of 450 km is the most frequent IEH at both high and low solar activities. It is also confirmed that the IEH of 450-550 km is preferred for the Chinese region instead of the commonly adopted 350-450 km using the determination method of the optimal IEH proposed in this paper.
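For reference, the conversion being tuned here is the standard thin-shell (single-layer) mapping function, a textbook form rather than anything specific to this paper:

$$
\mathrm{VTEC} = \frac{\mathrm{STEC}}{M(e)}, \qquad
M(e) = \frac{1}{\cos z'}, \qquad
\sin z' = \frac{R_E}{R_E + H}\,\cos e,
$$

where $e$ is the satellite elevation angle, $z'$ is the zenith angle at the ionospheric pierce point, $R_E$ is the Earth radius, and $H$ is the assumed ionosphere effective height. The explicit dependence of $M(e)$ on $H$ is exactly why the converted VTEC, and hence the comparison against the GIM "ground truth", is sensitive to the choice of IEH.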
Visual-search models for location-known detection tasks
NASA Astrophysics Data System (ADS)
Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.
2017-03-01
Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.
Choosing Sensor Configuration for a Flexible Structure Using Full Control Synthesis
NASA Technical Reports Server (NTRS)
Lind, Rick; Nalbantoglu, Volkan; Balas, Gary
1997-01-01
Optimal locations and types for feedback sensors which meet design constraints and control requirements are difficult to determine. This paper introduces an approach to choosing a sensor configuration based on Full Control synthesis. A globally optimal Full Control compensator is computed for each member of a set of sensor configurations which are feasible for the plant. The sensor configuration associated with the Full Control system achieving the best closed-loop performance is chosen for feedback measurements to an output feedback controller. A flexible structure is used as an example to demonstrate this procedure. Experimental results show sensor configurations chosen to optimize the Full Control performance are effective for output feedback controllers.
NASA Astrophysics Data System (ADS)
Feyen, Luc; Gorelick, Steven M.
2005-03-01
We propose a framework that combines simulation optimization with Bayesian decision analysis to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas. A stochastic simulation optimization management model is employed to plan regionally distributed groundwater pumping while preserving the hydroecological balance in wetland areas. Because predictions made by an aquifer model are uncertain, groundwater supply systems operate below maximum yield. Collecting data from the groundwater system can potentially reduce predictive uncertainty and increase safe water production. The price paid for improvement in water management is the cost of collecting the additional data. Efficient data collection using Bayesian decision analysis proceeds in three stages: (1) The prior analysis determines the optimal pumping scheme and profit from water sales on the basis of known information. (2) The preposterior analysis estimates the optimal measurement locations and evaluates whether each sequential measurement will be cost-effective before it is taken. (3) The posterior analysis then revises the prior optimal pumping scheme and consequent profit, given the new information. Stochastic simulation optimization employing a multiple-realization approach is used to determine the optimal pumping scheme in each of the three stages. The cost of new data must not exceed the expected increase in benefit obtained in optimal groundwater exploitation. An example based on groundwater management practices in Florida aimed at wetland protection showed that the cost of data collection more than paid for itself by enabling a safe and reliable increase in production.
Le Bras, A; Raoult, H; Ferré, J-C; Ronzière, T; Gauvrit, J-Y
2015-06-01
Identifying occlusion location is crucial for determining the optimal therapeutic strategy during the acute phase of ischemic stroke. The purpose of this study was to assess the diagnostic efficacy of MR imaging, including conventional sequences plus time-resolved contrast-enhanced MRA in comparison with DSA for identifying arterial occlusion location. Thirty-two patients with 34 occlusion levels referred for thrombectomy during acute cerebral stroke events were consecutively included from August 2010 to December 2012. Before thrombectomy, we performed 3T MR imaging, including conventional 3D-TOF and gradient-echo T2 sequences, along with time-resolved contrast-enhanced MRA of the extra- and intracranial arteries. The 3D-TOF, gradient-echo T2, and time-resolved contrast-enhanced MRA results were consensually assessed by 2 neuroradiologists and compared with prethrombectomy DSA results in terms of occlusion location. The Wilcoxon test was used for statistical analysis to compare MR imaging sequences with DSA, and the κ coefficient was used to determine intermodality agreement. The occlusion level on the 3D-TOF and gradient-echo T2 images differed significantly from that of DSA (P < .001 and P = .002, respectively), while no significant difference was observed between DSA and time-resolved contrast-enhanced MRA (P = .125). κ coefficients for intermodality agreement with DSA (95% CI; percentage agreement) were 0.43 (0.3-0.6; 62%), 0.32 (0.2-0.5; 56%), and 0.81 (0.6-1.0; 88%) for 3D-TOF, gradient-echo T2, and time-resolved contrast-enhanced MRA, respectively. The time-resolved contrast-enhanced MRA sequence proved reliable for identifying occlusion location in acute stroke with performance superior to that of 3D-TOF and gradient-echo T2 sequences. © 2015 by American Journal of Neuroradiology.
Lotfian, Reza; Najafi, Mehdi
2018-02-26
Background: Every year, many mining accidents occur in underground mines all over the world, resulting in the death and maiming of many miners and heavy financial losses to mining companies. Underground mining accounts for an increasing share of these events due to its special circumstances and the risks of working therein. Thus, the optimal location of emergency stations within the network of an underground mine, in order to provide medical first aid and transport injured people at the right time, plays an essential role in reducing deaths and disabilities caused by accidents. Objective: The main objective of this study is to determine the locations of emergency stations (ES) within the network of an underground coal mine in order to minimize the outreach time for the injured. Methods: A three-objective mathematical model is presented for ES facility location selection and the allocation of facilities to the injured in various stopes. Results: Taking into account the radius of influence of each ES, the proposed model is capable of reducing the maximum time for the provision of emergency services in the event of an accident at each stope. In addition, the coverage or lack of coverage of each stope by any of the emergency facilities is determined by means of the Floyd-Warshall algorithm on the mine network graph. To solve the problem, a global criterion method implemented in the GAMS software is used to evaluate the accuracy and efficiency of the model. Conclusions: Seven locations were selected from among 46 candidates for the establishment of emergency facilities in the Tabas underground coal mine. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
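The coverage check mentioned in the Results can be sketched as all-pairs shortest travel times from the Floyd-Warshall algorithm followed by a radius test; the toy mine graph, station set and response radius below are assumptions, not data from the Tabas mine.

```python
# Sketch of the network step only: Floyd-Warshall all-pairs travel times on
# the mine graph, then a coverage check for a candidate set of emergency
# stations against a response-radius limit.
import numpy as np

INF = np.inf

def floyd_warshall(W):
    """W[i, j] = edge travel time (INF if no drift between i and j)."""
    D = W.copy()
    for k in range(len(D)):
        D = np.minimum(D, D[:, k, None] + D[None, k, :])
    return D

W = np.array([[0,   4,   INF, INF, 7  ],
              [4,   0,   3,   INF, INF],
              [INF, 3,   0,   2,   INF],
              [INF, INF, 2,   0,   5  ],
              [7,   INF, INF, 5,   0  ]], dtype=float)

D = floyd_warshall(W)
stations, stopes, radius = [1, 4], [0, 2, 3], 6.0
covered = {s: bool((D[stations, s] <= radius).any()) for s in stopes}
print(covered)
```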
NASA Technical Reports Server (NTRS)
Sechkar, Edward A.; Stueber, Thomas J.; Rutledge, Sharon K.
2000-01-01
Atomic oxygen generated in ground-based research facilities has been used to not only test erosion of candidate spacecraft materials but as a noncontact technique for removing organic deposits from the surfaces of artwork. NASA has patented the use of atomic oxygen to remove carbon-based soot contamination from fire-damaged artwork. The process of cleaning soot-damaged paintings with atomic oxygen requires exposures for variable lengths of time, dependent on the condition of a painting. Care must be exercised while cleaning to prevent the removal of pigment. The cleaning process must be stopped as soon as visual inspection or surface reflectance measurements indicate that cleaning is complete. Both techniques rely on optical comparisons of known bright locations against known dark locations on the artwork being cleaned. Difficulties arise with these techniques when either a known bright or dark location cannot be determined readily. Furthermore, dark locations will lighten with excessive exposure to atomic oxygen. Therefore, an automated test instrument to quantitatively characterize cleaning progression was designed and developed at the NASA Glenn Research Center at Lewis Field to determine when atomic oxygen cleaning is complete.
Use of multilevel modeling for determining optimal parameters of heat supply systems
NASA Astrophysics Data System (ADS)
Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.
2017-07-01
The problem of finding the optimal parameters of a heat-supply system (HSS) consists in ensuring the required throughput capacity of a heat network by determining pipeline diameters and the characteristics and locations of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of used equipment items and methods of their construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; the SOSNA software system for determining optimum parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems. The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in St. Petersburg, the city of Bratsk, and the Magistral'nyi settlement.
The optimal location of piezoelectric actuators and sensors for vibration control of plates
NASA Astrophysics Data System (ADS)
Kumar, K. Ramesh; Narayanan, S.
2007-12-01
This paper considers the optimal placement of collocated piezoelectric actuator-sensor pairs on a thin plate using a model-based linear quadratic regulator (LQR) controller. LQR performance is taken as the objective for finding the optimal locations of sensor-actuator pairs. The problem is formulated using the finite element method (FEM) as a multi-input multi-output (MIMO) control model. The discrete optimal sensor and actuator location problem is formulated in the framework of a zero-one optimization problem. A genetic algorithm (GA) is used to solve the zero-one optimization problem. Different classical control strategies, such as direct proportional feedback, constant-gain negative velocity feedback and the LQR optimal control scheme, are applied to study the control effectiveness.
Impulsive time-free transfers between halo orbits
NASA Astrophysics Data System (ADS)
Hiday, L. A.; Howell, K. C.
1992-08-01
A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.
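The endpoint-slope test referred to here follows from the standard primer vector necessary conditions for an optimal impulsive transfer; in commonly used notation (assumed here, not taken from the paper):

$$
|\mathbf{p}(t)| \le 1 \ \ \forall t, \qquad
|\mathbf{p}(t_k)| = 1 \ \text{and}\ \hat{\mathbf{p}}(t_k) = \frac{\Delta\mathbf{v}_k}{\lVert\Delta\mathbf{v}_k\rVert} \ \text{at each impulse } t_k, \qquad
\left.\frac{d|\mathbf{p}|}{dt}\right|_{t_0} = \left.\frac{d|\mathbf{p}|}{dt}\right|_{t_f} = 0,
$$

where $\mathbf{p}(t)$ is the primer vector (the adjoint of the velocity states). A nonzero slope of $|\mathbf{p}|$ at the initial or final time indicates that a coast in the corresponding halo orbit would lower the total characteristic velocity.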
Impulsive Time-Free Transfers Between Halo Orbits
NASA Astrophysics Data System (ADS)
Hiday-Johnston, L. A.; Howell, K. C.
1996-12-01
A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.
Near-optimal energy transitions for energy-state trajectories of hypersonic aircraft
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Bowles, J. V.; Terjesen, E. J.; Whittaker, T.
1992-01-01
A problem of the instantaneous energy transition that occurs in energy-state approximation is considered. The transitions are modeled as a sequence of two load-factor bounded paths (either climb-dive or dive-climb). The boundary-layer equations associated with the energy-state dynamic model are analyzed to determine the precise location of the transition.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1994-01-01
Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.
Power flow analysis and optimal locations of resistive type superconducting fault current limiters.
Zhang, Xiuchang; Ruiz, Harold S; Geng, Jianzhao; Shen, Boyang; Fu, Lin; Zhang, Heng; Coombs, Tim A
2016-01-01
Based on conventional approaches for the integration of resistive-type superconducting fault current limiters (SFCLs) on electric distribution networks, SFCL models largely rely on the insertion of a step or exponential resistance that is determined by a predefined quenching time. In this paper, we expand the scope of the aforementioned models by considering the actual behaviour of an SFCL in terms of the temperature-dependent power-law relation between the electric field and the current density that is characteristic of high-temperature superconductors. Our results are compared to the step-resistance models for the sake of discussion and clarity of the conclusions. Both SFCL models were integrated into a power system model built according to the UK power standard, to study the impact of these protection strategies on the performance of the overall electricity network. As a representative renewable energy source, a 90 MVA wind farm was considered for the simulations. Three fault conditions were simulated, and the figures for the fault current reduction predicted by both fault current limiting models have been compared in terms of multiple current measuring points and allocation strategies. Consequently, we have shown that the incorporation of the E-J characteristics and thermal properties of the superconductor at the simulation level of electric power systems is crucial for estimations of reliability and determining the optimal locations of resistive-type SFCLs in distributed power networks. Our results may help decision making by distribution network operators regarding investment and promotion of SFCL technologies, as it is possible to determine the maximum number of SFCLs necessary to protect against different fault conditions at multiple locations.
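The superconductor description referred to here is, in the usual formulation, the E-J power law with temperature-dependent parameters (standard textbook form shown; the paper's exact thermal coupling is not reproduced here):

$$
E(J,T) = E_c \left(\frac{J}{J_c(T)}\right)^{n(T)},
$$

where $E_c$ is the electric-field criterion (commonly 1 µV/cm), $J_c(T)$ is the temperature-dependent critical current density, and $n(T)$ is the flux-creep exponent. The SFCL resistance then develops dynamically from the local $E/J$ ratio and the thermal balance of the conductor, rather than being inserted as a predefined step at a fixed quench time.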
Varga, Peter; Inzana, Jason A; Schwiedrzik, Jakob; Zysset, Philippe K; Gueorguiev, Boyko; Blauth, Michael; Windolf, Markus
2017-05-01
High incidence and increased mortality related to secondary, contralateral proximal femoral fractures may justify invasive prophylactic augmentation that reinforces the osteoporotic proximal femur to reduce fracture risk. Bone cement-based approaches (femoroplasty) may deliver the required strengthening effect; however, the significant variation in the results of previous studies calls for a systematic analysis and optimization of this method. Our hypothesis was that efficient generalized augmentation strategies can be identified via computational optimization. This study investigated, by means of finite element analysis, the effect of cement location and volume on the biomechanical properties of fifteen proximal femora in sideways fall. Novel cement cloud locations were developed using the principles of bone remodeling and compared to the "single central" location that was previously reported to be optimal. The new augmentation strategies provided significantly greater biomechanical benefits compared to the "single central" cement location. Augmenting with approximately 12 ml of cement in the newly identified location achieved increases of 11% in stiffness, 64% in yield force, 156% in yield energy and 59% in maximum force, on average, compared to the non-augmented state. The weaker bones experienced a greater biomechanical benefit from augmentation than stronger bones. The effect of cement volume on the biomechanical properties was approximately linear. Results of the "single central" model showed good agreement with previous experimental studies. These findings indicate enhanced potential of cement-based prophylactic augmentation using the newly developed cementing strategy. Future studies should determine the required level of strengthening and confirm these numerical results experimentally. Copyright © 2017 Elsevier Ltd. All rights reserved.
Piezoelectric actuation of helicopter rotor blades
NASA Astrophysics Data System (ADS)
Lieven, Nicholas A. J.
2001-07-01
The work presented in this paper is concerned with the application of embedded piezo-electric actuators in model helicopter rotor blades. The paper outlines techniques to define the optimal location of actuators to excite particular modes of vibration whilst the blade is rotating. Using composite blades the distribution of strain energy is defined using a Finite Element model with imposed rotor-dynamic and aerodynamics loads. The loads are specified through strip theory to determine the position of maximum bending moment and thus the optimal location of the embedded actuators. The effectiveness of the technique is demonstrated on a 1/4 scale fixed cyclic pitch rotor head. Measurement of the blade displacement is achieved by using strain gauges. In addition a redundant piezo-electric actuator is used to measure the blades' response characteristics. The addition of piezo-electric devices in this application has been shown to exhibit adverse aeroelastic effects, such as counter mass balancing and increased drag. Methods to minimise these effects are suggested. The outcome of the paper is a method for defining the location and orientation of piezo-electric devices in rotor-dynamic applications.
NASA Astrophysics Data System (ADS)
Elbanna, Ahmed; Peetz, Darin
Bone is classically considered to be a self-optimizing structure in accordance with Wolff's law. However, while the structure's ability to adapt to changing stress patterns has been well documented, whether it is fully optimal for compliance is less certain (Sigmund, 2002). Given the complexity of many biological systems, it is expected that this structure serves several purposes. We present a multi-objective topology optimization formulation for trabecular bone in the human body at two locations: the vertebrae and the femur. We account for the effect of different conflicting objectives such as maximization of stiffness, maximization of surface area, and minimization of buckling susceptibility. Our formulation enables us to determine the relative role of each of these objectives in optimizing the structure. Moreover, it provides an opportunity to explore what structural features must evolve to meet a given objective, features that may have been absent otherwise. For example, inclusion of stability considerations introduces numerous horizontal and diagonal members in the topology in the case of human vertebrae under vertical loading. However, stability is found to play a lesser role in the case of the femur bone optimization. Our formulation enables investigation of bone adaptation at different locations of the body as well as under different loading and boundary conditions (e.g. healthy and diseased discs for the case of the spine). We discuss the implications of our findings on developing design rules for bio-inspired and bio-mimetic architectured materials. National Science Foundation: CMMI.
Optimal deployment of thermal energy storage under diverse economic and climate conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeForest, Nicholas; Mendes, Gonçalo; Stadler, Michael
2014-04-01
This paper presents an investigation of the economic benefit of thermal energy storage (TES) for cooling, across a range of economic and climate conditions. Chilled water TES systems are simulated for a large office building in four distinct locations: Miami in the U.S.; Lisbon, Portugal; Shanghai, China; and Mumbai, India. Optimal system size and operating schedules are determined using the optimization model DER-CAM, such that total cost, including electricity and amortized capital costs, is minimized. The economic impact of each optimized TES system is then compared to systems sized using a simple heuristic method, which bases system size on a fraction (50% and 100%) of total on-peak summer cooling loads. Results indicate that TES systems of all sizes can be effective in reducing annual electricity costs (5%-15%) and peak electricity consumption (13%-33%). The investigation also identifies a number of criteria which drive TES investment, including low capital costs, electricity tariffs with high power demand charges and prolonged cooling seasons. In locations where these drivers clearly exist, the heuristically sized systems capture much of the value of optimally sized systems; between 60% and 100% in terms of net present value. However, in instances where these drivers are less pronounced, the heuristic tends to oversize systems, and optimization becomes crucial to ensure economically beneficial deployment of TES, increasing the net present value of heuristically sized systems by as much as 10 times in some instances.
Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation
NASA Astrophysics Data System (ADS)
Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari
2016-07-01
In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance levels. Due to the variations of the maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for the given number of PV modules, the optimal size and operating conditions of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and the PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
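A generic particle swarm loop of the kind used here is sketched below; the inertia and acceleration coefficients, the two decision variables and the placeholder objective are assumptions standing in for the PV/EL hydrogen-production model.

```python
# Generic particle swarm optimisation sketch (assumed coefficients); the
# objective is a placeholder, not the actual PV/electrolyser model.
import numpy as np

def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                   # keep particles in bounds
        vals = np.array([objective(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

# Placeholder decision variables, e.g. (electrolyser cells in series, cell area).
lo, hi = np.array([1.0, 0.01]), np.array([60.0, 0.1])
best_x, best_val = pso(lambda p: -((p[0] - 30) ** 2) - 1000 * (p[1] - 0.05) ** 2,
                       (lo, hi))
print(best_x, best_val)
```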
Duodenal and jejunal Dieulafoy’s lesions: optimal management
Yılmaz, Tonguç Utku; Kozan, Ramazan
2017-01-01
Dieulafoy’s lesions (DLs) are rare and cause gastrointestinal bleeding resulting from erosion of dilated submucosal vessels. The most common location for DL is the stomach, followed by the duodenum. There is little information about duodenal and jejunal DLs. Challenges in the diagnosis and treatment of Dieulafoy’s lesions include the rare nature of the disease, asymptomatic patients, bleeding symptoms often requiring rapid diagnosis and treatment in symptomatic patients, variability in the diagnosis and treatment methods resulting from different lesion locations, and the risk of re-bleeding. For these reasons, there is no universal consensus about the diagnosis and treatment approach. Only a few case reports and case series have been published recently. Most duodenal DLs are not evaluated separately in these studies, which makes it difficult to determine the optimal management approach. In this study, we summarize the general aspects and recent approaches used to treat duodenal DL. PMID:29158686
NASA Astrophysics Data System (ADS)
Masternak, Tadeusz J.
This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three degree of freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.
Bayesian Spatial Design of Optimal Deep Tubewell Locations in Matlab, Bangladesh.
Warren, Joshua L; Perez-Heydrich, Carolina; Yunus, Mohammad
2013-09-01
We introduce a method for statistically identifying the optimal locations of deep tubewells (dtws) to be installed in Matlab, Bangladesh. Dtw installations serve to mitigate exposure to naturally occurring arsenic found at groundwater depths less than 200 meters, a serious environmental health threat for the population of Bangladesh. We introduce an objective function, which incorporates both arsenic level and nearest town population size, to identify optimal locations for dtw placement. Assuming complete knowledge of the arsenic surface, we then demonstrate how minimizing the objective function over a domain favors dtws placed in areas with high arsenic values and close to largely populated regions. Given only a partial realization of the arsenic surface over a domain, we use a Bayesian spatial statistical model to predict the full arsenic surface and estimate the optimal dtw locations. The uncertainty associated with these estimated locations is correctly characterized as well. The new method is applied to a dataset from a village in Matlab and the estimated optimal locations are analyzed along with their respective 95% credible regions.
Design, optimization, and analysis of a self-deploying PV tent array
NASA Astrophysics Data System (ADS)
Collozza, Anthony J.
1991-06-01
A tent-shaped PV array was designed and the design was optimized for maximum specific power. In order to minimize output power variation, a tent angle of 60 deg was chosen. Based on the chosen tent angle, an array structure was designed. The design considerations were minimal deployment time, high reliability, and small stowage volume. To meet these considerations, the array was designed to be self-deployable, to form a compact storage configuration, and to use a passive pressurized-gas deployment mechanism. Each structural component of the design was analyzed to determine the size necessary to withstand the various forces to which it would be subjected. Through this analysis the component weights were determined. An optimization was performed to determine the array dimensions and blanket geometry that produce the maximum specific power for a given PV blanket. This optimization was performed for both lunar and Martian environmental conditions. Other factors, such as PV blanket type, structural material, and wind velocity (for the Mars array), were varied to determine their influence on the design point. The performance specifications for the array at both locations and with each type of PV blanket were determined. These specifications were calculated using an aramid-fiber composite as the structural material. The four PV blanket types considered were silicon, GaAs/Ge, GaAs CLEFT, and amorphous silicon. The specifications used for each blanket represented either present-day or near-term technology. For both the Moon and Mars, the amorphous silicon arrays produced the highest specific power.
NASA Astrophysics Data System (ADS)
Walsh, Braden; Jolly, Arthur; Procter, Jonathan
2017-04-01
Using active seismic sources on Tongariro Volcano, New Zealand, the amplitude source location (ASL) method is calibrated and optimized through a series of sensitivity tests. Applying a geologic medium velocity of 1500 m/s and an attenuation value of Q=60 for surface waves, along with amplification factors computed from regional earthquakes, the ASL method produced location discrepancies larger than 1.0 km horizontally and up to 0.5 km in depth. Through sensitivity tests on the input parameters, we show that the velocity and attenuation models have moderate to strong influences on the location results, but can be easily constrained. Changes in locations are accommodated through either lateral or depth movements. Station corrections (amplification factors) and station geometry strongly affect the ASL locations both horizontally and in depth. Calibrating the amplification factors by exploiting the active seismic source events reduced location errors for the sources by up to 50%.
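For context, ASL studies typically invert a far-field amplitude decay model of the following form; this is the standard parameterisation consistent with the velocity and Q values quoted above, not necessarily the exact one used in this work.

```latex
A_i = A_0\, S_i\, \frac{e^{-B r_i}}{r_i}, \qquad B = \frac{\pi f}{Q\, v},
```

where A_i is the amplitude observed at station i, r_i the source-station distance, S_i the station amplification factor, f the dominant frequency, Q the attenuation quality factor, and v the wave velocity; the source position is usually found by a grid search minimising the misfit between observed and predicted amplitudes (or amplitude ratios).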
Optimizing Site Locations for Determining Shape from Photometric Light Curves
2009-09-01
NASA Astrophysics Data System (ADS)
Hao, Qichen; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Huang, Linxian
2018-05-01
An optimization approach is used for the operation of groundwater artificial recharge systems in an alluvial fan in Beijing, China. The optimization model incorporates a transient groundwater flow model, which allows for simulation of the groundwater response to artificial recharge. The facilities' operation with regard to recharge rates is formulated as a nonlinear programming problem to maximize the volume of surface water recharged into the aquifers under specific constraints. This optimization problem is solved by the parallel genetic algorithm (PGA) based on OpenMP, which could substantially reduce the computation time. To solve the PGA with constraints, the multiplicative penalty method is applied. In addition, the facilities' locations are implicitly determined on the basis of the results of the recharge-rate optimizations. Two scenarios are optimized and the optimal results indicate that the amount of water recharged into the aquifers will increase without exceeding the upper limits of the groundwater levels. Optimal operation of this artificial recharge system can also contribute to the more effective recovery of the groundwater storage capacity.
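One common way to apply a multiplicative penalty inside a genetic algorithm of this kind is sketched below. It is illustrative only: simulate_heads() stands in for the transient groundwater flow model, and the specific penalty form (one factor per violated head limit) is an assumption, not the exact scheme of the paper.

```python
def penalized_fitness(recharge_rates, simulate_heads, surface_water_volume,
                      head_limits, alpha=2.0):
    """Multiplicative-penalty fitness for a GA maximising recharged volume.

    Each violated groundwater-level limit multiplies the fitness by a factor
    below one, so infeasible schedules are progressively discounted.
    """
    heads = simulate_heads(recharge_rates)           # groundwater response to the schedule
    volume = surface_water_volume(recharge_rates)    # objective to maximise
    penalty = 1.0
    for h, h_max in zip(heads, head_limits):
        if h > h_max:                                # upper groundwater-level limit exceeded
            penalty *= 1.0 / (1.0 + alpha * (h - h_max) / h_max)
    return volume * penalty                          # value the GA maximises
```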
Optimal Low Energy Earth-Moon Transfers
NASA Technical Reports Server (NTRS)
Griesemer, Paul Ricord; Ocampo, Cesar; Cooley, D. S.
2010-01-01
The optimality of a low-energy Earth-Moon transfer is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the ballistic lunar capture trajectory is examined to determine whether one or more additional impulses may improve on the cost of the transfer.
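For reference, the classical necessary conditions on the primer vector p(t) (Lawden's conditions) that such optimal impulsive transfers must satisfy can be summarised as follows.

```latex
\begin{aligned}
&\text{(i)}\;\; \mathbf{p}(t)\ \text{and}\ \dot{\mathbf{p}}(t)\ \text{are continuous everywhere;}\\
&\text{(ii)}\;\; \lVert\mathbf{p}(t)\rVert \le 1\ \text{throughout, with}\ \lVert\mathbf{p}(t_k)\rVert = 1\ \text{at each impulse time}\ t_k;\\
&\text{(iii)}\;\; \Delta\mathbf{v}_k \parallel \mathbf{p}(t_k)\ \text{(each impulse is applied along the primer);}\\
&\text{(iv)}\;\; \tfrac{d}{dt}\lVert\mathbf{p}(t)\rVert = 0\ \text{at impulses interior to the trajectory.}
\end{aligned}
```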
Optimal design application on the advanced aeroelastic rotor blade
NASA Technical Reports Server (NTRS)
Wei, F. S.; Jones, R.
1985-01-01
The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The databases obtained from the rotorcraft flight simulation program C81 and the Myklestad mode shape program are analytically determined as a function of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. This method can also be utilized to ascertain the effect of a particular cost function which is composed of several objective functions with different weighting factors for various mission requirements, without any additional effort.
NASA Astrophysics Data System (ADS)
Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai
2017-07-01
Nowadays, organizations have to compete with different competitors at regional, national and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system which can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem follows four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two different parts, for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Also, different cost scenarios were designed to better analyze model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas the GAMS software failed to reach an optimal solution even within much longer times.
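As a rough illustration of how a two-part chromosome of this kind can be encoded, the sketch below separates mode-choice genes from a routing permutation; the gene semantics (three modes per arc, one retailer permutation split among open centres) are assumptions for illustration, not the paper's exact encoding.

```python
from dataclasses import dataclass, field
from typing import List
import random

@dataclass
class Chromosome:
    """Two-part chromosome loosely mirroring the structure described above.

    Part 1 encodes multimodal mode choices on the arcs between the supplier and
    candidate distribution centres; part 2 encodes the delivery tours from the
    open centres to retailers as a permutation.
    """
    mode_genes: List[int] = field(default_factory=list)     # per-arc mode choice (0=road, 1=rail, 2=sea)
    routing_genes: List[int] = field(default_factory=list)  # retailer permutation, split among open centres

def random_chromosome(n_arcs: int, n_retailers: int, n_modes: int = 3) -> Chromosome:
    retailers = list(range(n_retailers))
    random.shuffle(retailers)
    return Chromosome(
        mode_genes=[random.randrange(n_modes) for _ in range(n_arcs)],
        routing_genes=retailers,
    )
```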
Ideker, R E; Bandura, J P; Larsen, R A; Cox, J W; Keller, F W; Brody, D A
1975-01-01
Location of the equivalent cardiac dipole has been estimated but not fully verified in several laboratories. To test the accuracy of such a procedure, injury vectors were produced in 14 isolated, perfused rabbit hearts by epicardial searing. Strongly dipolar excitation fronts were produced in 6 additional hearts by left ventricular pacing. Twenty computer-processed signals, derived from surface electrodes on a spherical electrolyte-filled tank containing the test preparation, were optimally fitted with a locatable cardiac dipole that accounted for over 99% of the root-mean-square surface potential. For the 14 burns (mean radius 5.0 mm), the S-T injury dipole was located 3.4 plus or minus 0.7 (SD) mm from the burn center. For the 6 paced hearts, the dipole early in the ectopic beat was located 3.7 mm (range 2.6 to 4.6 mm) from the stimulating electrode. Phase inhomogeneities within the chamber appeared to have a small but predictable effect on dipole site determination. The study demonstrates that equivalent dipole location can be determined with acceptable accuracy from potential measurements of the external cardiac field.
Niskanen, Eini; Julkunen, Petro; Säisänen, Laura; Vanninen, Ritva; Karjalainen, Pasi; Könönen, Mervi
2010-08-01
Navigated transcranial magnetic stimulation (TMS) can be used to stimulate functional cortical areas at precise anatomical locations to induce measurable responses. The stimulation has commonly been focused on anatomically predefined motor areas: TMS of such an area elicits a measurable muscle response, the motor evoked potential. In clinical pathologies, however, the well-known homunculus somatotopy may not be straightforward, and the representation area of a muscle is not fixed. Traditionally, the anatomical locations of TMS stimulations have not been reported at the group level in standard space. This study describes a methodology for group-level analysis by investigating the normal representation areas of the thenar and anterior tibial muscles in the primary motor cortex. The optimal representation area for these muscles was mapped in 59 healthy right-handed subjects using navigated TMS. The coordinates of the optimal stimulation sites were then normalized into standard space to determine the representation areas of these muscles at the group level in healthy subjects. Furthermore, 95% confidence interval ellipsoids were fitted to the optimal stimulation site clusters to define the between-subject variation in optimal stimulation sites. The variation was found to be highest in the anteroposterior direction along the superior margin of the precentral gyrus. These results provide important normative information for clinical studies assessing changes in functional cortical areas due to plasticity of the brain. Furthermore, it is proposed that the presented methodology for studying TMS locations at the group level in standard space will be a suitable tool for research purposes in population studies.
Comparing population and incident data for optimal air ambulance base locations in Norway.
Røislien, Jo; van den Berg, Pieter L; Lindner, Thomas; Zakariassen, Erik; Uleberg, Oddvar; Aardal, Karen; van Essen, J Theresia
2018-05-24
Helicopter emergency medical services are important in many health care systems. Norway has a nationwide physician manned air ambulance service servicing a country with large geographical variations in population density and incident frequencies. The aim of the study was to compare optimal air ambulance base locations using both population and incident data. We used municipality population and incident data for Norway from 2015. The 428 municipalities had a median (5-95 percentile) of 4675 (940-36,264) inhabitants and 10 (2-38) incidents. Optimal helicopter base locations were estimated using the Maximal Covering Location Problem (MCLP) optimization model, exploring the number and location of bases needed to cover various fractions of the population for time thresholds 30 and 45 min, in green field scenarios and conditioned on the existing base structure. The existing bases covered 96.90% of the population and 91.86% of the incidents for time threshold 45 min. Correlation between municipality population and incident frequencies was -0.0027, and optimal base locations varied markedly between the two data types, particularly when lowering the target time. The optimal solution using population density data put focus on the greater Oslo area, where one third of Norwegians live, while using incident data put focus on low population high incident areas, such as northern Norway and winter sport resorts. Using population density data as a proxy for incident frequency is not recommended, as the two data types lead to different optimal base locations. Lowering the target time increases the sensitivity to choice of data.
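In its standard form, as typically applied in this kind of coverage analysis, the MCLP reads as follows, with d_i taken as either municipality population or incident count.

```latex
\begin{aligned}
\max \;& \sum_{i} d_i\, y_i \\
\text{s.t.} \;& \sum_{j \in N_i} x_j \ge y_i \quad \forall i, \qquad N_i = \{\, j : t_{ij} \le T \,\},\\
& \sum_{j} x_j = p, \qquad x_j,\, y_i \in \{0,1\},
\end{aligned}
```

where x_j = 1 if a base is opened at candidate site j, y_i = 1 if demand point i is covered within the response-time threshold T (30 or 45 min here), t_ij is the flight time from site j to point i, and p is the number of bases.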
Optimal location of radiation therapy centers with respect to geographic access.
Santibáñez, Pablo; Gaudet, Marc; French, John; Liu, Emma; Tyldesley, Scott
2014-07-15
To develop a framework with which to evaluate locations of radiation therapy (RT) centers in a region based on geographic access. Patient records were obtained for all external beam radiation therapy started in 2011 for the province of British Columbia, Canada. Two metrics of geographic access were defined. The primary analysis was percentage of patients (coverage) within a 90-minute drive from an RT center (C90), and the secondary analysis was the average drive time (ADT) to an RT center. An integer programming model was developed to determine optimal center locations, catchment areas, and capacity required under different scenarios. Records consisted of 11,096 courses of radiation corresponding to 161,616 fractions. Baseline geographic access was estimated at 102.5 minutes ADT (each way, per fraction) and 75.9% C90. Adding 2 and 3 new centers increased C90 to 88% and 92%, respectively, and decreased ADT by between 43% and 61%, respectively. A scenario in which RT was provided in every potential location that could support at least 1 fully utilized linear accelerator resulted in 35.3 minutes' ADT and 93.6% C90. The proposed framework and model provide a data-driven means to quantitatively evaluate alternative configurations of a regional RT system. Results suggest that the choice of location for future centers can significantly improve geographic access to RT. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Ouweneel, Dagmar M; Sjauw, Krischan D; Wiegerinck, Esther M A; Hirsch, Alexander; Baan, Jan; de Mol, Bas A J M; Lagrand, Wim K; Planken, R Nils; Henriques, José P S
2016-10-01
The use of intracardiac assist devices is expanding, and correct position of these devices is required for optimal functioning. The aortic valve is an important landmark for positioning of those devices. It would be of great value if the device position could be easily monitored on plain supine chest radiograph in the ICU. We introduce a ratio-based tool for determination of the aortic valve location on plain supine chest radiograph images, which can be used to evaluate intracardiac device position. Retrospective observational study. Large academic medical center. Patients admitted to the ICU and supported by an intracardiac assist device. We developed a ratio to determine the aortic valve location on supine chest radiograph images. This ratio is used to assess the position of a cardiac assist device and is compared with echocardiographic findings. Supine anterior-posterior chest radiographs of patients with an aortic valve prosthesis (n = 473) were analyzed to determine the location of the aortic valve. We calculated several ratios with the potential to determine the position of the aortic valve. The aortic valve location ratio, defined as the distance between the carina and the aortic valve, divided by the thoracic width, was found to be the best performing ratio. The aortic valve location ratio determines the location of the aortic valve caudal to the carina, at a distance of 0.25 ± 0.05 times the thoracic width for male patients and 0.28 ± 0.05 times the thoracic width for female patients. The aortic valve location ratio was validated using CT images of patients with angina pectoris without known valvular disease (n = 95). There was a good correlation between cardiac device position (Impella) assessed with the aortic valve location ratio and with echocardiography (n = 53). The aortic valve location ratio enables accurate and reproducible localization of the aortic valve on supine chest radiograph. This tool is easily applicable and can be used for assessment of cardiac device position in patients on the ICU.
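A minimal helper applying the reported ratios is sketched below; it uses only the mean values quoted in the abstract (0.25 for men, 0.28 for women) and does not model the ±0.05 spread.

```python
def aortic_valve_offset(thoracic_width_mm: float, sex: str) -> float:
    """Predicted craniocaudal distance (mm) from the carina to the aortic valve
    on a supine AP chest radiograph, using the mean aortic valve location ratios
    reported above. Illustrative helper only.
    """
    ratio = 0.25 if sex.lower().startswith("m") else 0.28
    return ratio * thoracic_width_mm

# e.g. a 300 mm thoracic width in a male patient places the valve
# roughly 75 mm caudal to the carina: aortic_valve_offset(300, "male") -> 75.0
```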
Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
BILISOLY, ROGER L.; MCKENNA, SEAN A.
2003-01-01
Previous work on sample design has focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the prediction error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects, given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects, covering 30 percent or less of the site, can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site, which can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here provide answers to the questions of "Where to sample?" and "When to stop?" and are capable of running in near real time to support iterative site characterization campaigns.
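A minimal sketch of the kind of calculation involved is given below: it computes the mean simple-kriging prediction-error variance over a reference grid for a candidate set of transect sample points. The covariance function, the simple-kriging form, and the 90-percent-reduction stopping criterion in the closing comment are assumptions for illustration, not the report's exact implementation.

```python
import numpy as np

def mpev(grid_xy, sample_xy, cov):
    """Mean prediction-error variance of a simple-kriging predictor on a
    reference grid, given sample locations (e.g. points discretising transects).

    `cov(h)` is an isotropic covariance function of separation distance h.
    """
    # covariance among samples, regularised slightly for numerical stability
    C = cov(np.linalg.norm(sample_xy[:, None, :] - sample_xy[None, :, :], axis=-1))
    C_inv = np.linalg.inv(C + 1e-10 * np.eye(len(sample_xy)))
    # covariance between each grid point and the samples
    c0 = cov(np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=-1))
    var = cov(0.0) - np.einsum("ij,jk,ik->i", c0, C_inv, c0)
    return var.mean()

# A design could be judged "good enough" (stopping rule) once mpev(...) has
# dropped by some target fraction of its pre-sampling value, e.g. 90 percent.
```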
Optimal Sampling to Provide User-Specific Climate Information.
NASA Astrophysics Data System (ADS)
Panturat, Suwanna
The types of weather-related world problems of socio-economic importance selected in this study as representative of three different levels of user groups include: (i) a regional problem concerned with air pollution plumes which lead to acid rain in the northeastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study is aimed at designing optimal sampling networks which are based on customer value systems and at abstracting from data sets the information which is most cost-effective in reducing the climate-sensitive aspects of a given user problem. Three process models are used in this study to interpret climate variability in terms of the variables of importance to the user: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production and (iii) the AGEHYD streamflow model selected as "a black box" for reservoir management. A state-of-the-art nonlinear programming (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of various sensors. Statistical quantities considered in determining sensor locations include Bayes risk, the chi-squared value, the probability of a Type I error (alpha), the probability of a Type II error (beta), and the noncentrality parameter delta^2. Moreover, the number of years required to detect a climate change resulting in a given bushel-per-acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of the ambient sulfur dioxide to less than a certain percent of the mean is found; and finally the policy of maintaining pre-storm flood pools at selected levels is examined given information from the optimal sampling network as defined by the study.
Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A
2018-05-28
To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
Conceptual design and multidisciplinary optimization of in-plane morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku; Sanders, Brian P.; Joo, James J.
2006-03-01
In this paper, the topology optimization methodology for the synthesis of a distributed actuation system, with specific application to the morphing air vehicle, is discussed. The main emphasis is placed on the topology optimization problem formulations and the development of computational modeling concepts. For demonstration purposes, the in-plane morphing wing model is presented. The analysis model is developed to meet several important criteria: it must allow large rigid-body displacements, as well as variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Preliminary work has indicated that the proposed modeling concept meets these criteria and may be suitable for the purpose. Topology optimization is performed on the ground structure based on this modeling concept, with design variables that control the system configuration. In other words, the states of each element in the model are design variables, and they are to be determined through the optimization process. In effect, the optimization process assigns morphing members as 'soft' elements, non-morphing load-bearing members as 'stiff' elements, and non-existent members as 'voids.' In addition, the optimization process determines the location and relative force intensities of distributed actuators, represented computationally as equal and opposite nodal forces with soft axial stiffness. Several different optimization problem formulations are investigated to understand their potential benefits in terms of solution quality, as well as the meaningfulness of the formulation itself. Sample in-plane morphing problems are solved to demonstrate the potential capability of the methodology introduced in this paper.
ERIC Educational Resources Information Center
Wolford, George
Seven experiments were run to determine the precise nature of some of the variables which affect the processing of short-term visual information. In particular, retinal location, report order, processing order, lateral masking, and redundancy were studied along with the nature of the confusion errors which are made in the full report procedure.…
Effects of trap design and placement on capture of emerald ash borer, Agrilus planipennis
Joseph A. Francese; Jason B. Oliver; Ivich Fraser; Nadeer Youssef; David R. Lance; Damon J. Crook; Victor C. Mastro
2007-01-01
The ongoing objective of this research is to develop a trap that can improve the sensitivity and efficiency of emerald ash borer, Agrilus planipennis (Coleoptera: Buprestidae) Fairmaire (EAB) survey and aid the overall program in achieving its goals. As part of this work, we sought to determine the optimal location for trap placement. First we placed...
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B
2015-10-06
Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
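A much-simplified sketch of the segment-and-fit idea is given below; the tile size, the linear mean-versus-standard-deviation trend, and the k-sigma threshold are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def sft_threshold(img, seg=16, k=3.0):
    """Simplified Segment-and-Fit-Thresholding sketch.

    Splits the image into seg x seg tiles, uses the fitted trend between tile
    statistics to flag low-variability tiles as background, and sets the signal
    threshold from the pooled background statistics.
    """
    h, w = img.shape
    tiles = [img[i:i + seg, j:j + seg]
             for i in range(0, h - seg + 1, seg)
             for j in range(0, w - seg + 1, seg)]
    means = np.array([t.mean() for t in tiles])
    stds = np.array([t.std() for t in tiles])
    slope, intercept = np.polyfit(means, stds, 1)            # best-fit trend between tile statistics
    residual = stds - (slope * means + intercept)
    background = np.concatenate(
        [t.ravel() for t, r in zip(tiles, residual) if r <= 0])  # tiles below the trend ~ background
    thresh = background.mean() + k * background.std()
    return img > thresh                                       # boolean mask of signal pixels
```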
Exposure Time Optimization for Highly Dynamic Star Trackers
Wei, Xinguo; Tan, Wei; Li, Jian; Zhang, Guangjun
2014-01-01
Under highly dynamic conditions, the star-spots on the image sensor of a star tracker move across many pixels during the exposure time, which will reduce star detection sensitivity and increase star location errors. However, this kind of effect can be compensated well by setting an appropriate exposure time. This paper focuses on how exposure time affects the star tracker under highly dynamic conditions and how to determine the most appropriate exposure time for this case. Firstly, the effect of exposure time on star detection sensitivity is analyzed by establishing the dynamic star-spot imaging model. Then the star location error is deduced based on the error analysis of the sub-pixel centroiding algorithm. Combining these analyses, the effect of exposure time on attitude accuracy is finally determined. Some simulations are carried out to validate these effects, and the results show that there are different optimal exposure times for different angular velocities of a star tracker with a given configuration. In addition, the results of night sky experiments using a real star tracker agree with the simulation results. The summarized regularities in this paper should prove helpful in the system design and dynamic performance evaluation of the highly dynamic star trackers. PMID:24618776
Towards a Normalised 3D Geovisualisation: The Viewpoint Management
NASA Astrophysics Data System (ADS)
Neuville, R.; Poux, F.; Hallot, P.; Billen, R.
2016-10-01
This paper deals with viewpoint management in 3D environments, considering an allocentric environment. Recent advances in computer science and the growing number of affordable remote sensors have led to impressive improvements in 3D visualisation. Despite some research on the analysis of visual variables used in 3D environments, a true standardisation of 3D representation rules is still lacking. In this paper we study the "viewpoint" as the first parameter considered for a normalised visualisation of 3D data. Unlike in a 2D environment, the viewing direction in 3D is not fixed in a top-down direction. A non-optimal camera location means a poor 3D representation in terms of relayed information. Based on this statement, we propose a model, based on the analysis of the computational display pixels, that determines a viewpoint maximising the relayed information according to one kind of query. We developed an OpenGL prototype working on screen pixels that determines the optimal camera location based on a screen-pixel colour algorithm. Viewpoint management constitutes a first step towards a normalised 3D geovisualisation.
Lee, Minhyun; Koo, Choongwan; Hong, Taehoon; Park, Hyo Seon
2014-04-15
For an effective photovoltaic (PV) system, it is necessary to accurately determine the monthly average daily solar radiation (MADSR) and to develop an accurate MADSR map, which can simplify the decision-making process for selecting a suitable location for PV system installation. Therefore, this study aimed to develop a framework for mapping the MADSR using advanced case-based reasoning (CBR) and a geostatistical technique. The proposed framework consists of the following procedures: (i) the geographic scope for the mapping of the MADSR is set, and the measured MADSR and meteorological data in the geographic scope are collected; (ii) using the collected data, the advanced CBR model is developed; (iii) using the advanced CBR model, the MADSR at unmeasured locations is estimated; and (iv) by applying the measured and estimated MADSR data to a geographic information system, the MADSR map is developed. A practical validation was conducted by applying the proposed framework to South Korea. It was determined that the MADSR map developed through the proposed framework offers improved accuracy. The developed MADSR map can be used for estimating the MADSR at unmeasured locations and for determining the optimal location for PV system installation.
Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Burt, Adam
2014-01-01
In this paper, a unique system-level spacecraft design optimization is presented. A genetic algorithm is used to design the global pattern of the reinforcing structure, while a gradient routine is used to adequately stiffen the sub-structure. The system-level structural design includes determining the optimal physical location (and number) of reinforcing beams of a lunar pallet lander deck structure. Design of the substructure includes determining the placement of secondary stiffeners and the number of rivets required for assembly. In this optimization, several considerations are taken into account. The primary objective was to raise the primary natural frequencies of the structure such that the pallet lander primary structure does not significantly couple with the launch vehicle. A secondary objective is to determine how to properly stiffen the reinforcing beams so that the beam web resists the shear buckling load imparted by the spacecraft components mounted to the pallet lander deck during launch and landing. A third objective is that the calculated stress does not exceed the allowable strength of the material. These design requirements must be met while minimizing the overall mass of the spacecraft. The final paper will discuss how the optimization was implemented, as well as the results. While driven by optimization algorithms, the primary purpose of this effort was to demonstrate the capability of genetic algorithms to enable design automation in the preliminary design cycle. By developing a routine that can automatically generate designs through the use of finite element analysis, considerable design efficiencies, both in time and overall product, can be obtained over more traditional brute-force design methods.
A Framework for Optimizing the Placement of Tidal Turbines
NASA Astrophysics Data System (ADS)
Nelson, K. S.; Roberts, J.; Jones, C.; James, S. C.
2013-12-01
Power generation with marine hydrokinetic (MHK) current energy converters (CECs), often in the form of underwater turbines, is receiving growing global interest. Because of reasonable investment, maintenance, reliability, and environmental friendliness, this technology can contribute to national (and global) energy markets and is worthy of research investment. Furthermore, in remote areas, small-scale MHK energy from river, tidal, or ocean currents can provide a local power supply. However, little is known about the potential environmental effects of CEC operation in coastal embayments, estuaries, or rivers, or of the cumulative impacts of these devices on aquatic ecosystems over years or decades of operation. There is an urgent need for practical, accessible tools and peer-reviewed publications to help industry and regulators evaluate environmental impacts and mitigation measures, while establishing best siting and design practices. Sandia National Laboratories (SNL) and Sea Engineering, Inc. (SEI) have investigated the potential environmental impacts and performance of individual tidal energy converters (TECs) in Cobscook Bay, ME; TECs are a subset of CECs that are specifically deployed in tidal channels. Cobscook Bay is the first deployment location of Ocean Renewable Power Company's (ORPC) TidGen™ unit. One unit is currently in place, with four more to follow. Together, SNL and SEI built a coarse-grid, regional-scale model that included Cobscook Bay and all other landward embayments using the modeling platform SNL-EFDC. Within SNL-EFDC, tidal turbines are represented using a unique set of momentum extraction, turbulence generation, and turbulence dissipation equations at TEC locations. The global model was then coupled to a local-scale model centered on the proposed TEC deployment locations. An optimization framework was developed that used the refined model to determine optimal device placement locations that maximize array performance. Within the framework, environmental effects are considered to minimize the possibility of altering flows to an extent that would affect fish-swimming behavior and sediment-transport trends. Simulation results were compared between model runs with the optimized array configuration and the originally proposed deployment locations; the optimized array showed a 17% increase in power generation. The developed framework can provide regulators and developers with a tool for assessing environmental impacts and device-performance parameters for the deployment of MHK devices. The more thoroughly this promising technology is understood, the more likely it will become a viable source of alternative energy.
The Researches on Damage Detection Method for Truss Structures
NASA Astrophysics Data System (ADS)
Wang, Meng Hong; Cao, Xiao Nan
2018-06-01
This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
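One common way to define such an angle-based damage index between the undamaged and damaged frequency response function (FRF) vectors, consistent with the description above though not necessarily the authors' exact expression, is:

```latex
\theta_j = \arccos\!\left( \frac{\bigl|\langle \mathbf{H}^{u}_{j},\, \mathbf{H}^{d}_{j} \rangle\bigr|}
{\lVert \mathbf{H}^{u}_{j} \rVert\, \lVert \mathbf{H}^{d}_{j} \rVert} \right),
```

where H^u_j and H^d_j are the (principal-component-reduced) FRF vectors associated with measurement point j in the undamaged and damaged states; members adjacent to the points with the largest θ_j are flagged as damaged.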
Hastings, Mary K; Mueller, Michael J; Pilgram, Thomas K; Lott, Donovan J; Commean, Paul K; Johnson, Jeffrey E
2007-01-01
Standard prevention and treatment strategies to decrease peak plantar pressure include a total contact insert with a metatarsal pad, but no clear guidelines exist to determine optimal placement of the pad with respect to the metatarsal head. The purpose of this study was to determine the effect of metatarsal pad location on peak plantar pressure in subjects with diabetes mellitus and peripheral neuropathy. Twenty subjects with diabetes mellitus, peripheral neuropathy, and a history of forefoot plantar ulcers were studied (12 men and eight women, mean age=57+/-9 years). CT determined the position of the metatarsal pad relative to metatarsal head and peak plantar pressures were measured on subjects in three footwear conditions: extra-depth shoes and a 1) total contact insert, 2) total contact insert and a proximal metatarsal pad, and 3) total contact insert and a distal metatarsal pad. The change in peak plantar pressure between shoe conditions was plotted and compared to metatarsal pad position relative to the second metatarsal head. Compared to the total contact insert, all metatarsal pad placements between 6.1 mm to 10.6 mm proximal to the metatarsal head line resulted in a pressure reduction (average reduction=32+/-16%). Metatarsal pad placements between 1.8 mm distal and 6.1 mm proximal and between 10.6 mm proximal and 16.8 mm proximal to the metatarsal head line resulted in variable peak plantar pressure reduction (average reduction=16+/-21%). Peak plantar pressure increased when the metatarsal pad was located more than 1.8 mm distal to the metatarsal head line. Consistent peak plantar pressure reduction occurred when the metatarsal pad in this study was located between 6 to 11 mm proximal to the metatarsal head line. Pressure reduction lessened as the metatarsal pad moved outside of this range and actually increased if the pad was located too distal of this range. Computational models are needed to help predict optimal location of metatarsal pad with a variety of sizes, shapes, and material properties.
Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P
2018-02-20
The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine the estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city based on real-time traffic conditions. For 10/33 (30%) cities, there was discordance between the optimal IR site based on ETT and the optimal IR site based on ETD at non-rush hour or rush hour times. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03) and 28.80 min during rush hour (p < 0.001). Using a custom Google Maps application to schedule outpatients for IR procedures can effectively reduce patient travel time when more than one location providing IR procedures is available within the same hospital system.
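The core ETT/ETD comparison can be sketched as below against the public Distance Matrix web service; the endpoint behaviour is as documented by Google, but the site list, field choices, and minimal error handling here are illustrative assumptions rather than the hospital application's actual code.

```python
import requests

def best_ir_site(patient_addr, sites, api_key, metric="duration_in_traffic"):
    """Rank candidate IR sites for one patient by real-time travel time or distance.

    `sites` is a list of facility addresses; `metric` selects ETT
    ("duration_in_traffic", seconds) or ETD ("distance", metres).
    """
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={
            "origins": patient_addr,
            "destinations": "|".join(sites),
            "departure_time": "now",   # enables traffic-aware durations
            "key": api_key,
        },
        timeout=10,
    ).json()
    elements = resp["rows"][0]["elements"]
    key = "distance" if metric == "distance" else "duration_in_traffic"
    values = [e[key]["value"] for e in elements]
    best = min(range(len(sites)), key=values.__getitem__)
    return sites[best], values[best]
```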
Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard
2005-01-01
Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.
Gandhi, Saumil J; Liang, Xing; Ding, Xuanfeng; Zhu, Timothy C; Ben-Josef, Edgar; Plastaras, John P; Metz, James M; Both, Stefan; Apisarnthanarax, Smith
2015-01-01
Stereotactic body radiation therapy (SBRT) for treatment of liver tumors is often limited by liver dose constraints. Protons offer potential for more liver sparing, but clinical situations in which protons may be superior to photons are not well described. We developed and validated a treatment decision model to determine whether liver tumors of certain sizes and locations are more suited for photon versus proton SBRT. Six spherical mock tumors from 1 to 6 cm in diameter were contoured on computed tomography images of 1 patient at 4 locations: dome, caudal, left medial, and central. Photon and proton plans were generated to deliver 50 Gy in 5 fractions to each tumor and optimized to deliver equivalent target coverage and maximal liver sparing. Using these plans, we developed a hypothesis-generating model to predict the optimal modality for maximal liver sparing based on tumor size and location. We then validated this model in 10 patients with liver tumors. Protons spared significantly more liver than photons for dome or central tumors ≥3 cm (dome: 134 ± 21 cm(3), P = .03; central: 108 ± 4 cm(3), P = .01). Our model correctly predicted the optimal SBRT modality for all 10 patients. For patients with dome or central tumors ≥3 cm, protons significantly increased the volume of liver spared (176 ± 21 cm(3), P = .01) and decreased the mean liver dose (8.4 vs 12.2 Gy, P = .01) while offering no significant advantage for tumors <3 cm at any location or for caudal and left medial tumors of any size. When feasible, protons should be considered as the radiation modality of choice for dome and central tumors >3 cm to allow maximal liver sparing and potentially reduce radiation toxicity. Protons should also be considered for any tumor >5 cm if photon plans fail to achieve adequate coverage or exceed the mean liver threshold. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Urban Forest Ecosystem Service Optimization, Tradeoffs, and Disparities
NASA Astrophysics Data System (ADS)
Bodnaruk, E.; Kroll, C. N.; Endreny, T. A.; Hirabayashi, S.; Yang, Y.
2014-12-01
Urban land area and the proportion of humanity living in cities is growing, leading to increased urban air pollution, temperature, and stormwater runoff. These changes can exacerbate respiratory and heat-related illnesses and affect ecosystem functioning. Urban trees can help mitigate these threats by removing air pollutants, mitigating urban heat island effects, and infiltrating and filtering stormwater. The urban environment is highly heterogeneous, and there is no tool to determine optimal locations to plant or protect trees. Using spatially explicit land cover, weather, and demographic data within biophysical ecosystem service models, this research expands upon the iTree urban forest tools to produce a new decision support tool (iTree-DST) that will explore the development and impacts of optimal tree planting. It will also heighten awareness of environmental justice by incorporating the Atkinson Index to quantify disparities in health risks and ecosystem services across vulnerable and susceptible populations. The study area is Baltimore City, a location whose urban forest and environmental justice concerns have been studied extensively. The iTree-DST is run at the US Census block group level and utilizes a local gradient approach to calculate the change in ecosystem services with changing tree cover across the study area. Empirical fits provide ecosystem service gradients for possible tree cover scenarios, greatly increasing the speed and efficiency of the optimization procedure. Initial results include an evaluation of the performance of the gradient method, optimal planting schemes for individual ecosystem services, and an analysis of tradeoffs and synergies between competing objectives.
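For reference, the Atkinson Index over a distribution of per-block-group service levels (or health risks) y_1, ..., y_N with mean μ is

```latex
A_{\varepsilon} = 1 - \frac{1}{\mu}\left( \frac{1}{N} \sum_{i=1}^{N} y_i^{\,1-\varepsilon} \right)^{\!\frac{1}{1-\varepsilon}},
\qquad \varepsilon \ge 0,\ \varepsilon \ne 1,
```

where ε is the inequality-aversion parameter chosen by the analyst; A_ε = 0 indicates a perfectly equal distribution, and larger values indicate greater disparity (for ε = 1 the bracketed term is replaced by the geometric mean of the y_i).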
NASA Astrophysics Data System (ADS)
Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.
2010-04-01
Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
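The greedy strategy referred to above can be sketched as follows for a pure coverage objective; the set-based coverage model, the fixed sensor budget, and the stopping test are simplifying assumptions, whereas the cited algorithm works with probabilistic detection and communication models.

```python
def greedy_placement(candidates, coverage, budget):
    """Greedy approximation to a sensor-placement coverage problem.

    `coverage[j]` is the set of grid cells that a sensor at candidate location j
    detects with acceptable probability; at each step the location adding the
    most newly covered cells is selected.
    """
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(candidates, key=lambda j: len(coverage[j] - covered))
        if not coverage[best] - covered:      # no further gain possible
            break
        chosen.append(best)
        covered |= coverage[best]
        candidates = [j for j in candidates if j != best]
    return chosen, covered
```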
Towards the optimal design of an uncemented acetabular component using genetic algorithms
NASA Astrophysics Data System (ADS)
Ghosh, Rajesh; Pratihar, Dilip Kumar; Gupta, Sanjay
2015-12-01
Aseptic loosening of the acetabular component (hemispherical socket of the pelvic bone) has been mainly attributed to bone resorption and excessive generation of wear particle debris. The aim of this study was to determine optimal design parameters for the acetabular component that would minimize bone resorption and volumetric wear. Three-dimensional finite element models of intact and implanted pelvises were developed using data from computed tomography scans. A multi-objective optimization problem was formulated and solved using a genetic algorithm. A combination of suitable implant material and corresponding set of optimal thicknesses of the component was obtained from the Pareto-optimal front of solutions. The ultra-high-molecular-weight polyethylene (UHMWPE) component generated considerably greater volumetric wear but lower bone density loss compared to carbon-fibre reinforced polyetheretherketone (CFR-PEEK) and ceramic. CFR-PEEK was located in the range between ceramic and UHMWPE. Although ceramic appeared to be a viable alternative to cobalt-chromium-molybdenum alloy, CFR-PEEK seems to be the most promising alternative material.
NASA Astrophysics Data System (ADS)
Mallick, Rajnish; Ganguli, Ranjan; Seetharama Bhat, M.
2015-09-01
The objective of this study is to determine an optimal trailing edge flap configuration and flap location to achieve minimum hub vibration levels and flap actuation power simultaneously. An aeroelastic analysis of a soft in-plane four-bladed rotor is performed in conjunction with optimal control. A second-order polynomial response surface based on an orthogonal array (OA) with 3-level design describes both the objectives adequately. Two new orthogonal arrays called MGB2P-OA and MGB4P-OA are proposed to generate nonlinear response surfaces with all interaction terms for two and four parameters, respectively. A multi-objective bat algorithm (MOBA) approach is used to obtain the optimal design point for the mutually conflicting objectives. MOBA is a recently developed nature-inspired metaheuristic optimization algorithm that is based on the echolocation behaviour of bats. It is found that the MOBA-inspired Pareto-optimal trailing edge flap design reduces vibration levels by 73% and flap actuation power by 27% in comparison with the baseline design.
Optimal planning and design of a renewable energy based supply system for microgrids
Hafez, Omar; Bhattacharya, Kankar
2012-03-03
This paper presents a technique for optimal planning and design of hybrid renewable energy systems for microgrid applications. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is used to determine the optimal size and type of distributed energy resources (DERs) and their operating schedules for a sample utility distribution system. Using the DER-CAM results, the electrical performance of the distribution circuit is evaluated with the DERs selected by the DER-CAM optimization analyses incorporated. Results of analyses regarding the economic benefits of utilizing the optimal locations identified for the selected DERs within the system are also presented. The actual Brookhaven National Laboratory (BNL) campus electrical network is used as an example to show the effectiveness of this approach. The results show that these technical and economic analyses of hybrid renewable energy systems are essential for the efficient utilization of renewable energy resources for microgrid applications.
NASA Astrophysics Data System (ADS)
Khatir, Samir; Dekemele, Kevin; Loccufier, Mia; Khatir, Tawfiq; Abdel Wahab, Magd
2018-02-01
In this paper, a technique is presented for the detection and localization of an open crack in beam-like structures using experimentally measured natural frequencies and the Particle Swarm Optimization (PSO) method. The technique considers the variation in local flexibility near the crack. The natural frequencies of a cracked beam are determined experimentally and numerically using the Finite Element Method (FEM). The optimization algorithm is programmed in MATLAB. The algorithm is used to estimate the location and severity of a crack by minimizing the differences between measured and calculated frequencies. The method is verified using experimentally measured data on a cantilever steel beam. The Fourier transform is adopted to improve the frequency resolution. The results demonstrate the good accuracy of the proposed technique.
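As a rough illustration of the inverse procedure described above, the Python sketch below runs a standard global-best PSO that minimizes the squared difference between "measured" and modelled natural frequencies. The forward model is a toy surrogate (each mode drops in proportion to crack severity and the squared mode-shape term at the crack location), standing in for the FEM model used in the paper; the intact-beam frequencies and all parameters are hypothetical.

```python
import numpy as np

def model_frequencies(loc, severity, f0):
    """Toy surrogate for the cracked-beam model: mode i drops with severity
    weighted by sin^2(i*pi*loc) at the normalized crack location."""
    modes = np.arange(1, len(f0) + 1)
    return f0 * (1.0 - severity * np.sin(modes * np.pi * loc) ** 2)

def pso_crack_id(f_measured, f0, n_particles=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([0.0, 0.0]), np.array([1.0, 0.3])   # (location, severity) bounds
    x = rng.uniform(lo, hi, size=(n_particles, 2))
    v = np.zeros_like(x)
    cost = lambda p: np.sum((model_frequencies(p[0], p[1], f0) - f_measured) ** 2)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                             # inertia and acceleration weights
    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

f0 = np.array([10.0, 62.7, 175.5])             # hypothetical intact-beam frequencies (Hz)
f_meas = model_frequencies(0.35, 0.12, f0)     # synthetic "measurement"
print(pso_crack_id(f_meas, f0))                # should recover approximately (0.35, 0.12)
```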
MUSIC electromagnetic imaging with enhanced resolution for small inclusions
NASA Astrophysics Data System (ADS)
Chen, Xudong; Zhong, Yu
2009-01-01
This paper investigates the influence of the test dipole on the resolution of the multiple signal classification (MUSIC) imaging method applied to the electromagnetic inverse scattering problem of determining the locations of a collection of small objects embedded in a known background medium. Based on the analysis of the induced electric dipoles in eigenstates, an algorithm is proposed to determine the test dipole that generates a pseudo-spectrum with enhanced resolution. The amplitudes in three directions of the optimal test dipole are not necessarily in phase, i.e., the optimal test dipole may not correspond to a physical direction in the real three-dimensional space. In addition, the proposed test-dipole-searching algorithm is able to deal with some special scenarios, due to the shapes and materials of objects, to which the standard MUSIC does not apply.
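For readers unfamiliar with MUSIC imaging, the following Python sketch shows the basic pseudo-spectrum construction for point scatterers in a scalar 2D toy setting. It does not reproduce the paper's dyadic (vector electromagnetic) formulation or the test-dipole optimization; the array geometry, scatterer positions, and noise level are all made up, and the Green's function is a simplified scalar model.

```python
import numpy as np

def greens(rx, x, k=2 * np.pi):
    """Scalar 2D Green's function surrogate (phase plus geometric spreading)."""
    d = np.linalg.norm(rx - x, axis=-1)
    return np.exp(1j * k * d) / np.sqrt(d)

rng = np.random.default_rng(1)
array_pts = np.column_stack([np.linspace(-2, 2, 16), np.full(16, 5.0)])  # 16-element array
scatterers = np.array([[0.3, 0.0], [-0.8, 0.4]])                         # two small objects

# Multistatic response matrix K = G diag(tau) G^T, plus additive noise
G = np.stack([greens(array_pts, y) for y in scatterers], axis=1)
K = G @ np.diag([1.0, 0.7]) @ G.T
K += 1e-3 * (rng.standard_normal(K.shape) + 1j * rng.standard_normal(K.shape))

# Noise subspace from the SVD; the pseudo-spectrum peaks at the scatterer locations
U, s, _ = np.linalg.svd(K)
noise = U[:, len(scatterers):]

def pseudo_spectrum(x):
    g = greens(array_pts, np.asarray(x))
    g = g / np.linalg.norm(g)
    return 1.0 / np.linalg.norm(noise.conj().T @ g) ** 2

print(pseudo_spectrum([0.3, 0.0]), pseudo_spectrum([1.5, 1.5]))  # peak vs. background
```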
A new MUSIC electromagnetic imaging method with enhanced resolution for small inclusions
NASA Astrophysics Data System (ADS)
Zhong, Yu; Chen, Xudong
2008-11-01
This paper investigates the influence of the test dipole on the resolution of the multiple signal classification (MUSIC) imaging method applied to the electromagnetic inverse scattering problem of determining the locations of a collection of small objects embedded in a known background medium. Based on the analysis of the induced electric dipoles in eigenstates, an algorithm is proposed to determine the test dipole that generates a pseudo-spectrum with enhanced resolution. The amplitudes in three directions of the optimal test dipole are not necessarily in phase, i.e., the optimal test dipole may not correspond to a physical direction in the real three-dimensional space. In addition, the proposed test-dipole-searching algorithm is able to deal with some special scenarios, due to the shapes and materials of objects, to which the standard MUSIC does not apply.
Hybrid PV/diesel solar power system design using multi-level factor analysis optimization
NASA Astrophysics Data System (ADS)
Drake, Joshua P.
Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state of the art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as it applied to solar power system design. The solar power design algorithms, software work flow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations was generated. It was determined that there were several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.
NASA Astrophysics Data System (ADS)
Bargegol, Iraj; Ghorbanzadeh, Mahyar; Ghasedi, Meisam; Rastbod, Mohammad
2017-10-01
In congested cities, locating and properly designing bus stops according to the unequal distribution of passengers is a crucial issue, both economically and functionally, since it plays an important role in passengers' use of the bus system. Locating bus stops is a complicated problem; reducing the distances between stops decreases walking time, but the total travel time may increase. In this paper, a specified corridor in the city of Rasht in northern Iran is studied. Firstly, a new formula is presented to calculate the travel time, by which the number of stops and, consequently, the travel time can be optimized. A corridor with a specified number of stops and distances between them is addressed, the related travel-time formulas are derived, and its travel time is calculated. The corridor is then modelled using a meta-heuristic method so that the optimal placement of and distances between its bus stops can be determined. It was found that alighting and boarding time, along with bus capacity, are the factors affecting travel time most strongly. Consequently, it is better to concentrate on these factors to improve the efficiency of the bus system.
Optimization of the Number and Location of Tsunami Stations in a Tsunami Warning System
NASA Astrophysics Data System (ADS)
An, C.; Liu, P. L. F.; Pritchard, M. E.
2014-12-01
Optimizing the number and location of tsunami stations in designing a tsunami warning system is an important and practical problem. It is always desirable to maximize the capability of the data obtained from the stations for constraining the earthquake source parameters, while minimizing the number of stations at the same time. During the 2011 Tohoku tsunami event, 28 coastal gauges and DART buoys in the near field recorded tsunami waves, providing an opportunity to assess the effectiveness of those stations in identifying the earthquake source parameters. Assuming a single-plane fault geometry, inversions of tsunami data from combinations of various numbers (1-28) of stations and locations are conducted, and their effectiveness is evaluated according to the residuals of the inverse method. Results show that the optimized locations of stations depend on the number of stations used. If the stations are optimally located, 2-4 stations are sufficient to constrain the source parameters. Regarding the optimized locations, stations must be spread uniformly in all directions, which is not surprising. It is also found that stations within the source region generally give a worse constraint on the earthquake source than stations farther from the source, because model error is exaggerated when matching large-amplitude waves at near-source stations. Quantitative discussions of these findings will be given in the presentation. Applying a similar analysis to the Manila Trench based on artificial earthquake and tsunami scenarios, the optimal locations of tsunami stations are obtained, which provides guidance for deploying a tsunami warning system in this region.
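The station-subset comparison described above can be mimicked with a small linear least-squares inversion. In the Python sketch below the unit-source Green's functions and the "observed" waveforms are synthetic random stand-ins (real applications would use pre-computed tsunami simulations); the point is only to show how slip is estimated from a chosen set of stations and how the normalized residual can be used to rank station subsets.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subfaults, n_stations, n_times = 6, 28, 120

# G[k] maps subfault slips to the waveform at station k (synthetic placeholder).
G = [rng.standard_normal((n_times, n_subfaults)) for _ in range(n_stations)]
true_slip = np.array([1.0, 2.0, 0.5, 0.0, 1.5, 0.8])
data = [Gk @ true_slip + 0.2 * rng.standard_normal(n_times) for Gk in G]

def invert(station_ids):
    """Stack the chosen stations, solve for slip in a least-squares sense,
    and return the estimate plus the normalized residual used for ranking."""
    A = np.vstack([G[k] for k in station_ids])
    b = np.concatenate([data[k] for k in station_ids])
    slip, *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = np.linalg.norm(A @ slip - b) / np.linalg.norm(b)
    return slip, resid

# Compare a small, well-spread subset against the full network.
for subset in ([0, 9, 18, 27], list(range(n_stations))):
    slip, r = invert(subset)
    print(len(subset), "stations, residual", round(r, 3),
          "slip error", round(np.linalg.norm(slip - true_slip), 3))
```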
NASA Astrophysics Data System (ADS)
Biglar, Mojtaba; Mirdamadi, Hamid Reza; Danesh, Mohammad
2014-02-01
In this study, the active vibration control and configurational optimization of a cylindrical shell are analyzed using piezoelectric transducers. The piezoelectric patches are attached to the surface of the cylindrical shell. The Rayleigh-Ritz method is used to derive the dynamic model of the cylindrical shell and the piezoelectric sensors and actuators based on the Donnell-Mushtari shell theory. The major goal of this study is to find the optimal locations and orientations of piezoelectric sensors and actuators on the cylindrical shell. The optimization procedure is designed based on the desired controllability and observability of each contributing and undesired mode. Further, in order to limit spillover effects, the residual modes are taken into consideration. The optimization variables are the positions and orientations of the piezoelectric patches. A genetic algorithm is utilized to evaluate the optimal configurations. In this article, to improve the maximum power and capacity of the actuators for vibration amplitude attenuation under a negative velocity feedback strategy, we propose a new control strategy called the "Saturated Negative Velocity Feedback Rule (SNVF)". The numerical results show that the optimization procedure is effective for vibration reduction; specifically, by locating actuators and sensors at their optimal locations and orientations, the vibrations of the cylindrical shell are suppressed more quickly.
NASA Astrophysics Data System (ADS)
Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming
2017-05-01
Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards, such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) that allows crossover with individuals eliminated from the population, following selection of the best candidate. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling with a partial evaluation strategy. The leak source is then accurately localized using a modified guaranteed-convergence particle swarm optimization algorithm that retains several poorly performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by stationary sensors, and the last stage is based on data from movable robots with sensors. The adaptability to measurement error and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source to within 1.0 m of the source for different leak source locations, with a measurement error standard deviation smaller than 2.0.
NASA Astrophysics Data System (ADS)
Haagmans, G. G.; Verhagen, S.; Voûte, R. L.; Verbree, E.
2017-09-01
Since GPS tends to fail for indoor positioning purposes, alternative methods like indoor positioning systems (IPS) based on Bluetooth low energy (BLE) are developing rapidly. Generally, IPS are deployed in environments filled with obstacles such as furniture, walls, people, and electronics that influence signal propagation. The major factor influencing system performance, and hence the key to acquiring optimal positioning results, is the geometry of the beacons. This geometry is limited by the infrastructure that can be deployed (number of beacons, base stations, and tags), which leads to the following challenge: given a limited number of beacons, where should they be placed in a specified indoor environment so that the geometry contributes to optimal positioning results? This paper proposes a statistical model that selects the optimal configuration satisfying the user requirements in terms of precision. The model requires the definition of a chosen 3D space (in our case 7 × 10 × 6 m), the number of beacons, possible user tag locations, and a performance threshold (e.g. required precision). For any given set of beacon and receiver locations, the precision and the internal and external reliability can be determined beforehand. As validation, the modelled precision has been compared with observed precision results. The measurements were performed with a BlooLoc IPS at a chosen set of user tag locations for a given geometric configuration. Eventually, the model is able to select the optimal geometric configuration out of millions of possible configurations based on a performance threshold (e.g. required precision).
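A much-simplified version of this geometry screening can be written in a few lines of Python. The sketch below assumes range-based positioning and uses position dilution of precision (PDOP) as the precision proxy, which is not necessarily the statistical model of the paper; the candidate beacon grid, tag test points, and room dimensions are illustrative assumptions, and the search simply brute-forces all 4-beacon subsets.

```python
import numpy as np
from itertools import combinations

def pdop(beacons, tag):
    """PDOP for range-based positioning: unit line-of-sight vectors form the
    design matrix A, and PDOP = sqrt(trace((A^T A)^-1))."""
    diffs = beacons - tag
    A = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    N = A.T @ A
    if np.linalg.det(N) < 1e-9:          # degenerate geometry (e.g. coplanar directions)
        return np.inf
    return np.sqrt(np.trace(np.linalg.inv(N)))

# Candidate beacon positions on the ceiling of an assumed 7 x 10 x 6 m room.
xs, ys, zs = np.linspace(0, 7, 4), np.linspace(0, 10, 5), [6.0]
candidates = np.array([(x, y, z) for x in xs for y in ys for z in zs])
tags = np.array([[2.0, 3.0, 1.2], [5.0, 7.0, 1.2], [3.5, 5.0, 1.2]])  # user tag test points

# Pick the 4-beacon configuration with the smallest worst-case PDOP over the tag points.
best = min(combinations(range(len(candidates)), 4),
           key=lambda idx: max(pdop(candidates[list(idx)], t) for t in tags))
print("best beacon indices:", best,
      "worst-case PDOP:", round(max(pdop(candidates[list(best)], t) for t in tags), 2))
```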
Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations
Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon
2016-01-01
Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices. PMID:27240364
Optimal correction and design parameter search by modern methods of rigorous global optimization
NASA Astrophysics Data System (ADS)
Makino, K.; Berz, M.
2011-07-01
Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle optics for the computation of aberrations allow the determination of particularly sharp underestimators for large regions. As a consequence, the subsequent progressive pruning of the allowed search space as part of the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of optimization. The methods are particularly powerful if executed in interplay with genetic optimizers generating their new populations within the currently active unpruned space. Their current best guess provides rigorous upper bounds of the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes or chromaticities, etc. in storage ring lattices.
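To make the branch-and-bound idea concrete, the Python sketch below runs a toy interval branch-and-bound on a two-variable polynomial. It uses crude natural interval extensions as the rigorous underestimators rather than the sharp Differential Algebraic/Taylor-model bounds described in the abstract, does not control floating-point rounding, and the objective function and tolerances are made up for illustration.

```python
import heapq

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def isqr(a):
    lo, hi = a
    if lo >= 0: return (lo * lo, hi * hi)
    if hi <= 0: return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

def f_box(bx, by):
    """Natural interval extension of f(x, y) = (x^2 - 1)^2 + (y - x)^2,
    giving a guaranteed lower/upper bound of f over the box."""
    return iadd(isqr(isub(isqr(bx), (1.0, 1.0))), isqr(isub(by, bx)))

def f(x, y):
    return (x * x - 1.0) ** 2 + (y - x) ** 2

def interval_branch_and_bound(box, tol=1e-3, max_iter=20000):
    upper = f(*[(lo + hi) / 2 for lo, hi in box])      # incumbent from the midpoint
    heap = [(f_box(*box)[0], box)]
    for _ in range(max_iter):
        if not heap:
            break
        lower, (bx, by) = heapq.heappop(heap)
        if lower > upper - tol:                        # box cannot beat the incumbent: prune
            continue
        mx, my = (bx[0] + bx[1]) / 2, (by[0] + by[1]) / 2
        upper = min(upper, f(mx, my))
        for nbx in ((bx[0], mx), (mx, bx[1])):         # split the box into four children
            for nby in ((by[0], my), (my, by[1])):
                lb = f_box(nbx, nby)[0]
                if lb < upper - tol:
                    heapq.heappush(heap, (lb, (nbx, nby)))
    # Best upper bound found; once the heap empties, it lies within tol of the global minimum.
    return upper

print(interval_branch_and_bound(((-3.0, 3.0), (-3.0, 3.0))))   # global minimum is 0 at (1,1), (-1,-1)
```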
Optimal Location of Radiation Therapy Centers With Respect to Geographic Access
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santibáñez, Pablo; Gaudet, Marc; French, John
2014-07-15
Purpose: To develop a framework with which to evaluate locations of radiation therapy (RT) centers in a region based on geographic access. Methods and Materials: Patient records were obtained for all external beam radiation therapy started in 2011 for the province of British Columbia, Canada. Two metrics of geographic access were defined. The primary analysis was the percentage of patients (coverage) within a 90-minute drive from an RT center (C90), and the secondary analysis was the average drive time (ADT) to an RT center. An integer programming model was developed to determine optimal center locations, catchment areas, and the capacity required under different scenarios. Results: Records consisted of 11,096 courses of radiation corresponding to 161,616 fractions. Baseline geographic access was estimated at 102.5 minutes ADT (each way, per fraction) and 75.9% C90. Adding 2 and 3 new centers increased C90 to 88% and 92%, respectively, and decreased ADT by 43% and 61%, respectively. A scenario in which RT was provided in every potential location that could support at least 1 fully utilized linear accelerator resulted in 35.3 minutes ADT and 93.6% C90. Conclusions: The proposed framework and model provide a data-driven means to quantitatively evaluate alternative configurations of a regional RT system. Results suggest that the choice of location for future centers can significantly improve geographic access to RT.
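The coverage calculation behind C90 and ADT is easy to prototype. The Python sketch below is not the paper's integer programming model; it substitutes a greedy heuristic that adds, one at a time, the candidate site giving the largest gain in coverage within a drive-time threshold, and it uses entirely synthetic patient locations and a straight-line stand-in for drive times.

```python
import numpy as np

def evaluate(drive_time, open_sites, threshold=90.0):
    """C90-style coverage and average drive time, assuming each patient uses
    the nearest open center. drive_time[i, j] = minutes from patient i to site j."""
    nearest = drive_time[:, open_sites].min(axis=1)
    return np.mean(nearest <= threshold), nearest.mean()

def add_centers_greedy(drive_time, existing, n_new, threshold=90.0):
    open_sites = list(existing)
    candidates = [j for j in range(drive_time.shape[1]) if j not in open_sites]
    for _ in range(n_new):
        best = max(candidates,
                   key=lambda j: evaluate(drive_time, open_sites + [j], threshold)[0])
        open_sites.append(best)
        candidates.remove(best)
    return open_sites

# Synthetic example: 2000 patient areas, 15 candidate sites, 3 existing centers.
rng = np.random.default_rng(3)
patients = rng.uniform(0, 600, size=(2000, 2))
sites = rng.uniform(0, 600, size=(15, 2))
drive = np.linalg.norm(patients[:, None, :] - sites[None, :, :], axis=2)  # "minutes" (toy)
existing = [0, 1, 2]
print("baseline C90/ADT:", evaluate(drive, existing))
print("after 2 new centers:", evaluate(drive, add_centers_greedy(drive, existing, 2)))
```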
Assessment of regional management strategies for controlling seawater intrusion
Reichard, E.G.; Johnson, T.A.
2005-01-01
Simulation-optimization methods, applied with adequate sensitivity tests, can provide useful quantitative guidance for controlling seawater intrusion. This is demonstrated in an application to the West Coast Basin of coastal Los Angeles that considers two management options for improving hydraulic control of seawater intrusion: increased injection into barrier wells and in lieu delivery of surface water to replace current pumpage. For the base-case optimization analysis, assuming constant groundwater demand, in lieu delivery was determined to be most cost effective. Reduced-cost information from the optimization provided guidance for prioritizing locations for in lieu delivery. Model sensitivity to a suite of hydrologic, economic, and policy factors was tested. Raising the imposed average water-level constraint at the hydraulic-control locations resulted in nonlinear increases in cost. Systematic varying of the relative costs of injection and in lieu water yielded a trade-off curve between relative costs and injection/in lieu amounts. Changing the assumed future scenario to one of increasing pumpage in the adjacent Central Basin caused a small increase in the computed costs of seawater intrusion control. Changing the assumed boundary condition representing interaction with an adjacent basin did not affect the optimization results. Reducing the assumed hydraulic conductivity of the main productive aquifer resulted in a large increase in the model-computed cost. Journal of Water Resources Planning and Management, ASCE.
Using Reanalysis Data for the Prediction of Seasonal Wind Turbine Power Losses Due to Icing
NASA Astrophysics Data System (ADS)
Burtch, D.; Mullendore, G. L.; Delene, D. J.; Storm, B.
2013-12-01
The Northern Plains region of the United States is home to a significant amount of potential wind energy. However, in winter months capturing this potential power is severely impacted by the meteorological conditions, in the form of icing. Predicting the expected loss in power production due to icing is a valuable parameter that can be used in wind turbine operations, determination of wind turbine site locations and long-term energy estimates which are used for financing purposes. Currently, losses due to icing must be estimated when developing predictions for turbine feasibility and financing studies, while icing maps, a tool commonly used in Europe, are lacking in the United States. This study uses the Modern-Era Retrospective Analysis for Research and Applications (MERRA) dataset in conjunction with turbine production data to investigate various methods of predicting seasonal losses (October-March) due to icing at two wind turbine sites located 121 km apart in North Dakota. The prediction of icing losses is based on temperature and relative humidity thresholds and is accomplished using three methods. For each of the three methods, the required atmospheric variables are determined in one of two ways: using industry-specific software to correlate anemometer data in conjunction with the MERRA dataset and using only the MERRA dataset for all variables. For each season, a percentage of the total expected generated power lost due to icing is determined and compared to observed losses from the production data. An optimization is performed in order to determine the relative humidity threshold that minimizes the difference between the predicted and observed values. Eight seasons of data are used to determine an optimal relative humidity threshold, and a further three seasons of data are used to test this threshold. Preliminary results have shown that the optimized relative humidity threshold for the northern turbine is higher than the southern turbine for all methods. For the three test seasons, the optimized thresholds tend to under-predict the icing losses. However, the threshold determined using boundary layer similarity theory most closely predicts the power losses due to icing versus the other methods. For the northern turbine, the average predicted power loss over the three seasons is 4.65 % while the observed power loss is 6.22 % (average difference of 1.57 %). For the southern turbine, the average predicted power loss and observed power loss over the same time period are 4.43 % and 6.16 %, respectively (average difference of 1.73 %). The three-year average, however, does not clearly capture the variability that exists season-to-season. On examination of each of the test seasons individually, the optimized relative humidity threshold methodology performs better than fixed power loss estimates commonly used in the wind energy industry.
NASA Astrophysics Data System (ADS)
Kang, Fei; Li, Junjie; Ma, Zhenyue
2013-02-01
Determination of the critical slip surface with the minimum factor of safety of a slope is a difficult constrained global optimization problem. In this article, an artificial bee colony algorithm with a multi-slice adjustment method is proposed for locating the critical slip surfaces of soil slopes, and the Spencer method is employed to calculate the factor of safety. Six benchmark examples are presented to illustrate the reliability and efficiency of the proposed technique, and it is also compared with some well-known or recent algorithms for the problem. The results show that the new algorithm is promising in terms of accuracy and efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id
Observation of earthquakes is routinely used in tectonic activity monitoring at the regional scale, and also at the local scale, such as in volcano-tectonic and geothermal activity monitoring. Determining a precise hypocenter requires finding the hypocenter location that minimizes the error between the observed and calculated travel times. When solving this nonlinear inverse problem, the simulated annealing inversion method can be applied as a global optimization technique whose convergence is independent of the initial model. In this study, we developed our own program code by applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data cases: regional tectonic, volcano-tectonic, and geothermal field. The travel times were calculated using a ray-tracing shooting method. We then compared the results with those of Geiger's method to analyze reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with a better solution. The earthquake hypocenters also correlate well with the geological structure in the study area. We recommend using adaptive simulated annealing inversion to relocate hypocenter locations in order to obtain precise and accurate earthquake locations.
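A bare-bones version of simulated-annealing hypocenter location can be sketched in Python as below. It uses plain (not adaptive) simulated annealing and a homogeneous half-space travel-time model instead of the ray-tracing shooting method, and all station coordinates, velocities, and noise levels are hypothetical; the objective is the RMS travel-time residual, as in the abstract.

```python
import numpy as np

def travel_times(hypo, stations, v=5.5):
    """P travel times in a homogeneous half-space (km, km/s).
    hypo = (x, y, z, origin_time); stations = (N, 3) receiver coordinates."""
    d = np.linalg.norm(stations - hypo[:3], axis=1)
    return hypo[3] + d / v

def sa_locate(t_obs, stations, bounds, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi)                              # random initial model
    rms = lambda m: np.sqrt(np.mean((travel_times(m, stations) - t_obs) ** 2))
    e = rms(x)
    best, best_e = x.copy(), e
    for k in range(n_iter):
        T = 1.0 * (0.999 ** k)                           # geometric cooling schedule
        cand = np.clip(x + rng.normal(scale=0.05 * (hi - lo)), lo, hi)
        e_cand = rms(cand)
        if e_cand < e or rng.random() < np.exp(-(e_cand - e) / max(T, 1e-9)):
            x, e = cand, e_cand                          # accept (Metropolis criterion)
            if e < best_e:
                best, best_e = x.copy(), e
    return best, best_e

# Synthetic test: 8 surface stations, true hypocenter at (12, -5, 9) km, origin time 0.3 s.
rng = np.random.default_rng(4)
stations = np.column_stack([rng.uniform(-30, 30, 8), rng.uniform(-30, 30, 8), np.zeros(8)])
true = np.array([12.0, -5.0, 9.0, 0.3])
t_obs = travel_times(true, stations) + rng.normal(scale=0.02, size=8)
hypo, rms_err = sa_locate(t_obs, stations, bounds=[(-30, 30), (-30, 30), (0, 30), (-2, 2)])
print(np.round(hypo, 2), round(rms_err, 3))
```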
Optimal joint management of a coastal aquifer and a substitute resource
NASA Astrophysics Data System (ADS)
Moreaux, M.; Reynaud, A.
2004-06-01
This article characterizes the optimal joint management of a coastal aquifer and a costly water substitute. For this purpose we use a mathematical representation of the aquifer that incorporates the displacement of the interface between the seawater and the freshwater of the aquifer. We identify the spatial cost externalities created by users on each other and we show that the optimal water supply depends on the location of users. Users located in the coastal zone exclusively use the costly substitute. Those located in the more upstream area are supplied from the aquifer. At the optimum their withdrawal must take into account the cost externalities they generate on users located downstream. Last, users located in a median zone use the aquifer with a surface transportation cost. We show that the optimum can be implemented in a decentralized economy through a very simple Pigouvian tax. Finally, the optimal and decentralized extraction policies are simulated on a very simple example.
Spacecraft Leak Location Using Structure-Borne Noise
NASA Astrophysics Data System (ADS)
Reusser, R. S.; Chimenti, D. E.; Holland, S. D.; Roberts, R. A.
2010-02-01
Guided ultrasonic waves, generated by air escaping through a small hole, have been measured with an 8×8 piezoelectric phased-array detector. Rapid location of air leaks in a spacecraft skin, caused by high-speed collisions with small objects, is essential for astronaut survival. Cross correlation of all 64 elements, one pair at a time, on a diced PZT disc combined with synthetic aperture analysis determines the dominant direction of wave propagation. The leak location is triangulated by combining data from two or more detectors. To optimize the frequency band selection for the most robust direction finding, noise-field measurements of a plate with integral stiffeners have been performed using laser Doppler velocimetry. We compare optical and acoustic measurements to analyze the influence of the PZT array detector and its mechanical coupling to the plate.
Application of genetic algorithms to focal mechanism determination
NASA Astrophysics Data System (ADS)
Kobayashi, Reiji; Nakanishi, Ichiro
1994-04-01
Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. The initial solution and the curvature information of the objective function that gradient methods need are not required in our approach. Moreover, globally optimal solutions can be obtained efficiently. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculation required by the method designed in this study is much less than that of previous grid search methods.
Overcoming Spatial and Temporal Barriers to Public Access Defibrillators Via Optimization
Sun, Christopher L. F.; Demirtas, Derya; Brooks, Steven C.; Morrison, Laurie J.; Chan, Timothy C.Y.
2016-01-01
BACKGROUND Immediate access to an automated external defibrillator (AED) increases the chance of survival from out-of-hospital cardiac arrest (OHCA). Current deployment usually considers spatial AED access, assuming AEDs are available 24 h a day. OBJECTIVES We sought to develop an optimization model for AED deployment, accounting for spatial and temporal accessibility, to evaluate if OHCA coverage would improve compared to deployment based on spatial accessibility alone. METHODS This was a retrospective population-based cohort study using data from the Toronto Regional RescuNET cardiac arrest database. We identified all nontraumatic public-location OHCAs in Toronto, Canada (January 2006 through August 2014) and obtained a list of registered AEDs (March 2015) from Toronto emergency medical services. We quantified coverage loss due to limited temporal access by comparing the number of OHCAs that occurred within 100 meters of a registered AED (assumed 24/7 coverage) with the number that occurred both within 100 meters of a registered AED and when the AED was available (actual coverage). We then developed a spatiotemporal optimization model that determined AED locations to maximize OHCA actual coverage and overcome the reported coverage loss. We computed the coverage gain between the spatiotemporal model and a spatial-only model using 10-fold cross-validation. RESULTS We identified 2,440 atraumatic public OHCAs and 737 registered AED locations. A total of 451 OHCAs were covered by registered AEDs under assumed 24/7 coverage, and 354 OHCAs under actual coverage, representing a coverage loss of 21.5% (p < 0.001). Using the spatiotemporal model to optimize AED deployment, a 25.3% relative increase in actual coverage was achieved over the spatial-only approach (p < 0.001). CONCLUSIONS One in 5 OHCAs occurred near an inaccessible AED at the time of the OHCA. Potential AED use was significantly improved with a spatiotemporal optimization model guiding deployment. PMID:27539176
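The spatiotemporal coverage notion used above (an arrest counts as covered only if a nearby AED is also available at that hour) lends itself to a simple greedy prototype. The Python sketch below is a heuristic stand-in for the study's optimization model, with synthetic arrest locations, timestamps, and building opening hours; only the 100-meter coverage radius is taken from the abstract.

```python
import numpy as np

def covered(ohca_xy, ohca_hour, aed_xy, aed_hours, radius=100.0):
    """An arrest is covered if some chosen AED lies within `radius` metres and
    its building is open at the hour the arrest occurred."""
    near = np.linalg.norm(aed_xy - ohca_xy, axis=1) <= radius
    open_now = aed_hours[:, ohca_hour]
    return np.any(near & open_now)

def greedy_aed_placement(ohcas, hours, cand_xy, cand_hours, n_aeds):
    chosen, is_cov = [], np.zeros(len(ohcas), dtype=bool)
    for _ in range(n_aeds):
        gains = []
        for j in range(len(cand_xy)):
            if j in chosen:
                gains.append(-1)
                continue
            new = [covered(ohcas[i], hours[i], cand_xy[[j]], cand_hours[[j]])
                   for i in np.where(~is_cov)[0]]
            gains.append(sum(new))
        best = int(np.argmax(gains))
        chosen.append(best)
        is_cov |= np.array([covered(o, h, cand_xy[[best]], cand_hours[[best]])
                            for o, h in zip(ohcas, hours)])
    return chosen, is_cov.mean()

# Toy data: 500 historical arrests with hour-of-day stamps, 60 candidate buildings,
# half of which are only open 9:00-17:00 (rows of a 24-hour availability mask).
rng = np.random.default_rng(5)
ohcas, hours = rng.uniform(0, 2000, (500, 2)), rng.integers(0, 24, 500)
cand_xy = rng.uniform(0, 2000, (60, 2))
cand_hours = np.ones((60, 24), dtype=bool)
cand_hours[::2] = False
cand_hours[::2, 9:17] = True
sites, actual_coverage = greedy_aed_placement(ohcas, hours, cand_xy, cand_hours, n_aeds=10)
print(sites, round(actual_coverage, 3))
```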
Compliant flow designs for optimum lift control of wind turbine rotors
NASA Astrophysics Data System (ADS)
Williams, Theodore J. H.
An optimization approach was formulated to determine geometric designs that are most compliant to flow control devices. Single dielectric barrier discharge (SDBD) plasma actuators are used in the flow control design optimization as they are able to be incorporated into CFD simulations. An adjoint formulation was derived in order to have a numerically efficient way of calculating the shape derivatives on the surface of the geometric design. The design of a wind turbine blade retrofit for the JIMP 25kW wind turbine at Notre Dame is used to motivate analyses that utilize the optimization approach. The CFD simulations of the existing wind turbine blade were validated against wind tunnel testing. A one-parameter optimization was performed in order to design a trailing edge addition for the current wind turbine blade. The trailing edge addition was designed to meet a desired lift target while maximizing the lift-to-drag ratio. This analysis was performed at seven radial locations on the wind turbine blade. The new trailing edge retrofits were able to achieve the lift target for the outboard radial locations. The designed geometry has been fabricated and is currently being validated on a full-scale turbine and it is predicted to have an increase in annual energy production of 4.30%. The design of a trailing edge retrofit that includes the use of a SDBD plasma actuator was performed using a two-parameter optimization. The objective of this analysis was to meet the lift target and maximize the controllability of the design. The controllability is defined as the difference in lift between plasma on and plasma off cases. A trailing edge retrofit with the plasma actuator located on the pressure side was able to achieve the target passive lift increase while using plasma flow control to reduce the lift to below the original design. This design resulted in a highly compliant flow.
Jácome, Gabriel; Valarezo, Carla; Yoo, Changkyoo
2018-03-30
Pollution and the eutrophication process are increasing in Lake Yahuarcocha, and constant water quality monitoring is essential for a better understanding of the patterns occurring in this ecosystem. In this study, key sensor locations were determined using spatial and temporal analyses combined with geographical information systems (GIS) to assess the influence of weather features, anthropogenic activities, and other non-point pollution sources. A water quality monitoring network was established to obtain data on 14 physicochemical and microbiological parameters at each of seven sample sites over a period of 13 months. A spatial and temporal statistical approach using pattern recognition techniques, such as cluster analysis (CA) and discriminant analysis (DA), was employed to classify and identify the most important water quality parameters in the lake. The original monitoring network was reduced to four optimal sensor locations based on a fuzzy overlay of the interpolations of concentration variations of the most important parameters.
Vibration suppression for large scale adaptive truss structures using direct output feedback control
NASA Technical Reports Server (NTRS)
Lu, Lyan-Ywan; Utku, Senol; Wada, Ben K.
1993-01-01
In this article, the vibration control of adaptive truss structures, where the control actuation is provided by length adjustable active members, is formulated as a direct output feedback control problem. A control method named Model Truncated Output Feedback (MTOF) is presented. The method allows the control feedback gain to be determined in a decoupled and truncated modal space in which only the critical vibration modes are retained. The on-board computation required by MTOF is minimal; thus, the method is favorable for the applications of vibration control of large scale structures. The truncation of the modal space inevitably introduces spillover effect during the control process. In this article, the effect is quantified in terms of active member locations, and it is shown that the optimal placement of active members, which minimizes the spillover effect (and thus, maximizes the control performance) can be sought. The problem of optimally selecting the locations of active members is also treated.
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
Yobbi, Dann K.
2002-01-01
Tampa Bay depends on ground water for most of the water supply. Numerous wetlands and lakes in Pasco County have been impacted by the high demand for ground water. Central Pasco County, particularly the area within the Cypress Creek well field, has been greatly affected. Probable causes for the decline in surface-water levels are well-field pumpage and a decade-long drought. Efforts are underway to increase surface-water levels by developing alternative sources of water supply, thus reducing the quantity of well-field pumpage. Numerical ground-water flow simulations coupled with an optimization routine were used in a series of simulations to test the sensitivity of optimal pumpage to desired increases in surficial aquifer system heads in the Cypress Creek well field. The ground-water system was simulated using the central northern Tampa Bay ground-water flow model. Pumping solutions for 1987 equilibrium conditions and for a transient 6-month timeframe were determined for five test cases, each reflecting a range of desired target recovery heads at different head control sites in the surficial aquifer system. Results are presented in the form of curves relating average head recovery to total optimal pumpage. Pumping solutions are sensitive to the location of head control sites formulated in the optimization problem and as expected, total optimal pumpage decreased when desired target head increased. The distribution of optimal pumpage for individual production wells also was significantly affected by the location of head control sites. A pumping advantage was gained for test-case formulations where hydraulic heads were maximized in cells near the production wells, in cells within the steady-state pumping center cone of depression, and in cells within the area of the well field where confining-unit leakance is the highest. More water was pumped and the ratio of head recovery per unit decrease in optimal pumpage was more than double for test cases where hydraulic heads are maximized in cells located at or near the production wells. Additionally, the ratio of head recovery per unit decrease in pumpage was about three times more for the area where confining-unit leakance is the highest than for other leakance zone areas of the well field. For many head control sites, optimal heads corresponding to optimal pumpage deviated from the desired target recovery heads. Overall, pumping solutions were constrained by the limiting recovery values, initial head conditions, and by upper boundary conditions of the ground-water flow model.
US EPA OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS
The Optimal Well Locator (OWL): uses linear regression to fit a plane to the elevation of the water table in monitoring wells in each round of sampling. The slope of the plane fit to the water table is used to predict the direction and gradient of ground water flow. Along with ...
Zhou, Yuan; Shi, Tie-Mao; Hu, Yuan-Man; Gao, Chang; Liu, Miao; Song, Lin-Qi
2011-12-01
Based on geographic information system (GIS) technology and a multi-objective location-allocation (LA) model, and considering four relatively independent objective factors (population density level, air pollution level, urban heat island effect level, and urban land use pattern), an optimized location selection for urban parks within the Third Ring of Shenyang was conducted, and the selection results were compared with the spatial distribution of existing parks in order to evaluate the rationality of the spatial distribution of urban green spaces. In the location selection of urban green spaces in the study area, the air pollution factor was the most important, and, compared with a single objective factor, the weighted analysis of multiple objective factors provided an optimized spatial location selection for new urban green spaces. The combination of GIS technology with the LA model could be a new approach for the spatial optimization of urban green spaces.
Extreme learning machine based optimal embedding location finder for image steganography
Aljeroudi, Yazan
2017-01-01
In image steganography, determining the optimum location for embedding the secret message precisely, with minimum distortion of the host medium, remains a challenging issue, and an effective approach for selecting the best embedding location with the least deformation is far from being achieved. To attain this goal, we propose a novel high-performance approach for image steganography in which the extreme learning machine (ELM) algorithm is modified to create a supervised mathematical model. The ELM is first trained on a part of an image or any host medium before being tested in regression mode. This allows the optimal location for embedding the message to be chosen with the best values of the predicted evaluation metrics. Contrast, homogeneity, and other texture features are used for training on a new metric. Furthermore, the developed ELM is exploited to counter over-fitting during training. The performance of the proposed steganography approach is evaluated by computing the correlation, structural similarity (SSIM) index, fusion matrices, and mean square error (MSE). The modified ELM is found to outperform the existing approaches in terms of imperceptibility. The experimental results demonstrate that the proposed steganographic approach is highly proficient at preserving the visual information of an image. An improvement in imperceptibility of as much as 28% is achieved compared to the existing state-of-the-art methods. PMID:28196080
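For context, an extreme learning machine regressor of the kind mentioned above is only a random hidden layer followed by a single least-squares solve. The Python sketch below is a generic ELM, not the authors' modified version or their full feature pipeline; the "texture features" and block-quality scores are synthetic, and the final block ranking merely illustrates how predicted scores could drive the choice of embedding locations.

```python
import numpy as np

class ELMRegressor:
    """Basic extreme learning machine: random hidden layer, output weights by least squares."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # single analytic solve, no iteration
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy use: predict a block-quality score from simple texture features (contrast, homogeneity),
# then rank candidate blocks by predicted score. Features and scores are synthetic.
rng = np.random.default_rng(6)
features = rng.uniform(0, 1, size=(400, 2))                    # one row per candidate image block
quality = 0.7 * features[:, 0] - 0.3 * features[:, 1] + 0.05 * rng.standard_normal(400)
elm = ELMRegressor(n_hidden=40).fit(features[:300], quality[:300])
pred = elm.predict(features[300:])
best_blocks = np.argsort(pred)[-10:]                           # candidate embedding locations
print("held-out RMSE:", round(np.sqrt(np.mean((pred - quality[300:]) ** 2)), 4))
```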
NASA Astrophysics Data System (ADS)
Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo
2016-03-01
In many geodetic engineering applications it is necessary to describe a measured point cloud, obtained e.g. by laser scanner, by means of free-form curves or surfaces, e.g. with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are seriously degraded by data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence, in our approach we combine Monte Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of the control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
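A rough flavour of Monte-Carlo knot selection can be given with a 1D Python sketch. It uses SciPy's LSQUnivariateSpline as a stand-in fitting routine and a median residual as a crude nod to outlier resistance; it does not reproduce the paper's curvature-based weighting, resampling weights, or M-estimation of control points, and the profile, gap, and outliers are synthetic.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(7)

# Synthetic profile with a data gap and a few gross outliers (1D stand-in for a scan line).
x = np.sort(rng.uniform(0, 10, 400))
x = x[(x < 4.0) | (x > 5.5)]                                   # data gap between 4.0 and 5.5
y = np.sin(x) + 0.3 * np.sin(3 * x) + 0.02 * rng.standard_normal(x.size)
y[rng.choice(x.size, 5, replace=False)] += 1.0

def fit_with_random_knots(x, y, n_knots=8, n_trials=300):
    """Monte-Carlo knot search: draw interior knots from the data locations themselves,
    so no knot falls inside the gap, and keep the candidate with the smallest robust residual."""
    best, best_res = None, np.inf
    for _ in range(n_trials):
        t = np.sort(rng.choice(x[5:-5], n_knots, replace=False))
        try:
            spl = LSQUnivariateSpline(x, y, t, k=3)
        except ValueError:                                     # knot vector violates spline conditions
            continue
        res = np.median(np.abs(y - spl(x)))                    # median residual downweights outliers
        if res < best_res:
            best, best_res = spl, res
    return best, best_res

spline, res = fit_with_random_knots(x, y)
print("interior knots:", np.round(spline.get_knots(), 2), "median abs residual:", round(res, 4))
```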
A Framework for Developing Mobile Location Based Applications
2006-10-01
B into the application running on her mobile device. The mobile application contacts the appropriate service provided by T-Mobile, which calculates the optimal route between points A and B. The mobile application then displays driving directions to point B in a fashion similar to that found in car navigation systems. The mobile application requires a Bluetooth GPS receiver to be connected to the mobile device to determine its current position
Lee, Minji; Kim, Yun-Hee; Im, Chang-Hwan; Kim, Jung-Hoon; Park, Chang-hyun; Chang, Won Hyuk; Lee, Ahee
2015-01-01
Transcranial direct current stimulation (tDCS) non-invasively modulates brain function by inducing neuronal excitability. The conventional hot spot for inducing the highest current density in the hand motor area may not be the optimal site for effective stimulation. In this study, we investigated the influence of the center position of the anodal electrode on changes in motor cortical excitability. We considered three tDCS conditions in 16 healthy subjects: (i) real stimulation with the anodal electrode located at the conventional hand motor hot spot determined by motor evoked potentials (MEPs); (ii) real stimulation with the anodal electrode located at the point with the highest current density in the hand motor area as determined by electric current simulation; and (iii) sham stimulation. Motor cortical excitability as measured by MEP amplitude increased after both real stimulation conditions, but not after sham stimulation. Stimulation using the simulation-derived anodal electrode position, which was found to be posterior to the MEP hot spot for all subjects, induced higher motor cortical excitability. Individual positioning of the anodal electrode, based on the consideration of anatomical differences between subjects, appears to be important for maximizing the effects of tDCS. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bai, F.; Gagar, D.; Foote, P.; Zhao, Y.
2017-02-01
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the difference in the time at which the signal generated by the damage event arrives at different sensors in an array is essential in performing localisation. Currently, this is determined using a fixed threshold, which is particularly prone to errors when not set to optimal values. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed-threshold methods. It was found that the 1D location accuracy of the new methods was within the range of <1-7.1% of the monitored region, compared to 2.7% for the AIC method and a range of 1.8-9.4% for the conventional fixed-threshold method at different threshold levels.
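The AIC picker used here as the established comparison method is compact enough to show directly. The Python sketch below implements the standard Maeda-type formulation, AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])), with the onset taken at the AIC minimum; the AE-like waveform, sampling rate, and burst parameters are synthetic.

```python
import numpy as np

def aic_onset(x):
    """Akaike Information Criterion picker: the onset estimate is the sample index
    at which AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])) is minimum."""
    N = len(x)
    k = np.arange(1, N - 1)
    aic = np.array([i * np.log(np.var(x[:i]) + 1e-12) +
                    (N - i - 1) * np.log(np.var(x[i:]) + 1e-12) for i in k])
    return k[np.argmin(aic)]

# Synthetic AE-like trace: noise followed by a decaying burst starting at sample 600.
rng = np.random.default_rng(8)
fs, onset_true = 1_000_000, 600                       # 1 MHz sampling, true onset index
t = np.arange(2000) / fs
signal = 0.02 * rng.standard_normal(2000)
burst = np.sin(2 * np.pi * 150e3 * t[:1400]) * np.exp(-t[:1400] * 8e3)
signal[onset_true:] += burst
print("picked onset:", aic_onset(signal), "true onset:", onset_true)
```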
Optimization of self-acting step thrust bearings for load capacity and stiffness.
NASA Technical Reports Server (NTRS)
Hamrock, B. J.
1972-01-01
Linearized analysis of a finite-width rectangular step thrust bearing. Dimensionless load capacity and stiffness are expressed in terms of a Fourier cosine series. The dimensionless load capacity and stiffness were found to be a function of the dimensionless bearing number, the pad length-to-width ratio, the film thickness ratio, the step location parameter, and the feed groove parameter. The equations obtained in the analysis were verified. The assumptions imposed were substantiated by comparing the results with an existing exact solution for the infinite width bearing. A digital computer program was developed which determines optimal bearing configuration for maximum load capacity or stiffness. Simple design curves are presented. Results are shown for both compressible and incompressible lubrication. Through a parameter transformation the results are directly usable in designing optimal step sector thrust bearings.
NASA Astrophysics Data System (ADS)
Konda, Chiharu; Londry, Frank A.; Bendiak, Brad; Xia, Yu
2014-08-01
A systematic approach is described that can pinpoint the stereo-structures (sugar identity, anomeric configuration, and location) of individual sugar units within linear oligosaccharides. Using a highly modified mass spectrometer, dissociation of linear oligosaccharides in the gas phase was optimized along multiple-stage tandem dissociation pathways (MSn, n = 4 or 5). The instrument was a hybrid triple quadrupole/linear ion trap mass spectrometer capable of high-efficiency bidirectional ion transfer between quadrupole arrays. Different types of collision-induced dissociation (CID), either on-resonance ion trap or beam-type CID, could be utilized at any given stage of dissociation, enabling either glycosidic bond cleavages or cross-ring cleavages to be maximized when wanted. The approach first involves optimizing the isolation of disaccharide units as an ordered set of overlapping substructures via glycosidic bond cleavages during early stages of MSn, with explicit intent to minimize cross-ring cleavages. Subsequently, cross-ring cleavages were optimized for individual disaccharides to yield key diagnostic product ions (m/z 221). Finally, fingerprint patterns that establish stereochemistry and anomeric configuration were obtained from the diagnostic ions via CID. Model linear oligosaccharides were derivatized at the reducing end, allowing overlapping ladders of disaccharides to be isolated from MSn. High-confidence stereo-structural determination was achieved by matching MSn CID of the diagnostic ions to synthetic standards via a spectral matching algorithm. Using this MSn (n = 4 or 5) approach, the stereo-structures, anomeric configurations, and locations of three individual sugar units within two pentasaccharides were successfully determined.
Hippocampal brain-network coordination during volitional exploratory behavior enhances learning
Voss, Joel L.; Gonsalves, Brian D.; Federmeier, Kara D.; Tranel, Daniel; Cohen, Neal J.
2010-01-01
Exploratory behaviors during learning determine what is studied and when, helping to optimize subsequent memory performance. We manipulated how much control subjects had over the position of a moving window through which they studied objects and their locations, in order to elucidate the cognitive and neural determinants of exploratory behaviors. Our behavioral, neuropsychological, and neuroimaging data indicate volitional control benefits memory performance, and is linked to a brain network centered on the hippocampus. Increases in correlated activity between the hippocampus and other areas were associated with specific aspects of memory, suggesting that volitional control optimizes interactions among specialized neural systems via the hippocampus. Memory is therefore an active process intrinsically linked to behavior. Furthermore, brain structures typically seen as passive participants in memory encoding (e.g., the hippocampus) are actually part of an active network that controls behavior dynamically as it unfolds. PMID:21102449
Self-learning control system for plug-in hybrid vehicles
DeVault, Robert C [Knoxville, TN
2010-12-14
A system is provided to instruct a plug-in hybrid electric vehicle how to optimally use electric propulsion from a rechargeable energy storage device to reach an electric recharging station, while maintaining as high a state of charge (SOC) as desired along the route prior to arriving at the recharging station at a minimum SOC. The system can include the step of calculating a straight-line distance and/or actual distance between an orientation point and the determined instant present location to determine when to optimally initiate a charge-depleting phase. The system can limit extended driving on a deeply discharged rechargeable energy storage device and reduce the number of deep discharge cycles for the rechargeable energy storage device, thereby improving its effective lifetime. This "Just-in-Time" strategy can be initiated automatically, without operator input, to accommodate the unsophisticated operator and without needing a navigation system/GPS input.
Hippocampal brain-network coordination during volitional exploratory behavior enhances learning.
Voss, Joel L; Gonsalves, Brian D; Federmeier, Kara D; Tranel, Daniel; Cohen, Neal J
2011-01-01
Exploratory behaviors during learning determine what is studied and when, helping to optimize subsequent memory performance. To elucidate the cognitive and neural determinants of exploratory behaviors, we manipulated the control that human subjects had over the position of a moving window through which they studied objects and their locations. Our behavioral, neuropsychological and neuroimaging data indicate that volitional control benefits memory performance and is linked to a brain network that is centered on the hippocampus. Increases in correlated activity between the hippocampus and other areas were associated with specific aspects of memory, which suggests that volitional control optimizes interactions among specialized neural systems through the hippocampus. Memory is therefore an active process that is intrinsically linked to behavior. Furthermore, brain structures that are typically seen as passive participants in memory encoding (for example, the hippocampus) are actually part of an active network that controls behavior dynamically as it unfolds.
Park, Jun -Sang; Ray, Atish K.; Dawson, Paul R.; ...
2016-05-02
A shrink-fit sample is manufactured with a Ti-8Al-1Mo-1V alloy to introduce a multiaxial residual stress field in the disk of the sample. A set of strain and orientation pole figures are measured at various locations across the disk using synchrotron high-energy X-ray diffraction. Two approaches, the traditional sin²Ψ method and the bi-scale optimization method, are taken to determine the stresses in the disk based on the measured strain and orientation pole figures, to explore the range of solutions that are possible for the stress field within the disk. While the stress components computed using the sin²Ψ method and the bi-scale optimization method have similar trends, their magnitudes are significantly different. Lastly, it is suspected that the local texture variation in the material is the cause of this discrepancy.
An interval programming model for continuous improvement in micro-manufacturing
NASA Astrophysics Data System (ADS)
Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun
2018-03-01
Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weights of the location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space by allowing the quality level to be adjusted. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.
A hydroeconomic modeling framework for optimal integrated management of forest and water
NASA Astrophysics Data System (ADS)
Garcia-Prats, Alberto; del Campo, Antonio D.; Pulido-Velazquez, Manuel
2016-10-01
Forests play a determining role in the hydrologic cycle, with water being the most important ecosystem service they provide in semiarid regions. However, this contribution is usually neither quantified nor explicitly valued. The aim of this study is to develop a novel hydroeconomic modeling framework for assessing and designing the optimal integrated forest and water management for forested catchments. The optimization model explicitly integrates changes in water yield in the stands (increase in groundwater recharge) induced by forest management and the value of the additional water provided to the system. The model determines the optimal schedule of silvicultural interventions in the stands of the catchment in order to maximize the total net benefit in the system. Canopy cover and biomass evolution over time were simulated using growth and yield allometric equations specific to the species in Mediterranean conditions. Silvicultural operation costs according to stand density and canopy cover were modeled using local cost databases. Groundwater recharge was simulated using HYDRUS, calibrated and validated with data from the experimental plots. To illustrate the presented modeling framework, a case study was carried out in a planted pine forest (Pinus halepensis Mill.) located in south-western Valencia province (Spain). The optimized scenario increased groundwater recharge. This novel modeling framework can be used in the design of a "payment for environmental services" scheme in which water beneficiaries could contribute to fund and promote efficient forest management operations.
The Optimal Well Locator (OWL) uses linear regression to fit a plane to the elevation of the water table in monitoring wells in each round of sampling. The slope of the plane fit to the water table is used to predict the direction and gradient of ground water flow. Along with ...
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
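To make the idea concrete, the following is a minimal sketch, not the author's implementation, assuming the Kreisselmeier-Steinhauser (KS) envelope is formed over each function together with its negative so that it approximates max|f_i(x)| and descends to a minimum at every simultaneous root; the two example functions and the Nelder-Mead descent are illustrative choices only.

```python
# Hypothetical sketch: locating a simultaneous root of f1(x, y) = 0 and
# f2(x, y) = 0 by minimizing a Kreisselmeier-Steinhauser (KS) envelope.
import numpy as np
from scipy.optimize import minimize

def ks(values, rho=50.0):
    """KS envelope: a smooth, conservative approximation of max(values)."""
    m = np.max(values)
    return m + np.log(np.sum(np.exp(rho * (values - m)))) / rho

def objective(x):
    f1 = x[0]**2 + x[1]**2 - 4.0      # circle of radius 2
    f2 = x[0] - x[1]                  # line y = x
    # Envelope over each function and its negative ~ max(|f1|, |f2|),
    # which descends to a minimum at every simultaneous root.
    return ks(np.array([f1, -f1, f2, -f2]))

res = minimize(objective, x0=[1.0, 0.5], method="Nelder-Mead")
print(res.x)   # should land near (sqrt(2), sqrt(2)), a root of both functions
```

In the paper's setting, the same envelope can be appended to a nonlinear programming formulation so that equality constraints are satisfied at the minimum of the merged objective.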
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1992-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
Time-reversal optical tomography: detecting and locating extended targets in a turbid medium
NASA Astrophysics Data System (ADS)
Wu, Binlin; Cai, W.; Xu, M.; Gayen, S. K.
2012-03-01
Time-reversal optical tomography (TROT) is developed to locate extended target(s) in a highly scattering turbid medium and to estimate their optical strength and size. The approach uses the diffusion approximation of the radiative transfer equation for light propagation, along with the time-reversal (TR) multiple signal classification (MUSIC) scheme for separating signal and noise subspaces to assess target location. A MUSIC pseudospectrum is calculated using the eigenvectors of the TR matrix T; its poles provide the target locations. Based on the pseudospectrum contours, retrieval of target size is modeled as an optimization problem using a "local contour" method. The eigenvalues of T are related to the optical strengths of the targets. The efficacy of TROT in obtaining the location, size, and optical strength of one absorptive target, one scattering target, and two absorptive targets, all at different noise levels, was tested using simulated data. Target locations were always accurately determined. Error in optical strength estimates was small even at a 20% noise level. Target size and shape were more sensitive to noise. Results from simulated data demonstrate high potential for application of TROT in practical biomedical imaging.
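The MUSIC step can be sketched generically as below. This is a schematic stand-in, not the TROT implementation: it assumes a Hermitian time-reversal matrix T and a precomputed array greens_vectors of forward-model responses g(r) for the candidate grid locations (which in TROT would come from the diffusion-approximation model); the peaks of the returned pseudospectrum indicate likely target positions.

```python
# Hypothetical sketch of the MUSIC step: eigendecompose a time-reversal
# matrix T and evaluate a pseudospectrum whose peaks indicate target locations.
import numpy as np

def music_pseudospectrum(T, greens_vectors, n_signal):
    """greens_vectors: array of shape (n_grid, n_detectors) holding the model
    response g(r) for each candidate grid location r."""
    w, V = np.linalg.eigh(T)                 # T assumed Hermitian
    order = np.argsort(w)[::-1]              # sort eigenvalues, largest first
    noise_space = V[:, order[n_signal:]]     # eigenvectors spanning the noise subspace
    P = np.empty(len(greens_vectors))
    for i, g in enumerate(greens_vectors):
        g = g / np.linalg.norm(g)
        # The projection of g(r) onto the noise subspace vanishes at target
        # locations, so its reciprocal (the pseudospectrum) peaks there.
        P[i] = 1.0 / (np.linalg.norm(noise_space.conj().T @ g) ** 2 + 1e-12)
    return P
```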
Baek, Seung Ok; Cho, Hee Kyung; Jung, Gil Su; Son, Su Min; Cho, Yun Woo; Ahn, Sang Ho
2014-09-01
Transcutaneous neuromuscular electrical stimulation (NMES) can stimulate contractions in deep lumbar stabilizing muscles. An optimal protocol has not been devised for the activation of these muscles by NMES, and information is lacking regarding an optimal stimulation point on the abdominal wall. The goal was to determine a single optimized stimulation point on the abdominal wall for transcutaneous NMES for the activation of deep lumbar stabilizing muscles. Ultrasound images of the spinal stabilizing muscles were captured during NMES at three sites on the lateral abdominal wall. After an optimal location for the placement of the electrodes was determined, changes in the thickness of the lumbar multifidus (LM) were measured during NMES. Three stimulation points were investigated using 20 healthy physically active male volunteers. A reference point R, 1 cm superior to the iliac crest along the midaxillary line, was used. Three study points were used: stimulation point S1 was located 2 cm superior and 2 cm medial to the anterior superior iliac spine, stimulation point S3 was 2 cm below the lowest rib along the same sagittal plane as S1, and stimulation point S2 was midway between S1 and S3. Sessions were conducted stimulating at S1, S2, or S3 using R for reference. Real-time ultrasound imaging (RUSI) of the abdominal muscles was captured during each stimulation session. In addition, RUSI images were captured of the LM during stimulation at S1. Thickness, as measured by RUSI, of the transverse abdominis (TrA), obliquus internus, and obliquus externus was greater during NMES than at rest for all three study points (p<.05). Transverse abdominis was significantly stimulated more by NMES at S1 than at the other points (p<.05). The LM thickness was also significantly greater during NMES at S1 than at rest (p<.05). Neuromuscular electrical stimulation at S1 optimally activated deep spinal stabilizing muscles, TrA and LM, as evidenced by RUSI. The authors recommend this optimal stimulation point be used for NMES in the course of lumbar spine stabilization training in patients having difficulty initiating contraction of these muscles. Copyright © 2014 Elsevier Inc. All rights reserved.
Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate
NASA Astrophysics Data System (ADS)
Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.
2008-08-01
The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
Analysis Methodology for Optimal Selection of Ground Station Site in Space Missions
NASA Astrophysics Data System (ADS)
Nieves-Chinchilla, J.; Farjas, M.; Martínez, R.
2013-12-01
Optimization of ground station sites is especially important in complex missions that include several small satellites (clusters or constellations), such as the QB50 project, where one ground station must be able to track several space vehicles, even simultaneously. The design of the communication system therefore has to take careful account of the ground station site and the relevant signal phenomena, which depend on the frequency band. These aspects become even more relevant for establishing a trusted communication link when the ground segment is sited in urban areas and/or low orbits are selected for the space segment. In addition, updated cartography with high-resolution data of the location and its surroundings helps to develop siting recommendations for spacecraft tracking and hence to improve effectiveness. The objectives of this analysis methodology are: completion of cartographic information, modelling of the obstacles that hinder communication between the ground and space segments, and representation, in the generated 3D scene, of the degree of signal/noise impairment caused by the phenomena that interfere with communication. The integration of new geographic data capture technologies, such as 3D laser scanning, allows the antenna elevation mask to be optimized at its AOS and LOS azimuths along the visible horizon, maximizing visibility time with space vehicles. Furthermore, from the captured three-dimensional point cloud, specific information is selected and, using 3D modeling techniques, the 3D scene of the antenna site and its surroundings is generated. The resulting 3D model reveals nearby obstacles related to the cartographic conditions, such as mountain formations and buildings, as well as any additional obstacles that interfere with the operational quality of the antenna (other antennas and electronic devices that emit or receive in the same band). To test the proposed ground station site, the methodology uses spacecraft mission simulation software to analyze and quantify how the geographic accuracy of the spacecraft positions along the horizon visible from the antenna increases communication time with the ground station. Experimental results obtained from a ground station located at ETSIT-UPM in Spain (QBito nanosatellite, the UPM spacecraft mission within the QB50 project) show that selection of the optimal site increases the field of view from the antenna and hence helps to meet mission requirements.
Numerical Leak Detection in a Pipeline Network of Complex Structure with Unsteady Flow
NASA Astrophysics Data System (ADS)
Aida-zade, K. R.; Ashrafova, E. R.
2017-12-01
An inverse problem for a pipeline network of complex loopback structure is solved numerically. The problem is to determine the locations and amounts of leaks from unsteady flow characteristics measured at some pipeline points. The features of the problem include impulse functions involved in a system of hyperbolic differential equations, the absence of classical initial conditions, and boundary conditions specified as nonseparated relations between the states at the endpoints of adjacent pipeline segments. The problem is reduced to a parametric optimal control problem without initial conditions, but with nonseparated boundary conditions. The latter problem is solved by applying first-order optimization methods. Results of numerical experiments are presented.
Case studies of Induced Earthquakes in Ohio for 2016 and 2017
NASA Astrophysics Data System (ADS)
Friberg, P. A.; Brudzinski, M.; Kozlowska, M.; Loughner, E.; Langenkamp, T.; Dricker, I.
2017-12-01
Over the last four years, unconventional oil and gas production activity in the Utica shale play in Ohio has induced over 20 earthquake sequences (Friberg et al, 2014; Skoumal et al, 2016; Friberg et al, 2016; Kozlowska et al, in submission), including a few new ones in 2017. The majority of the induced events have been attributed to optimally oriented faults located in crystalline basement rocks, which are closer to the Utica formation than to the Marcellus shale, a shallower formation more typically targeted in Pennsylvania and West Virginia. A number of earthquake sequences in 2016 and 2017 are examined using multi-station cross-correlation template matching techniques. We examine the Gutenberg-Richter b-values and, where possible, the b-value evolution of the earthquake sequences to help determine the seismogenesis of the events. Refined earthquake locations are determined with HypoDD using data from stations operated by the USGS, IRIS, ODNR, Miami University, and PASEIS.
Nodal failure index approach to groundwater remediation design
Lee, J.; Reeves, H.W.; Dowding, C.H.
2008-01-01
Computer simulations often are used to design and to optimize groundwater remediation systems. We present a new computationally efficient approach that calculates the reliability of a remedial design at every location in a model domain with a single simulation. The estimated reliability and other model information are used to select the best remedial option for the given site conditions, conceptual model, and available data. To evaluate design performance, we introduce the nodal failure index (NFI) to determine the number of nodal locations at which the probability of success is below the design requirement. The strength of the NFI approach is that selected areas of interest can be specified for analysis and the best remedial design determined for this target region. An example application of the NFI approach using a hypothetical model shows how the spatial distribution of reliability can be used as a decision support system in groundwater remediation design. © 2008 ASCE.
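The counting idea behind the NFI can be sketched in a few lines. This is a minimal illustration, not the authors' code, assuming a per-node probability-of-success array (e.g., from Monte Carlo runs of a groundwater model); the threshold, grid size, and random values are placeholders.

```python
# Hypothetical sketch: nodal failure index (NFI) as the count of nodes whose
# estimated probability of remedial success falls below the design requirement.
import numpy as np

def nodal_failure_index(p_success, required_reliability=0.9, target_mask=None):
    """p_success: per-node probability of meeting the cleanup criterion.
    target_mask: optional boolean array restricting the count to a region of interest."""
    nodes = p_success if target_mask is None else p_success[target_mask]
    return int(np.sum(nodes < required_reliability))

# Example: compare two candidate remedial designs on a 50 x 50 model grid.
rng = np.random.default_rng(0)
design_a = rng.uniform(0.7, 1.0, size=(50, 50))
design_b = rng.uniform(0.85, 1.0, size=(50, 50))
print(nodal_failure_index(design_a), nodal_failure_index(design_b))  # lower is better
```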
Multi-hop path tracing of mobile robot with multi-range image
NASA Astrophysics Data System (ADS)
Choudhury, Ramakanta; Samal, Chandrakanta; Choudhury, Umakanta
2010-02-01
It is well known that image processing depends heavily upon the image representation technique. This paper finds the optimal path of a mobile robot in a specified area in which obstacles are predefined and may be modified. The optimal path is represented using the quadtree method: the image is successively subdivided into quadrants, from which the quadtree is developed. In the quadtree, obstacle-free areas and partially filled areas are represented with different notations. Once the quadtree is built, the algorithm finds the optimal path by employing a neighbor-finding technique to move the robot from the source to the destination. The algorithm traverses the entire tree and locates the common ancestor needed for the computation. The computation and the algorithm aim at easing the robot's ability to trace the optimal path with the help of adjacencies between neighboring nodes, determining such adjacencies in the horizontal, vertical, and diagonal directions. Efforts have been made to determine the movement to adjacent blocks in the quadtree, to detect transitions between blocks of equal size, and finally to generate the resulting path.
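A compact sketch of the representation step only is given below; it is not the paper's algorithm and omits the neighbor-finding and path-tracing logic. It assumes a square, power-of-two binary occupancy grid and labels each quadrant FREE, FULL (obstacle), or MIXED, the structure a quadtree path search would operate on.

```python
# Hypothetical sketch: recursive quadtree decomposition of a binary occupancy grid
# into FREE, FULL (obstacle), and MIXED quadrants.
import numpy as np

def build_quadtree(grid, x=0, y=0, size=None):
    """grid: square boolean array (power-of-two side); True marks an obstacle cell."""
    size = grid.shape[0] if size is None else size
    block = grid[y:y + size, x:x + size]
    if not block.any():
        return ("FREE", x, y, size)
    if block.all():
        return ("FULL", x, y, size)
    half = size // 2
    return ("MIXED", x, y, size, [
        build_quadtree(grid, x,        y,        half),
        build_quadtree(grid, x + half, y,        half),
        build_quadtree(grid, x,        y + half, half),
        build_quadtree(grid, x + half, y + half, half),
    ])

# Example: an 8 x 8 workspace with one rectangular obstacle.
grid = np.zeros((8, 8), dtype=bool)
grid[2:4, 4:8] = True
tree = build_quadtree(grid)
print(tree[0], len(tree[4]))   # root is MIXED with four child quadrants
```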
Theory of Arachnid Prey Localization
NASA Astrophysics Data System (ADS)
Stürzl, W.; Kempter, R.; van Hemmen, J. L.
2000-06-01
Sand scorpions and many other arachnids locate their prey through highly sensitive slit sensilla at the tips (tarsi) of their eight legs. This sensor array responds to vibrations with stimulus-locked action potentials encoding the target direction. We present a neuronal model to account for stimulus angle determination using a population of second-order neurons, each receiving excitatory input from one tarsus and inhibition from a triad opposite to it. The input opens a time window whose width determines a neuron's firing probability. Stochastic optimization is realized through tuning the balance between excitation and inhibition. The agreement with experiments on the sand scorpion is excellent.
Single-tier city logistics model for single product
NASA Astrophysics Data System (ADS)
Saragih, N. I.; Nur Bahagia, S.; Suprayogi; Syabri, I.
2017-11-01
This research develops a single-tier city logistics model consisting of suppliers, UCCs, and retailers. The questions addressed are how to determine the locations of UCCs, allocate retailers to opened UCCs, assign suppliers to opened UCCs, control inventory in the three entities involved, and determine the vehicle routes from opened UCCs to retailers. This model has never been developed before. All decisions are optimized simultaneously. Demand is probabilistic, following a normal distribution, and a single product is considered.
NASA Astrophysics Data System (ADS)
Davis, A. D.; Huan, X.; Heimbach, P.; Marzouk, Y.
2017-12-01
Borehole data are essential for calibrating ice sheet models. However, field expeditions for acquiring borehole data are often time-consuming, expensive, and dangerous. It is thus essential to plan the best sampling locations, maximizing the value of the data while minimizing costs and risks. We present an uncertainty quantification (UQ) workflow based on a rigorous probabilistic framework to achieve these objectives. First, we employ an optimal experimental design (OED) procedure to compute borehole locations that yield the highest expected information gain. We take into account practical considerations of location accessibility (e.g., proximity to research sites, terrain, and ice velocity may affect the feasibility of drilling) and robustness (e.g., real-time constraints such as weather may force researchers to drill at sub-optimal locations near those originally planned) by incorporating a penalty reflecting accessibility as well as sensitivity to deviations from the optimal locations. Next, we extract vertical temperature profiles from these boreholes and formulate a Bayesian inverse problem to reconstruct past surface temperatures. Using a model of temperature advection/diffusion, the top boundary condition (corresponding to surface temperatures) is calibrated via efficient Markov chain Monte Carlo (MCMC). The overall procedure can then be iterated to choose new optimal borehole locations for the next expeditions. Through this work, we demonstrate powerful UQ methods for designing experiments, calibrating models, making predictions, and assessing sensitivity, all performed in an uncertain environment. We develop a theoretical framework as well as practical software within an intuitive workflow, and illustrate their usefulness for combining data and models for environmental and climate research.
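A toy version of the design step is sketched below under a linear-Gaussian assumption, in which the expected information gain of a candidate borehole set reduces to a log-determinant and accessibility enters as a simple additive penalty; the actual workflow uses a full Bayesian OED formulation with MCMC, and all matrices and penalties here are illustrative inputs.

```python
# Hypothetical sketch: exhaustive selection of borehole locations that maximize a
# D-optimality (expected information gain) score minus an accessibility penalty.
import numpy as np
from itertools import combinations

def expected_gain(J, prior_cov, noise_var, idx):
    """Linear-Gaussian EIG: 0.5 * log det(I + J_s Sigma J_s^T / sigma^2)."""
    Js = J[list(idx), :]
    M = np.eye(len(idx)) + Js @ prior_cov @ Js.T / noise_var
    return 0.5 * np.linalg.slogdet(M)[1]

def choose_boreholes(J, prior_cov, noise_var, access_penalty, n_boreholes):
    """J: sensitivities of candidate observations to model parameters;
    access_penalty: per-candidate cost reflecting how hard the site is to reach."""
    best, best_score = None, -np.inf
    for idx in combinations(range(J.shape[0]), n_boreholes):
        score = expected_gain(J, prior_cov, noise_var, idx) - sum(access_penalty[i] for i in idx)
        if score > best_score:
            best, best_score = idx, score
    return best, best_score
```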
Extrinsic and intrinsic index finger muscle attachments in an OpenSim upper-extremity model.
Lee, Jong Hwa; Asakawa, Deanna S; Dennerlein, Jack T; Jindrich, Devin L
2015-04-01
Musculoskeletal models allow estimation of muscle function during complex tasks. We used objective methods to determine possible attachment locations for index finger muscles in an OpenSim upper-extremity model. Data-driven optimization algorithms, Simulated Annealing and Hooke-Jeeves, estimated tendon locations crossing the metacarpophalangeal (MCP), proximal interphalangeal (PIP), and distal interphalangeal (DIP) joints by minimizing the difference between model-estimated and experimentally measured moment arms. Sensitivity analysis revealed that multiple sets of muscle attachments with similar optimized moment arms are possible, requiring additional assumptions or data to select a single set of values. The smoothest muscle paths were assumed to be biologically reasonable. Estimated tendon attachments resulted in variance accounted for (VAF) between calculated moment arms and measured values of 78% for flexion/extension and 81% for ab/adduction at the MCP joint. VAF averaged 67% at the PIP joint and 54% at the DIP joint. VAF values at the PIP and DIP joints partially reflect the constant moment arms reported for muscles about these joints. However, all moment arm values found through optimization were non-linear and non-constant. Relationships between moment arms and joint angles were best described by quadratic equations for tendons at the PIP and DIP joints.
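The fitting idea can be sketched as below. This is a simplified illustration, not the study's OpenSim workflow: the model_moment_arm function is a hypothetical geometric stand-in for the musculoskeletal model's moment-arm computation, and the measured data, bounds, and parameterization are invented for the example.

```python
# Hypothetical sketch: estimate a tendon attachment parameterization by minimizing
# the squared mismatch between model-predicted and measured moment arms.
import numpy as np
from scipy.optimize import dual_annealing

joint_angles = np.linspace(0.0, 1.2, 20)              # rad, MCP flexion angles
measured_ma = 0.011 + 0.002 * np.cos(joint_angles)    # m, illustrative measurements

def model_moment_arm(attachment, angles):
    # Placeholder geometric model standing in for the OpenSim moment-arm computation.
    r, offset = attachment
    return r + offset * np.cos(angles)

def cost(attachment):
    return np.sum((model_moment_arm(attachment, joint_angles) - measured_ma) ** 2)

bounds = [(0.005, 0.02), (-0.005, 0.005)]             # m, plausible parameter ranges
result = dual_annealing(cost, bounds, seed=1)
print(result.x)                                       # recovers ~(0.011, 0.002)
```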
Zeng, Xiaozheng; McGough, Robert J.
2009-01-01
The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
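The core propagation step with a pressure input plane can be written compactly as below. This is a minimal sketch under simplifying assumptions (uniform grid, homogeneous lossless medium); it does not reproduce the paper's analysis of input-plane size and location or the fast nearfield method used to compute the input plane.

```python
# Hypothetical sketch: angular spectrum propagation of a 2D input pressure plane
# to a parallel output plane at distance z in a homogeneous medium.
import numpy as np

def angular_spectrum_propagate(p0, dx, z, k):
    """p0: complex pressure on the input plane (N x N), dx: grid spacing [m],
    z: propagation distance [m], k: wavenumber 2*pi*f/c [rad/m]."""
    N = p0.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    kx, ky = np.meshgrid(kx, kx, indexing="ij")
    kz_sq = k**2 - kx**2 - ky**2
    # Propagating components acquire a phase shift; evanescent ones decay exponentially.
    kz = np.sqrt(kz_sq.astype(complex))
    H = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(p0) * H)
```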
Optimization of planting pattern plan in Logung irrigation area using linear program
NASA Astrophysics Data System (ADS)
Wardoyo, Wasis; Setyono
2018-03-01
The Logung irrigation area is located in Kudus Regency, Central Java Province, Indonesia. The irrigation area, 2810 ha in extent, receives its water supply from the Logung dam. Yet the utilization of water at the Logung dam is not optimal, and the water is not evenly distributed. Therefore, this study discusses the optimization of irrigation water utilization based on the start of the planting season. The optimization begins with an analysis of hydrology, climatology, and river discharge in order to determine the irrigation water needs. After determining irrigation water needs, six alternatives for the planting pattern rice-secondary crop-sugarcane, with different early planting periods (1st November, 2nd November, 3rd November, 1st December, 2nd December, and 3rd December), are introduced. This is followed by an analysis of water distribution conducted using a linear program assisted by the POM-Quantity Method for Windows 3 software, with the reliable discharge and the available land area as limits. The outputs of this calculation are the land area that can be planted, by crop type and growing season, and the profit from the harvest yields. Based on the optimum area of each crop over the six alternatives, the optimum was obtained for the early planting period of 3rd December, with a production profit of Rp 113.397.338.854,- for the planting pattern rice/beans/sugarcane-rice/beans/sugarcane-beans/sugarcane.
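A toy linear program in the same spirit is sketched below, maximizing planting profit over crop areas subject to a land limit and a seasonal water limit. It is solved with scipy rather than the software used in the study, and the profit and water coefficients (and the water-supply figure) are illustrative placeholders, not the paper's data.

```python
# Hypothetical sketch: maximize planting profit subject to total land area and a
# seasonal water-availability constraint (coefficients are illustrative only).
import numpy as np
from scipy.optimize import linprog

profit = np.array([25.0, 18.0, 30.0])      # profit per ha: rice, beans, sugarcane
water = np.array([9000.0, 3000.0, 6000.0]) # m3 of irrigation water per ha per season
c = -profit                                 # linprog minimizes, so negate the profit
A_ub = np.vstack([np.ones(3), water])       # total-area row and total-water row
b_ub = np.array([2810.0, 1.6e7])            # ha of command area, m3 of reliable supply
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(res.x, -res.fun)                      # optimal area per crop and total profit
```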
Data on cost-optimal Nearly Zero Energy Buildings (NZEBs) across Europe.
D'Agostino, Delia; Parker, Danny
2018-04-01
This data article refers to the research paper "A model for the cost-optimal design of Nearly Zero Energy Buildings (NZEBs) in representative climates across Europe" [1]. The reported data deal with the design optimization of a residential building prototype located in representative European locations. The study focuses on cost-optimal choices and efficiency measures in new buildings depending on the climate. The data linked within this article relate to the modelled building energy consumption, renewable production, potential energy savings, and costs. The data allow visualization of energy consumption before and after the optimization, the selected efficiency measures, costs, and renewable production. The reduction of electricity and natural gas consumption towards the NZEB target can be visualized, together with incremental and cumulative costs, in each location. Further data are available on building geometry, costs, CO2 emissions, envelope, materials, lighting, appliances, and systems.
An approach to solve replacement problems under intuitionistic fuzzy nature
NASA Astrophysics Data System (ADS)
Balaganesan, M.; Ganesan, K.
2018-04-01
Because day-to-day problems involve imprecision, researchers use fuzzy sets in their treatment of replacement problems. The aim of this paper is to solve replacement theory problems with triangular intuitionistic fuzzy numbers. An effective methodology based on a fuzziness index and a location index is proposed to determine the optimal solution of the replacement problem. A numerical example is presented to validate the proposed method.
Simulation and video animation of canal flushing created by a tide gate
Schoellhamer, David H.
1988-01-01
A tide-gate algorithm was added to a one-dimensional unsteady flow model that was calibrated, verified, and used to determine the locations of as many as five tide gates that would maximize flushing in two canal systems. Results from the flow model were used to run a branched Lagrangian transport model to simulate the flushing of a conservative constituent from the canal systems both with and without tide gates. A tide gate produces a part-time riverine flow through the canal system that improves flushing along the flow path created by the tide gate. Flushing with no tide gates and with a single optimally located tide gate are shown with a video animation.
Saito, Atsushi; Nawano, Shigeru; Shimizu, Akinobu
2017-05-01
This paper addresses joint optimization for segmentation and shape priors, including translation, to overcome inter-subject variability in the location of an organ. Because a simple extension of the previous exact optimization method is too computationally complex, we propose a fast approximation for optimization. The effectiveness of the proposed approximation is validated in the context of gallbladder segmentation from a non-contrast computed tomography (CT) volume. After spatial standardization and estimation of the posterior probability of the target organ, simultaneous optimization of the segmentation, shape, and location priors is performed using a branch-and-bound method. Fast approximation is achieved by combining sampling in the eigenshape space to reduce the number of shape priors and an efficient computational technique for evaluating the lower bound. Performance was evaluated using threefold cross-validation of 27 CT volumes. Optimization in terms of translation of the shape prior significantly improved segmentation performance. The proposed method achieved a result of 0.623 on the Jaccard index in gallbladder segmentation, which is comparable to that of state-of-the-art methods. The computational efficiency of the algorithm is confirmed to be good enough to allow execution on a personal computer. Joint optimization of the segmentation, shape, and location priors was proposed, and it proved to be effective in gallbladder segmentation with high computational efficiency.
Barlow, P.M.; Wagner, B.J.; Belitz, K.
1996-01-01
The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.
Machining fixture layout optimization using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Dou, Jianping; Wang, Xingsong; Wang, Lei
2011-05-01
Optimization of fixture layout (locator and clamp locations) is critical to reducing the geometric error of the workpiece during the machining process. In this paper, the application of the particle swarm optimization (PSO) algorithm is presented to minimize the workpiece deformation in the machining region. A PSO-based approach is developed to optimize fixture layout by integrating the ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. A particle library approach is used to decrease the total computation time. A computational experiment on a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
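A bare-bones PSO loop is sketched below. It is not the paper's implementation: in the fixture-layout application the evaluate() call would run the APDL finite-element analysis and return the workpiece deformation for a candidate layout, whereas here it is replaced by a simple stand-in test function, and the swarm parameters are generic defaults.

```python
# Hypothetical sketch: a minimal particle swarm optimizer. In the fixture-layout
# application, evaluate() would call the FE solver and return workpiece deformation.
import numpy as np

def evaluate(layout):
    return np.sum((layout - 0.3) ** 2)       # stand-in for an APDL deformation analysis

def pso(n_particles=20, n_dims=6, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_particles, n_dims))      # candidate layouts (normalized coords)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([evaluate(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        f = np.array([evaluate(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

print(pso())   # best layout found and its objective value
```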
NASA Astrophysics Data System (ADS)
Vilhelmsen, Troels N.; Ferré, Ty P. A.
2016-04-01
Hydrological models are often developed to forecast future behavior in response to natural or human-induced changes in the stresses affecting hydrologic systems. Commonly, these models are conceptualized and calibrated based on existing data and information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e., to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at defining the expected data worth based on existing models; however, these studies are almost all limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. The methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) needed to reduce the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations and the optimal number of samples, as well as a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
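A compact sketch of the linear data-worth idea follows: under a first-order (Gaussian) approximation, the post-calibration variance of a forecast can be evaluated for any candidate set of observations from the model's sensitivities alone, so candidate campaigns can be ranked before any data are collected. The matrices below are placeholders and the exhaustive search is only practical for small candidate sets; it is not the authors' implementation.

```python
# Hypothetical sketch: rank combinations of candidate observations by how much
# they reduce the linearized variance of a single model forecast.
import numpy as np
from itertools import combinations

def forecast_variance(J_obs, y_sens, prior_cov, noise_var):
    """First-order posterior variance of the forecast s = y_sens @ parameters."""
    if J_obs.shape[0] == 0:
        post_cov = prior_cov
    else:
        prior_inv = np.linalg.inv(prior_cov)
        post_cov = np.linalg.inv(prior_inv + J_obs.T @ J_obs / noise_var)
    return float(y_sens @ post_cov @ y_sens)

def best_campaign(J_candidates, y_sens, prior_cov, noise_var, n_new):
    """J_candidates: sensitivities of each candidate observation to the parameters."""
    scored = []
    for idx in combinations(range(J_candidates.shape[0]), n_new):
        var = forecast_variance(J_candidates[list(idx), :], y_sens, prior_cov, noise_var)
        scored.append((var, idx))
    return min(scored)   # smallest forecast variance = largest data worth
```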
Optimizing Dynamical Network Structure for Pinning Control
NASA Astrophysics Data System (ADS)
Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo
2016-04-01
Controlling dynamics of a network from any initial state to a final desired state has many applications in different disciplines from engineering to biology and social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) optimizing simultaneously the locations of driver nodes and feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks, and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods including placing drivers based on various centrality measures (degree, betweenness, closeness and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.
Peploe, C; McErlain-Naylor, S A; Harland, A R; King, M A
2018-06-01
Three-dimensional kinematic data of bat and ball were recorded for 239 individual shots performed by twenty batsmen ranging from club to international standard. The impact location of the ball on the bat face was determined and assessed against the resultant instantaneous post-impact ball speed and measures of post-impact bat torsion and ball direction. Significant negative linear relationships were found between post-impact ball speed and the absolute distance of impact from the midline medio-laterally and sweetspot longitudinally. Significant cubic relationships were found between the distance of impact from the midline of the bat medio-laterally and both a measure of bat torsion and the post-impact ball direction. A "sweet region" on the bat face was identified whereby impacts within 2 cm of the sweetspot in the medio-lateral direction, and 4.5 cm in the longitudinal direction, caused reductions in ball speed of less than 6% from the optimal value, and deviations in ball direction of less than 10° from the intended target. This study provides a greater understanding of the margin for error afforded to batsmen, allowing researchers to assess shot success in more detail, and highlights the importance of players generating consistently central impact locations when hitting for optimal performance.
Guzmán-Venegas, Rodrigo A; Araneda, Oscar F; Silvestre, Rony A
2014-12-01
Botulinum toxin (BTX) acts on the neuromuscular junction, which can be located via the innervation zone (IZ). Clinically, the motor point (MP) is treated as homologous to the IZ and is used as the injection site for BTX. Differences in the effectiveness of BTX application between MP and IZ locations have been determined. The objective was to compare the location of the MP obtained using electrical stimulation with the location of the IZ obtained using a linear surface electrode array on the biceps brachii muscle. The biceps brachii muscle of twenty men was assessed. The MP was located using the torque generated by electrical stimulation. The IZ was detected using a linear surface electrode array. A difference between the MP and IZ positions (75.8 vs. 86.5 mm, delta 10.7 mm; p=0.003, post-hoc power 0.89) was observed. The magnitude of the difference between the MP and the IZ may be clinically relevant. The use of the IZ location, identified with surface electromyography, as a guide to optimize BTX injection is proposed. Copyright © 2014 Elsevier Ltd. All rights reserved.
A systematic approach for the location of hand sanitizer dispensers in hospitals.
Cure, Laila; Van Enk, Richard; Tiong, Ewing
2014-09-01
Compliance with hand hygiene practices is directly affected by the accessibility and availability of cleaning agents. Nevertheless, the decision of where to locate these dispensers is often not explicitly or fully addressed in the literature. In this paper, we study the problem of selecting the locations to install alcohol-based hand sanitizer dispensers throughout a hospital unit as an indirect approach to maximize compliance with hand hygiene practices. We investigate the relevant criteria in selecting dispenser locations that promote hand hygiene compliance, propose metrics for the evaluation of various location configurations, and formulate a dispenser location optimization model that systematically incorporates such criteria. A complete methodology to collect data and obtain the model parameters is described. We illustrate the proposed approach using data from a general care unit at a collaborating hospital. A cost analysis was performed to study the trade-offs between usability and cost. The proposed methodology can help in evaluating the current location configuration, determining the need for change, and establishing the best possible configuration. It can be adapted to incorporate alternative metrics, tailored to different institutions and updated as needed with new internal policies or safety regulation.
Method and apparatus for precision laser micromachining
Chang, Jim; Warner, Bruce E.; Dragon, Ernest P.
2000-05-02
A method and apparatus for micromachining and microdrilling which results in a machined part of superior surface quality is provided. The system uses a near diffraction limited, high repetition rate, short pulse length, visible wavelength laser. The laser is combined with a high speed precision tilting mirror and suitable beam shaping optics, thus allowing a large amount of energy to be accurately positioned and scanned on the workpiece. As a result of this system, complicated, high resolution machining patterns can be achieved. A cover plate may be temporarily attached to the workpiece. Then as the workpiece material is vaporized during the machining process, the vapors condense on the cover plate rather than the surface of the workpiece. In order to eliminate cutting rate variations as the cutting direction is varied, a randomly polarized laser beam is utilized. A rotating half-wave plate is used to achieve the random polarization. In order to correctly locate the focus at the desired location within the workpiece, the position of the focus is first determined by monitoring the speckle size while varying the distance between the workpiece and the focussing optics. When the speckle size reaches a maximum, the focus is located at the first surface of the workpiece. After the location of the focus has been determined, it is repositioned to the desired location within the workpiece, thus optimizing the quality of the machined area.
Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan
2013-01-01
Location-allocation is a combinatorial optimization problem and is classified as NP-hard (non-deterministic polynomial-time hard). Therefore, the solution of such a problem should shift from exact to heuristic or metaheuristic methods owing to the complexity of the problem. Locating medical centers and allocating the injured of an earthquake to them is highly important in earthquake disaster management, so a proper method will reduce the duration of the relief operation and consequently decrease the number of fatalities. This paper presents the development of a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem using the capabilities of a Geographic Information System (GIS). In the proposed method, the outer genetic algorithm is applied to the location part of the problem and the inner genetic algorithm optimizes the resource allocation. The final outcome of the implemented method includes the spatial locations of the new required medical centers. The method also calculates how many of the injured at each demand point should be taken to each of the existing and new medical centers. The results of the proposed method showed the high performance of the designed structure in solving a capacitated location-allocation problem that may arise in a disaster situation, when injured people have to be taken to medical centers within a reasonable time.
A new approach to the tradeoff between quality and accessibility of health care.
Tanke, Marit A C; Ikkersheim, David E
2012-05-01
Quality of care is associated with patient volume. Regionalization of care is therefore one of the approaches suited to improving quality of care. A disadvantage of regionalization is that the accessibility of the facilities can decrease. By investigating the tradeoff between quality and accessibility it is possible to determine the optimal number of treatment locations in a health care system. In this article we present a new model to quantitatively 'solve' this tradeoff, using breast cancer in the Netherlands as an example. We calculated the expected quality gains in quality-adjusted life years (QALYs) due to stepwise regionalization using the 'volume-outcome' literature for breast cancer. Decreased accessibility was operationalized as increased (travel) costs due to regionalization, using demographic data, drive-time information, and the national median income. The sum of the quality and accessibility functions determines the optimum range of treatment locations for this particular condition, given the 'volume-quality' relationship and Dutch demographics and geography. Currently, 94 locations offer breast cancer treatment in the Netherlands. Our model estimates that the optimum range of treatment locations for this condition in the Netherlands is from 15 to 44 locations. Our study shows that Dutch society would benefit from regionalization of breast cancer care, as the possible quality gains outweigh the increased travel costs. In addition, this model can be used for other medical conditions and in other countries. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
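The tradeoff calculation can be illustrated schematically: for each candidate number of treatment locations, sum the monetized quality gain from higher per-centre volumes and the added travel cost, and take the number that maximizes the net benefit. The functions and constants below are placeholders, not the paper's calibrated Dutch data; only the current count of 94 locations is taken from the abstract.

```python
# Hypothetical sketch: net benefit of concentrating care in n treatment locations,
# with placeholder quality-gain and travel-cost functions (not the paper's data).
import numpy as np

N_CURRENT = 94          # current number of Dutch breast cancer treatment locations
N_PATIENTS = 14000      # illustrative annual patient volume
VALUE_PER_QALY = 40000  # illustrative monetary value of one QALY

def quality_gain(n):
    # Placeholder volume-outcome effect: fewer, higher-volume centres gain QALYs.
    return VALUE_PER_QALY * 0.002 * N_PATIENTS * np.log(N_CURRENT / n)

def travel_cost_increase(n):
    # Placeholder accessibility effect: fewer centres mean longer, costlier trips.
    return 0.4 * (3000.0 / n) * N_PATIENTS

candidates = np.arange(10, N_CURRENT + 1)
net_benefit = quality_gain(candidates) - travel_cost_increase(candidates)
print(int(candidates[np.argmax(net_benefit)]))   # ~15 locations with these placeholders
```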
LED light design method for high contrast and uniform illumination imaging in machine vision.
Wu, Xiaojun; Gao, Guangming
2018-03-01
In machine vision, illumination is critical in determining the complexity of the inspection algorithms. Proper lighting yields clear, sharp images with the highest contrast and low noise between the object of interest and the background, which helps the target to be located, measured, or inspected. In contrast to the empirical trial-and-error convention of selecting off-the-shelf LED lights in machine vision, an optimization algorithm for LED light design is proposed in this paper. It is composed of contrast optimization modeling and a uniform illumination technique for non-normal incidence (UINI). The contrast optimization model is built from the surface reflection characteristics, e.g., the roughness, the refractive index, and the light direction, to maximize the contrast between the features of interest and the background. The UINI technique preserves the uniformity of the lighting optimized by the contrast model. Simulation and experimental results demonstrate that the optimization algorithm is effective and produces images with the highest contrast and uniformity, which is very useful in the design of LED illumination systems for machine vision.
NASA Astrophysics Data System (ADS)
Kolosionis, Konstantinos; Papadopoulou, Maria P.
2017-04-01
Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed, based on heuristic optimization methodologies and geostatistical modeling approaches, to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations during 3 different hydrological periods in the Mires basin in Crete, Greece, are used in the proposed framework, where regression kriging is applied to develop the spatial distribution of nitrate concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool determines a cost-effective observation-well network that contributes significant information to water managers and authorities. The elimination of observation wells that add little or no beneficial information to the groundwater level and quality mapping of the area can be achieved using estimation uncertainty and statistical error metrics without affecting the assessment of groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.
NASA Astrophysics Data System (ADS)
Kuhn, A. M.; Fennel, K.; Bianucci, L.
2016-02-01
A key feature of the North Atlantic Ocean's biological dynamics is the annual phytoplankton spring bloom. In the region comprising the continental shelf and adjacent deep ocean of the northwest North Atlantic, we identified two patterns of bloom development: 1) locations with cold temperatures and deep winter mixed layers, where the spring bloom peaks around April and the annual chlorophyll cycle has a large amplitude, and 2) locations with warmer temperatures and shallow winter mixed layers, where the spring bloom peaks earlier in the year, sometimes indiscernible from the fall bloom. These patterns result from a combination of limiting environmental factors and interactions among planktonic groups with different optimal requirements. Simple models that represent the ecosystem with a single phytoplankton (P) and a single zooplankton (Z) group are challenged to reproduce these ecological interactions. Here we investigate the effect that added complexity has on the simulated spatio-temporal chlorophyll distribution. We compare two ecosystem models, one that contains one P and one Z group, and one with two P and three Z groups. We consider three types of changes in complexity: 1) added dependencies among variables (e.g., temperature-dependent rates), 2) modified structural pathways, and 3) added pathways. Subsets of the most sensitive parameters are optimized in each model to replicate observations in the region. For computational efficiency, the parameter optimization is performed using 1D surrogates of a 3D model. We evaluate how model complexity affects model skill and whether the optimized parameter sets found for each model modify the interpretation of ecosystem functioning. Spatial differences in the parameter sets that best represent different areas hint at the existence of different ecological communities or at physical-biological interactions that are not represented in the simplest model. Our methodology emphasizes the combined use of observations, 1D models to help identify patterns, and 3D models able to simulate the environment more realistically, as a means of acquiring a predictive understanding of the ocean's ecology.
Real-time inverse planning for Gamma Knife radiosurgery.
Wu, Q Jackie; Chankong, Vira; Jitprapaikulsarn, Suradet; Wessels, Barry W; Einstein, Douglas B; Mathayomchan, Boonyanit; Kinsella, Timothy J
2003-11-01
The challenges of real-time Gamma Knife inverse planning are the large number of variables involved and the fact that the search space is unknown a priori. With limited collimator sizes, shots have to be heavily overlapped to form a smooth prescription isodose line that conforms to the irregular target shape. Such overlaps greatly influence the total number of shots per plan, making pre-determination of the total number of shots impractical. However, this total number of shots usually defines the search space, a prerequisite for most optimization methods. Since each shot only covers part of the target, a collection of shots at different locations and with various selected collimator sizes makes up the global dose distribution that conforms to the target. Hence, planning or placing these shots is a combinatorial optimization process that is computationally expensive by nature. We have previously developed a theory of shot placement and optimization based on skeletonization. The real-time inverse planning process reported in this paper is an expansion and clinical implementation of this theory. The complete planning process consists of two steps. The first step is to determine an optimal number of shots, including locations and sizes, and to assign an initial collimator size to each shot. The second step is to fine-tune the weights using a linear-programming technique. The objective function is to minimize the total dose to the target boundary (i.e., maximize the dose conformity). Results for an ellipsoid test target and ten clinical cases are presented. The clinical cases are also compared with the physician's manual plans. The target coverage is more than 99% for the manual plans and 97% for all the inverse plans. The RTOG PITV conformity indices for the manual plans are between 1.16 and 3.46, compared to 1.36 to 2.4 for the inverse plans. All the inverse plans are generated in less than 2 min, making real-time inverse planning a reality.
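A toy version of the second (weight-tuning) step is sketched below in the spirit of the linear program described: minimize the total dose to boundary sample points while keeping every target sample point at or above the prescription. The dose matrices and prescription value are random placeholders, not clinical data, and the shot locations/sizes from the first step are assumed given.

```python
# Hypothetical sketch: linear-programming shot-weight optimization. D_target and
# D_boundary hold the dose per unit shot weight at target and boundary sample points.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n_shots, n_target, n_boundary = 8, 60, 40
D_target = rng.uniform(0.5, 1.5, (n_target, n_shots))     # placeholder dose kernels
D_boundary = rng.uniform(0.1, 0.6, (n_boundary, n_shots))
prescription = 1.0

c = D_boundary.sum(axis=0)                  # total boundary dose per unit weight of each shot
A_ub = -D_target                            # D_target @ w >= prescription  ->  -D_target @ w <= -p
b_ub = -prescription * np.ones(n_target)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_shots, method="highs")
print(res.x)                                # optimized shot weights
```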
DOT National Transportation Integrated Search
2002-11-01
This paper develops an algorithm for optimally locating surveillance technologies with an emphasis on Automatic Vehicle Identification tag readers by maximizing the benefit that would accrue from measuring travel times on a transportation network. Th...
NASA Technical Reports Server (NTRS)
Buchholz, R. E.
1972-01-01
Results are presented from wind tunnel tests to improve the space shuttle booster baseline lateral-directional stability and control characteristics and to optimize the cruise engine location. Tests were conducted in a 7 x 10-foot transonic wind tunnel. The model employed was a 0.015-scale replica of a space shuttle booster. The three major objectives of the test were to determine: (1) force, static stability, and control effectiveness characteristics for varying angles of positive and negative wing dihedral and various combinations of wing tip and centerline dorsal fins; (2) force and static stability characteristics for cruise engines located on the body below the high aerodynamic canard; and (3) control effectiveness for the low-mounted wing configuration. The wing dihedral study was conducted at a cruise Mach number of 0.40 and a simulated altitude of 10,000 feet. Portions of the test were conducted to determine the control surface stability and control characteristics over the Mach number range of 0.4 to 1.2. The aerodynamic characteristics determined are based on a unit Reynolds number of approximately 2 million per foot. Boundary layer trip strips were employed to induce boundary layer transition.
A Novel Space Partitioning Algorithm to Improve Current Practices in Facility Placement
Jimenez, Tamara; Mikler, Armin R; Tiwari, Chetan
2012-01-01
In the presence of naturally occurring and man-made public health threats, the feasibility of regional bio-emergency contingency plans plays a crucial role in the mitigation of such emergencies. While the analysis of in-place response scenarios provides a measure of quality for a given plan, it involves human judgment to identify improvements in plans that are otherwise likely to fail. Since resource constraints and government mandates limit the availability of service provided in case of an emergency, computational techniques can determine optimal locations for providing emergency response, assuming that the uniform distribution of demand across homogeneous resources will yield an optimal service outcome. This paper presents an algorithm that recursively partitions the geographic space into sub-regions while equally distributing the population across the partitions. For this method, we have proven the existence of an upper bound on the deviation from the optimal population size for sub-regions. PMID:23853502
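The recursive, population-balancing partitioning idea can be illustrated with a simple median split that alternates axes. This is only a toy sketch of the general principle, not the authors' algorithm (which proves a bound on the deviation from the optimal sub-region population); the synthetic point set and the power-of-two region count are assumptions of the example.

```python
import numpy as np

def partition(points, k, axis=0):
    """Recursively bisect a 2-D point set at the median coordinate so that
    each of the k leaves holds (nearly) the same number of points.
    Returns a list of point arrays, one per sub-region. k is assumed to be
    a power of two in this toy sketch."""
    if k == 1:
        return [points]
    order = points[:, axis].argsort()
    half = len(points) // 2
    left, right = points[order[:half]], points[order[half:]]
    nxt = (axis + 1) % points.shape[1]
    return partition(left, k // 2, nxt) + partition(right, k // 2, nxt)

# Toy demand: 10,000 synthetic residence locations.
rng = np.random.default_rng(1)
pop = rng.random((10_000, 2))
regions = partition(pop, k=8)
print([len(r) for r in regions])   # population is split almost evenly
```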
Defining defect specifications to optimize photomask production and requalification
NASA Astrophysics Data System (ADS)
Fiekowsky, Peter
2006-10-01
Reducing defect repairs and accelerating defect analysis is becoming more important as the total cost of defect repairs on advanced masks increases. Photomask defect specs based on printability, as measured on AIMS microscopes, have been used for years, but the fundamental defect spec is still the defect size, as measured on the photomask, requiring the repair of many unprintable defects. ADAS, the Automated Defect Analysis System from AVI, is now available in most advanced mask shops. It makes the use of pure printability specs, or "Optimal Defect Specs", practical. This software uses advanced algorithms to eliminate false defects caused by approximations in the inspection algorithm, classify each defect, simulate each defect, and disposition each defect based on its printability and location. This paper defines "optimal defect specs", explains why they are now practical and economical, gives a method of determining them, and provides accuracy data.
Grist, Eric P M; Flegg, Jennifer A; Humphreys, Georgina; Mas, Ignacio Suay; Anderson, Tim J C; Ashley, Elizabeth A; Day, Nicholas P J; Dhorda, Mehul; Dondorp, Arjen M; Faiz, M Abul; Gething, Peter W; Hien, Tran T; Hlaing, Tin M; Imwong, Mallika; Kindermans, Jean-Marie; Maude, Richard J; Mayxay, Mayfong; McDew-White, Marina; Menard, Didier; Nair, Shalini; Nosten, Francois; Newton, Paul N; Price, Ric N; Pukrittayakamee, Sasithon; Takala-Harrison, Shannon; Smithuis, Frank; Nguyen, Nhien T; Tun, Kyaw M; White, Nicholas J; Witkowski, Benoit; Woodrow, Charles J; Fairhurst, Rick M; Sibley, Carol Hopkins; Guerin, Philippe J
2016-10-24
Artemisinin-resistant Plasmodium falciparum malaria parasites are now present across much of mainland Southeast Asia, where ongoing surveys are measuring and mapping their spatial distribution. These efforts require substantial resources. Here we propose a generic 'smart surveillance' methodology to identify optimal candidate sites for future sampling and thus map the distribution of artemisinin resistance most efficiently. The approach uses the 'uncertainty' map generated iteratively by a geostatistical model to determine optimal locations for subsequent sampling. The methodology is illustrated using recent data on the prevalence of the K13-propeller polymorphism (a genetic marker of artemisinin resistance) in the Greater Mekong Subregion. This methodology, which has broader application to geostatistical mapping in general, could improve the quality and efficiency of drug resistance mapping and thereby guide practical operations to eliminate malaria in affected areas.
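The iterative "sample where the model is most uncertain" loop can be sketched with any geostatistical or Gaussian-process surrogate. The snippet below uses scikit-learn's GaussianProcessRegressor as a stand-in for the paper's geostatistical model and picks the candidate site with the largest predictive standard deviation; all site coordinates and prevalence values are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)
surveyed_xy = rng.uniform(0, 100, size=(25, 2))          # surveyed sites (km)
prevalence = rng.uniform(0, 0.6, size=25)                # observed marker prevalence
candidates = rng.uniform(0, 100, size=(500, 2))          # possible new survey sites

gp = GaussianProcessRegressor(kernel=Matern(length_scale=20.0, nu=1.5),
                              alpha=1e-3, normalize_y=True)
gp.fit(surveyed_xy, prevalence)

# Pick the candidate where the fitted model is most uncertain.
_, std = gp.predict(candidates, return_std=True)
best = candidates[np.argmax(std)]
print("next survey site (max predictive uncertainty):", np.round(best, 1))
```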
Evaluation of ultrasonics and optimized radiography for 2219-T87 aluminum weldments
NASA Technical Reports Server (NTRS)
Clotfelter, W. N.; Hoop, J. M.; Duren, P. C.
1975-01-01
Ultrasonic studies are described which are specifically directed toward the quantitative measurement of randomly located defects previously found in aluminum welds with radiography or with dye penetrants. Experimental radiographic studies were also made to optimize techniques for welds of the thickness range to be used in fabricating the External Tank of the Space Shuttle. Conventional and innovative ultrasonic techniques were applied to the flaw size measurement problem. Advantages and disadvantages of each method are discussed. Flaw size data obtained ultrasonically were compared to radiographic data and to real flaw sizes determined by destructive measurements. Considerable success was achieved with pulse echo techniques and with 'pitch and catch' techniques. The radiographic work described demonstrates that careful selection of film exposure parameters for a particular application must be made to obtain optimized flaw detectability. Thus, film exposure techniques can be improved even though radiography is an old weld inspection method.
Aerodynamic configuration design using response surface methodology analysis
NASA Technical Reports Server (NTRS)
Engelund, Walter C.; Stanley, Douglas O.; Lepsch, Roger A.; Mcmillin, Mark M.; Unal, Resit
1993-01-01
An investigation has been conducted to determine a set of optimal design parameters for a single-stage-to-orbit reentry vehicle. Several configuration geometry parameters which had a large impact on the entry vehicle flying characteristics were selected as design variables: the fuselage fineness ratio, the nose to body length ratio, the nose camber value, the wing planform area scale factor, and the wing location. The optimal geometry parameter values were chosen using a response surface methodology (RSM) technique which allowed for a minimum dry weight configuration design that met a set of aerodynamic performance constraints on the landing speed, and on the subsonic, supersonic, and hypersonic trim and stability levels. The RSM technique utilized, specifically the central composite design method, is presented, along with the general vehicle conceptual design process. Results are presented for an optimized configuration along with several design trade cases.
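As an illustration of the response surface approach, the sketch below builds a two-factor central composite design, fits a full quadratic surface, and minimizes it. The "dry weight" function is made up and stands in for the vehicle analysis codes, and only two of the paper's design variables are represented.

```python
import numpy as np

# Central composite design for two coded factors (e.g. fineness ratio, wing area
# scale), with axial distance alpha = sqrt(2).
a = np.sqrt(2.0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],       # factorial points
                   [-a, 0], [a, 0], [0, -a], [0, a],          # axial points
                   [0, 0]])                                   # center point

def dry_weight(x1, x2):                                       # surrogate "truth"
    return 100 + 8*x1 - 5*x2 + 3*x1*x2 + 6*x1**2 + 4*x2**2

y = dry_weight(design[:, 0], design[:, 1])

# Fit the full quadratic response surface
# y = b0 + b1 x1 + b2 x2 + b12 x1 x2 + b11 x1^2 + b22 x2^2
x1, x2 = design[:, 0], design[:, 1]
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Minimize the fitted surface on a fine grid to pick the "optimal" design point.
g = np.linspace(-a, a, 201)
G1, G2 = np.meshgrid(g, g)
Z = (coef[0] + coef[1]*G1 + coef[2]*G2 + coef[3]*G1*G2
     + coef[4]*G1**2 + coef[5]*G2**2)
i, j = np.unravel_index(Z.argmin(), Z.shape)
print("fitted coefficients:", np.round(coef, 2))
print("minimum-weight design (coded units):", round(G1[i, j], 2), round(G2[i, j], 2))
```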
Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan
2016-01-01
Fixture plays an important part in constraining excessive sheet metal part deformation at machining, assembly, and measuring stages during the whole manufacturing process. However, it is still a difficult and nontrivial task to design and optimize sheet metal fixture locating layout at present because there is always no direct and explicit expression describing sheet metal fixture locating layout and responding deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist design and optimization of sheet metal fixture locating layout. The RBF neural network model is constructed by training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method. PMID:27127499
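A generic surrogate of this kind can be sketched with an off-the-shelf radial basis function interpolator standing in for the RBF neural network; the layouts, the deformation response, and the candidate search below are synthetic placeholders, not the paper's finite element data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic training set: each row is a fixture locating layout (here, the
# coordinates of three locators flattened to 6 numbers); the target is the
# peak sheet deformation a finite element run would report.
rng = np.random.default_rng(3)
layouts = rng.uniform(0.0, 1.0, size=(80, 6))
deformation = layouts[:, ::2].std(axis=1) + 0.05 * rng.standard_normal(80)

surrogate = RBFInterpolator(layouts, deformation, kernel="thin_plate_spline")

# Query the surrogate for a batch of candidate layouts and keep the best one.
candidates = rng.uniform(0.0, 1.0, size=(2000, 6))
pred = surrogate(candidates)
print("predicted best layout:", np.round(candidates[pred.argmin()], 3))
print("predicted peak deformation:", round(float(pred.min()), 4))
```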
Optimization of Geothermal Well Placement under Geological Uncertainty
NASA Astrophysics Data System (ADS)
Schulte, Daniel O.; Arnold, Dan; Demyanov, Vasily; Sass, Ingo; Geiger, Sebastian
2017-04-01
Well placement optimization is critical to the commercial success of geothermal projects. However, uncertainties of geological parameters prohibit optimization based on a single scenario of the subsurface, particularly when few expensive wells are to be drilled. The optimization of borehole locations is usually based on numerical reservoir models to predict reservoir performance and entails the choice of objectives to optimize (total enthalpy, minimum enthalpy rate, production temperature) and the development options to adjust (well location, pump rate, difference in production and injection temperature). Optimization traditionally requires trying different development options on a single geological realization, yet many different interpretations of the subsurface are possible. Therefore, we aim to optimize across a range of representative geological models to account for geological uncertainty in geothermal optimization. We present an approach that uses a response surface methodology based on a large number of geological realizations selected by experimental design to optimize the placement of geothermal wells in a realistic field example. A large number of geological scenarios and design options were simulated and the response surfaces were constructed using polynomial proxy models, which consider both geological uncertainties and design parameters. The polynomial proxies were validated against additional simulation runs and shown to provide an adequate representation of the model response for the cases tested. The resulting proxy models allow for the identification of the optimal borehole locations given the mean response of the geological scenarios from the proxy (i.e. maximizing or minimizing the mean response). The approach is demonstrated on the realistic Watt field example by optimizing the borehole locations to maximize the mean heat extraction from the reservoir under geological uncertainty. The training simulations are based on a comprehensive semi-synthetic data set of a hierarchical benchmark case study for a hydrocarbon reservoir, which specifically considers the interpretational uncertainty in the modeling work flow. The optimal choice of boreholes prolongs the time to cold water breakthrough and allows for higher pump rates and increased water production temperatures.
Optimization methods for decision making in disease prevention and epidemic control.
Deng, Yan; Shen, Siqian; Vorobeychik, Yevgeniy
2013-11-01
This paper investigates problems of disease prevention and epidemic control (DPEC), in which we optimize two sets of decisions: (i) vaccinating individuals and (ii) closing locations, given respective budgets, with the goal of minimizing the expected number of infected individuals after intervention. The spread of diseases is inherently stochastic due to the uncertainty about disease transmission and human interaction. We use a bipartite graph to represent individuals' propensities to visit a set of locations, and formulate two integer nonlinear programming models to optimize the choices of individuals to vaccinate and locations to close. Our first model assumes that if a location is closed, its visitors stay in a safe location and will not visit other locations. Our second model incorporates compensatory behavior by assuming multiple behavioral groups, each always visiting the most preferred locations that remain open. The paper develops algorithms based on a greedy strategy, dynamic programming, and integer programming, and compares their computational efficacy and solution quality. We test problem instances derived from daily behavior patterns of 100 randomly chosen individuals (corresponding to 195 locations) in Portland, Oregon, and provide policy insights regarding the use of the two DPEC models. Copyright © 2013 Elsevier Inc. All rights reserved.
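The greedy strategy mentioned above can be sketched on a synthetic bipartite visit graph: at each step, close the location whose closure most reduces a simple expected-infection surrogate. The visit matrix, the per-visit risk, and the independence assumption behind the objective are illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_people, n_locations, budget = 100, 20, 5

# visits[i, j] = 1 if individual i visits location j (bipartite graph).
visits = (rng.random((n_people, n_locations)) < 0.15).astype(float)
risk = rng.uniform(0.01, 0.2, size=n_locations)   # per-visit infection probability

def expected_infections(closed):
    open_mask = np.ones(n_locations, dtype=bool)
    open_mask[list(closed)] = False
    # P(individual escapes infection) = product over open visited locations of (1 - risk)
    p_escape = np.prod(1.0 - visits[:, open_mask] * risk[open_mask], axis=1)
    return float(np.sum(1.0 - p_escape))

closed = set()
for _ in range(budget):                      # greedy: close the location with the
    gains = {j: expected_infections(closed) - expected_infections(closed | {j})
             for j in range(n_locations) if j not in closed}
    best = max(gains, key=gains.get)
    closed.add(best)

print("locations to close:", sorted(closed))
print("expected infections after closures:", round(expected_infections(closed), 2))
```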
Optimizing Sensor and Actuator Arrays for ASAC Noise Control
NASA Technical Reports Server (NTRS)
Palumbo, Dan; Cabell, Ran
2000-01-01
This paper summarizes the development of an approach to optimizing the locations for arrays of sensors and actuators in active noise control systems. A type of directed combinatorial search, called Tabu Search, is used to select an optimal configuration from a much larger set of candidate locations. The benefit of using an optimized set is demonstrated. The importance of limiting actuator forces to realistic levels when evaluating the cost function is discussed. Results of flight testing an optimized system are presented. Although the technique has been applied primarily to Active Structural Acoustic Control systems, it can be adapted for use in other active noise control implementations.
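A minimal Tabu search over k-subsets of candidate locations is sketched below. The cost function (maximizing the minimum spacing between chosen sensors) is a made-up stand-in for the paper's control-performance cost, and the move structure (swap one selected location for an unselected one) is one common choice.

```python
import random

def tabu_search(cost, n_candidates, k, iters=500, tenure=15, seed=0):
    """Minimal Tabu search over k-subsets of candidate locations.
    `cost` maps a frozenset of selected indices to a scalar to minimize."""
    rng = random.Random(seed)
    current = frozenset(rng.sample(range(n_candidates), k))
    best, best_cost = current, cost(current)
    tabu = {}                                  # move -> iteration until which it is tabu
    for it in range(iters):
        moves = []
        for out in current:
            for inn in range(n_candidates):
                if inn in current:
                    continue
                cand = frozenset((current - {out}) | {inn})
                c = cost(cand)
                # a move is allowed if not tabu, or if it improves the global best
                if tabu.get((out, inn), -1) < it or c < best_cost:
                    moves.append((c, out, inn, cand))
        if not moves:
            break
        c, out, inn, current = min(moves)
        tabu[(inn, out)] = it + tenure         # forbid undoing the move for a while
        if c < best_cost:
            best, best_cost = current, c
    return best, best_cost

# Toy cost: reward configurations whose chosen sensors are well spread out.
random.seed(1)
coords = [(random.random(), random.random()) for _ in range(30)]
def spread_cost(sel):
    pts = [coords[i] for i in sel]
    return -min(((a[0]-b[0])**2 + (a[1]-b[1])**2)**0.5
                for i, a in enumerate(pts) for b in pts[i+1:])

sel, c = tabu_search(spread_cost, n_candidates=30, k=6)
print("selected sensor locations:", sorted(sel), "min pairwise spacing:", round(-c, 3))
```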
Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem
NASA Astrophysics Data System (ADS)
Skakov, E. S.; Malysh, V. N.
2018-03-01
The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving an NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the task of meta-optimization. Thus, the approach proposed in this work can be called “meta-metaheuristic”. A computational experiment has been performed that proves the effectiveness of the procedure for tuning the control parameters of metaheuristics.
Optimal siting of solid waste-to-value-added facilities through a GIS-based assessment.
Khan, Md Mohib-Ul-Haque; Vaezi, Mahdi; Kumar, Amit
2018-01-01
Siting a solid waste conversion facility requires an assessment of solid waste availability as well as ensuring compliance with environmental, social, and economic factors. The main idea behind this study was to develop a methodology to identify suitable locations for waste conversion facilities considering waste availability as well as environmental and social constraints. A geographic information system (GIS) spatial analysis was used to identify the most suitable areas and to screen out unsuitable lands. The analytic hierarchy process (AHP) was used for a multi-criteria evaluation of the relative preferences of different environmental and social factors. A case study was conducted for Alberta, a western province in Canada, by performing a province-wide waste availability assessment. The total available waste considered in this study was 4,077,514 tonnes/year for 19 census divisions collected from 79 landfills. Finally, a location-allocation analysis was performed to determine suitable locations for 10 waste conversion facilities across the province. Copyright © 2017 Elsevier B.V. All rights reserved.
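The AHP weighting step reduces to computing the principal eigenvector of a pairwise comparison matrix. The sketch below uses a hypothetical 3x3 judgment matrix for three siting criteria; the judgments and the criteria grouping are illustrative, not those used in the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three siting
# criteria: [environmental, social, waste availability]. Entry (i, j) states how
# much more important criterion i is than criterion j.
A = np.array([[1.0, 3.0, 1/2],
              [1/3, 1.0, 1/4],
              [2.0, 4.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # AHP priority vector

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("criterion weights:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3))     # < 0.1 is conventionally acceptable
```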
Let the pigeon drive the bus: pigeons can plan future routes in a room.
Gibson, Brett; Wilkinson, Matthew; Kelly, Debbie
2012-05-01
The task of determining an optimal route to several locations is called the traveling salesperson problem (TSP). The TSP has been used recently to examine spatial cognition in humans and non-human animals. It remains unclear whether or not the decision process of animals other than non-human primates utilizes rigid rule-based heuristics, or whether non-human animals are able to flexibly 'plan' future routes/behavior based on their knowledge of multiple locations. We presented pigeons in a One-way and Round-Trip group with TSPs that included two or three destinations (feeders) in a laboratory environment. The pigeons departed a start location, traveled to each feeder once before returning to a final destination. Pigeons weighed the proximity of the next location heavily, but appeared to plan ahead multiple steps when the travel costs for inefficient behavior appeared to increase. The results provide clear and strong evidence that animals other than primates are capable of planning sophisticated travel routes.
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is known as sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, detect anomalies and bursts, guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed using optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation is the basis to identify network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
Stinson, Craig A; Xia, Yu
2016-06-21
Tandem mass spectrometry (MS/MS) coupled with soft ionization is established as an essential platform for lipid analysis; however, determining high order structural information, such as the carbon-carbon double bond (C=C) location, remains challenging. Recently, our group demonstrated a method for sensitive and confident lipid C=C location determination by coupling online the Paternò-Büchi (PB) reaction with nanoelectrospray ionization (nanoESI) and MS/MS. Herein, we aimed to expand the scope of the PB reaction for lipid analysis by enabling the reaction with infusion ESI-MS/MS at much higher flow rates than demonstrated in the nanoESI setup (∼20 nL min⁻¹). In the new design, the PB reaction was effected in a fused silica capillary solution transfer line, which also served as a microflow UV reactor, prior to ESI. This setup allowed PB reaction optimization and kinetics studies. Under optimized conditions, a maximum of 50% PB reaction yield could be achieved for a standard glycerophosphocholine (PC) within 6 s of UV exposure over a wide flow rate range (0.1-10 μL min⁻¹). A solvent composition of 7:3 acetone:H2O (with 1% acid or base modifier) allowed the highest PB yields and good lipid ionization, while lower yields were obtained with an addition of a variety of organic solvents. Radical induced lipid peroxidation was identified to induce undesirable side reactions, which could be effectively suppressed by eliminating trace oxygen in the solution via N2 purge. Finally, the utility of coupling the PB reaction with infusion ESI-MS/MS was demonstrated by analyzing a yeast polar lipid extract where C=C bond locations were revealed for 35 glycerophospholipids (GPs).
Non-parametric early seizure detection in an animal model of temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.
2008-03-01
The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of the EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited the best overall performance in terms of its ability to detect a seizure with a high optimality index value and high sensitivity and specificity.
Cai, Hao; Long, Weiding; Li, Xianting; Kong, Lingjuan; Xiong, Shuang
2010-06-15
When hazardous contaminants are suddenly released indoors, prompt and proper emergency responses are critical to protect occupants. This paper aims to provide a framework for determining the optimal combination of ventilation and evacuation strategies by considering the uncertainty of source locations. The certainty of source locations is classified as complete certainty, incomplete certainty, and complete uncertainty to cover all possible situations. According to this classification, three types of decision analysis models are presented. A new concept, the efficiency factor of contaminant source (EFCS), is incorporated in these models to evaluate the payoffs of the ventilation and evacuation strategies. A procedure of decision-making based on these models is proposed and demonstrated by numerical studies of one hundred scenarios with ten ventilation modes, two evacuation modes, and five source locations. The results show that the models can be useful to direct the decision analysis of both the ventilation and evacuation strategies. In addition, the certainty of the source locations has an important effect on the outcomes of the decision-making. Copyright 2010 Elsevier B.V. All rights reserved.
Succession of hide–seek and pursuit–evasion at heterogeneous locations
Gal, Shmuel; Casas, Jérôme
2014-01-01
Many interactions between searching agents and their elusive targets are composed of a succession of steps, whether in the context of immune systems, predation or counterterrorism. In the simplest case, a two-step process starts with a search-and-hide phase, also called a hide-and-seek phase, followed by a round of pursuit–escape. Our aim is to link these two processes, usually analysed separately and with different models, in a single game theory context. We define a matrix game in which a searcher looks into each of a fixed number of discrete locations only once, searching for a hider, which can escape with varying probabilities according to its location. The value of the game is the overall probability of capture after k looks. The optimal search and hide strategies are described. If a searcher looks only once into any of the locations, an optimal hider chooses its hiding place so as to make all locations equally attractive. This optimal strategy remains true as long as the number of looks is below an easily calculated threshold; however, above this threshold, the optimal position for the hider is where it has the highest probability of escaping once spotted. PMID:24621817
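The value and the optimal hiding distribution of such a finite matrix game can be computed by linear programming. The sketch below uses a toy capture-probability matrix for a single look (one row per location searched); the numbers are illustrative, but the solution reproduces the property stated above: the hider spreads its probability so that every location is equally attractive to the searcher.

```python
import numpy as np
from scipy.optimize import linprog

# Toy capture-probability matrix: rows = searcher pure strategies (which single
# location to look in), columns = hiding locations; entry = P(capture), i.e. the
# hider is found and then fails to escape. Values are illustrative only.
M = np.array([[0.9, 0.0, 0.0],
              [0.0, 0.6, 0.0],
              [0.0, 0.0, 0.3]])
m, n = M.shape

# Hider minimizes the game value v:  minimize v  s.t.  M @ y <= v,  sum(y) = 1,  y >= 0.
c = np.r_[np.zeros(n), 1.0]                        # variables: y_1..y_n, v
A_ub = np.c_[M, -np.ones(m)]                       # M y - v <= 0
b_ub = np.zeros(m)
A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)], method="highs")

y, v = res.x[:n], res.x[-1]
print("optimal hiding distribution:", np.round(y, 3))  # weights the 'safest' spots more
print("value of the game (capture probability):", round(v, 3))
```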
Calzi, Sergio Li; Kent, David L.; Chang, Kyung-Hee; Padgett, Kyle R.; Afzal, Aqeela; Chandra, Saurav B.; Caballero, Sergio; English, Denis; Garlington, Wendy; Hiscott, Paul S.; Sheridan, Carl M.; Grant, Maria B.; Forder, John R.
2013-01-01
Precise localization of exogenously delivered stem cells is critical to our understanding of their reparative response. Our current inability to determine the exact location of small numbers of cells may hinder optimal development of these cells for clinical use. We describe a method using magnetic resonance imaging to track and localize small numbers of stem cells following transplantation. Endothelial progenitor cells (EPC) were labeled with monocrystalline iron oxide nanoparticles (MIONs) which neither adversely altered their viability nor their ability to migrate in vitro and allowed successful detection of limited numbers of these cells in muscle. MION-labeled stem cells were also injected into the vitreous cavity of mice undergoing the model of choroidal neovascularization, laser rupture of Bruch’s membrane. Migration of the MION-labeled cells from the injection site towards the laser burns was visualized by MRI. In conclusion, MION labeling of EPC provides a non-invasive means to define the location of small numbers of these cells. Localization of these cells following injection is critical to their optimization for therapy. PMID:19345699
Vehicle systems design optimization study
NASA Technical Reports Server (NTRS)
Gilmour, J. L.
1980-01-01
The optimum vehicle configuration and component locations are determined for an electric drive vehicle based on using the basic structure of a current production subcompact vehicle. The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to current internal combustion engine vehicles. Necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages, one at the front under the hood and a second at the rear under the cargo area, in order to achieve the desired weight distribution. The weight distribution criterion requires the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given size vehicle.
Structural Analysis Using Computer Based Methods
NASA Technical Reports Server (NTRS)
Dietz, Matthew R.
2013-01-01
The stiffness of a flex hose that will be used in the umbilical arms of the Space Launch System's mobile launcher needed to be determined in order to properly qualify ground umbilical plate behavior during vehicle separation post T-0. This data is also necessary to properly size and design the motors used to retract the umbilical arms. Therefore an experiment was created to determine the stiffness of the hose. Before the test apparatus for the experiment could be built, the structure had to be analyzed to ensure it would not fail under the given loading conditions. The design model was imported into the analysis software and optimized to decrease runtime while still providing accurate results and allowing for seamless meshing. Areas exceeding the allowable stresses in the structure were located and modified before submitting the design for fabrication. In addition, a mock up of a deep space habitat and its support frame was designed and needed to be analyzed for structural integrity under different loading conditions. The load cases were provided by the customer and were applied to the structure after optimizing the geometry. Once again, weak points in the structure were located, recommended design changes were made to the customer, and the process was repeated until the load conditions were met without exceeding the allowable stresses. After the stresses met the required factors of safety, the designs were released for fabrication.
Predicting bacteriophage proteins located in host cell with feature selection technique.
Ding, Hui; Liang, Zhi-Yong; Guo, Feng-Biao; Huang, Jian; Chen, Wei; Lin, Hao
2016-04-01
A bacteriophage is a virus that can infect a bacterium. The fate of an infected bacterium is determined by the bacteriophage proteins located in the host cell. Thus, reliably identifying bacteriophage proteins located in the host cell is extremely important to understand their functions and discover potential anti-bacterial drugs. In this paper, a computational method was therefore developed to recognize bacteriophage proteins located in host cells based only on their amino acid sequences. The analysis of variance (ANOVA) combined with incremental feature selection (IFS) was proposed to optimize the feature set. Using a jackknife cross-validation, our method can discriminate between bacteriophage proteins located in a host cell and bacteriophage proteins not located in a host cell with a maximum overall accuracy of 84.2%, and can further classify bacteriophage proteins located in host cell cytoplasm and in host cell membranes with a maximum overall accuracy of 92.4%. To enhance the value of the practical applications of the method, we built a web server called PHPred (http://lin.uestc.edu.cn/server/PHPred). We believe that PHPred will become a powerful tool to study bacteriophage proteins located in host cells and to guide related drug discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.
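The ANOVA-plus-IFS procedure can be sketched with standard scikit-learn components: rank features by their ANOVA F-score, then grow the feature set in ranked order and keep the size that maximizes cross-validated accuracy. The synthetic data and the SVM classifier below are stand-ins for the paper's sequence-derived features and predictor.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for sequence-derived features (e.g. amino acid composition).
X, y = make_classification(n_samples=300, n_features=60, n_informative=12,
                           random_state=0)

# 1) Rank features by their ANOVA F-score.
F, _ = f_classif(X, y)
order = np.argsort(F)[::-1]

# 2) Incremental feature selection: grow the feature set in ranked order and
#    keep the size that maximizes cross-validated accuracy.
scores = []
for k in range(1, len(order) + 1):
    acc = cross_val_score(SVC(kernel="rbf"), X[:, order[:k]], y, cv=5).mean()
    scores.append(acc)

best_k = int(np.argmax(scores)) + 1
print("optimal number of features:", best_k, "accuracy:", round(max(scores), 3))
```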
MARS Science Laboratory Post-Landing Location Estimation Using Post2 Trajectory Simulation
NASA Technical Reports Server (NTRS)
Davis, J. L.; Shidner, Jeremy D.; Way, David W.
2013-01-01
The Mars Science Laboratory (MSL) Curiosity rover landed safely on Mars August 5th, 2012 at 10:32 PDT, Earth Received Time. Immediately following touchdown confirmation, best estimates of position were calculated to assist in determining official MSL locations during entry, descent and landing (EDL). Additionally, estimated balance mass impact locations were provided and used to assess how predicted locations compared to actual locations. For MSL, the Program to Optimize Simulated Trajectories II (POST2) was the primary trajectory simulation tool used to predict and assess EDL performance from cruise stage separation through rover touchdown and descent stage impact. This POST2 simulation was used during MSL operations for EDL trajectory analyses in support of maneuver decisions and imaging MSL during EDL. This paper presents the simulation methodology used and the results of pre/post-landing MSL location estimates and associated imagery from the Mars Reconnaissance Orbiter's (MRO) High Resolution Imaging Science Experiment (HiRISE) camera. To generate these estimates, the MSL POST2 simulation nominal and Monte Carlo data, flight telemetry from onboard navigation, relay orbiter positions from MRO and Mars Odyssey, and HiRISE-generated digital elevation models (DEM) were utilized. A comparison of predicted rover and balance mass location estimates against actual locations is also presented.
Two-phase simulation-based location-allocation optimization of biomass storage distribution
USDA-ARS?s Scientific Manuscript database
This study presents a two-phase simulation-based framework for finding the optimal locations of biomass storage facilities, which are a very critical link in the biomass supply chain and can help to address biorefinery concerns (e.g. steady supply, uniform feedstock properties, stable feedstock costs,...
A Comparison of Risk Sensitive Path Planning Methods for Aircraft Emergency Landing
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Plaunt, Christian; Smith, David E.; Smith, Tristan
2009-01-01
Determining the best site to land a damaged aircraft presents some interesting challenges for standard path planning techniques. There are multiple possible locations to consider, the space is 3-dimensional with dynamics, the criteria for a good path is determined by overall risk rather than distance or time, and optimization really matters, since an improved path corresponds to greater expected survival rate. We have investigated a number of different path planning methods for solving this problem, including cell decomposition, visibility graphs, probabilistic road maps (PRMs), and local search techniques. In their pure form, none of these techniques have proven to be entirely satisfactory - some are too slow or unpredictable, some produce highly non-optimal paths or do not find certain types of paths, and some do not cope well with the dynamic constraints when controllability is limited. In the end, we are converging towards a hybrid technique that involves seeding a roadmap with a layered visibility graph, using PRM to extend that roadmap, and using local search to further optimize the resulting paths. We describe the techniques we have investigated, report on our experiments with these techniques, and discuss when and why various techniques were unsatisfactory.
Optimal pitching axis location of flapping wings for efficient hovering flight.
Wang, Q; Goosen, J F L; van Keulen, F
2017-09-01
Flapping wings can pitch passively about their pitching axes due to their flexibility, inertia, and aerodynamic loads. A shift in the pitching axis location can dynamically alter the aerodynamic loads, which in turn changes the passive pitching motion and the flight efficiency. Therefore, it is of great interest to investigate the optimal pitching axis for flapping wings to maximize the power efficiency during hovering flight. In this study, flapping wings are modeled as rigid plates with non-uniform mass distribution. The wing flexibility is represented by a linearly torsional spring at the wing root. A predictive quasi-steady aerodynamic model is used to evaluate the lift generated by such wings. Two extreme power consumption scenarios are modeled for hovering flight, i.e. the power consumed by a drive system with and without the capacity of kinetic energy recovery. For wings with different shapes, the optimal pitching axis location is found such that the cycle-averaged power consumption during hovering flight is minimized. Optimization results show that the optimal pitching axis is located between the leading edge and the mid-chord line, which shows close resemblance to insect wings. An optimal pitching axis can save up to 33% of power during hovering flight when compared to traditional wings used by most flapping wing micro air vehicles (FWMAVs), which typically use the straight leading edge as the pitching axis. With the optimized pitching axis, flapping wings show higher pitching amplitudes and start the pitching reversals in advance of the sweeping reversals. These phenomena lead to higher lift-to-drag ratios and, thus, explain the lower power consumption. In addition, the optimized pitching axis provides the drive system with a higher potential to recycle energy during the deceleration phases as compared to its traditional counterpart. This observation underlines the particular importance of the wing pitching axis location for energy-efficient FWMAVs when using kinetic energy recovery drive systems.
Feasibility Studies for a Mediterranean Neutrino Observatory - The NEMO.RD Project
NASA Astrophysics Data System (ADS)
de Marzo, C.; Ambriola, M.; Bellotti, R.; Cafagna, F.; Calicchio, M.; Ciacio, F.; Circella, M.; de Marzo, C.; Montaruli, T.; Falchieri, D.; Gabrielli, A.; Gandolfi, E.; Masetti, M.; Vitullo, C.; Zanarini, G.; Habel, R.; Usai, I.; Aiello, S.; Burrafato, G.; Caponetto, L.; Costanzo, E.; Lopresti, D.; Pappalardo, L.; Petta, C.; Randazzo, N.; Russo, G. V.; Troia, O.; Barnà, R.; D'Amico, V.; de Domenico, E.; de Pasquale, D.; Giacobbe, S.; Italiano, A.; Migliardo, F.; Salvato, G.; Trafirò, A.; Trimarchi, M.; Ameli, F.; Bonori, M.; Bottai, S.; Capone, A.; Desiati, P.; Massa, F.; Masullo, R.; Salusti, E.; Vicini, M.; Coniglione, R.; Migneco, E.; Piattelli, P.; Riccobene, R.; Sapienza, P.; Cordelli, M.; Trasatti, L.; Valente, V.; de Marchis, G.; Piccari, L.; Accerboni, E.; Mosetti, R.; Astraldi, M.; Gasparini, G. P.; Ulzega, A.; Orrù, P.
2000-06-01
The NEMO.RD Project is a feasibility study of a km3 underwater telescope for high energy astrophysical neutrinos to be located in the Mediterranean Sea. At present this study concerns: i) Monte Carlo simulation study of the capabilities of various arrays of phototubes in order to determine the detector geometry that can optimize performance and cost; ii) design of low power consumption electronic cards for data acquisition and transmission to shore; iii) feasibility study of mechanics, deployment, connection and maintenance of such a detector in collaboration with petrol industries having experience of undersea operations; iv) oceanographic exploration of various sites in search for the optimal one. A brief report on the status of points i) and iv) is presented here.
Osterman, Michael; Claiborne, Tina; Liberi, Victor
2018-04-01
Sudden cardiac arrest is the leading cause of death among young athletes. According to the American Heart Association, an automated external defibrillator (AED) should be available within a 1- to 1.5-minute brisk walk from the patient for the highest chance of survival. Secondary school personnel have reported a lack of understanding about the proper number and placement of AEDs for optimal patient care. To determine whether fixed AEDs were located within a 1- to 1.5-minute timeframe from any location on secondary school property (i.e., radius of care). Cross-sectional study. Public and private secondary schools in northwest Ohio and southeast Michigan. Thirty schools (24 public, 6 private) volunteered. Global positioning system coordinates were used to survey the entire school properties and determine AED locations. From each AED location, the radius of care was calculated for 3 retrieval speeds: walking, jogging, and driving a utility vehicle. Data were analyzed to expose any property area that fell outside the radius of care. Public schools (37.1% ± 11.0%) possessed more property outside the radius of care than did private schools (23.8% ± 8.0%; F(1,28) = 8.35, P = .01). After accounting for retrieval speed, we still observed differences between school types when personnel would need to walk or jog to retrieve an AED (F(1.48, 41.35) = 4.99, P = .02). The percentages of school property outside the radius of care for public and private schools were 72.6% and 56.3%, respectively, when walking and 34.4% and 12.2%, respectively, when jogging. Only 4.2% of the public and none of the private schools had property outside the radius of care when driving a utility vehicle. Schools should strategically place AEDs to decrease the percentage of property area outside the radius of care. In some cases, placement in a centralized location that is publicly accessible may be more important than the overall number of AEDs on site.
The research of a solution on locating optimally a station for seismic disasters rescue in a city
NASA Astrophysics Data System (ADS)
Yao, Qing-Lin
1995-02-01
When stations for seismic disaster rescue (or similar facilities) are to be designed on a communication network, the general absolute center of a graph must be found in order to reduce the required number of stations and operating parameters and to establish an optimally located station with respect to the distribution of rescue arrival times. An existing solution to this problem was proposed by Edward (1978); however, it contains a serious deviation. In this article, the work of Edward (1978) is developed further in both formula and figure, and a more correct solution is proposed and proved. The result from the new solution is then contrasted with that from the older one in an example of optimally locating a station for seismic disaster rescue.
Overcoming Spatial and Temporal Barriers to Public Access Defibrillators Via Optimization.
Sun, Christopher L F; Demirtas, Derya; Brooks, Steven C; Morrison, Laurie J; Chan, Timothy C Y
2016-08-23
Immediate access to an automated external defibrillator (AED) increases the chance of survival for out-of-hospital cardiac arrest (OHCA). Current deployment usually considers spatial AED access, assuming AEDs are available 24 h a day. The goal of this study was to develop an optimization model for AED deployment, accounting for spatial and temporal accessibility, to evaluate if OHCA coverage would improve compared with deployment based on spatial accessibility alone. This study was a retrospective population-based cohort trial using data from the Toronto Regional RescuNET Epistry cardiac arrest database. We identified all nontraumatic public location OHCAs in Toronto, Ontario, Canada (January 2006 through August 2014) and obtained a list of registered AEDs (March 2015) from Toronto Paramedic Services. Coverage loss due to limited temporal access was quantified by comparing the number of OHCAs that occurred within 100 meters of a registered AED (assumed coverage 24 h per day, 7 days per week) with the number that occurred both within 100 meters of a registered AED and when the AED was available (actual coverage). A spatiotemporal optimization model was then developed that determined AED locations to maximize OHCA actual coverage and overcome the reported coverage loss. The coverage gain between the spatiotemporal model and a spatial-only model was computed by using 10-fold cross-validation. A total of 2,440 nontraumatic public OHCAs and 737 registered AED locations were identified. A total of 451 OHCAs were covered by registered AEDs under assumed coverage 24 h per day, 7 days per week, and 354 OHCAs under actual coverage, representing a coverage loss of 21.5% (p < 0.001). Using the spatiotemporal model to optimize AED deployment, a 25.3% relative increase in actual coverage was achieved compared with the spatial-only approach (p < 0.001). One in 5 OHCAs occurred near an inaccessible AED at the time of the OHCA. Potential AED use was significantly improved with a spatiotemporal optimization model guiding deployment. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
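Setting aside the exact formulation used by the authors, the spatiotemporal deployment problem can be approximated as a maximal coverage problem in which an AED covers a cardiac arrest only if it is both within range and open at that hour. The sketch below uses synthetic events, candidate sites, and availability windows, and selects sites greedily rather than solving the model exactly.

```python
import numpy as np

rng = np.random.default_rng(5)
n_events, n_sites, budget, radius = 400, 60, 10, 100.0    # radius in metres

events_xy = rng.uniform(0, 2000, size=(n_events, 2))
events_hr = rng.integers(0, 24, size=n_events)            # hour of each OHCA
sites_xy = rng.uniform(0, 2000, size=(n_sites, 2))
site_open = rng.integers(8, 18, size=(n_sites, 2))        # [open, close) hour
site_open.sort(axis=1)

# covers[j, i] = True if site j covers event i in BOTH space and time.
dist = np.linalg.norm(sites_xy[:, None, :] - events_xy[None, :, :], axis=2)
in_time = (events_hr[None, :] >= site_open[:, :1]) & (events_hr[None, :] < site_open[:, 1:])
covers = (dist <= radius) & in_time

# Greedy maximal coverage: repeatedly add the site covering the most uncovered events.
chosen, covered = [], np.zeros(n_events, dtype=bool)
for _ in range(budget):
    gains = (covers & ~covered).sum(axis=1)
    j = int(np.argmax(gains))
    if gains[j] == 0:
        break
    chosen.append(j)
    covered |= covers[j]

print("chosen AED sites:", chosen)
print("historical OHCAs covered:", int(covered.sum()), "of", n_events)
```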
Direct demodulation method for heavy atom position determination in protein crystallography
NASA Astrophysics Data System (ADS)
Zhou, Liang; Liu, Zhong-Chuan; Liu, Peng; Dong, Yu-Hui
2013-01-01
The first step of phasing in any de novo protein structure determination using isomorphous replacement (IR) or anomalous scattering (AD) experiments is to find heavy atom positions. Traditionally, heavy atom positions can be solved by inspecting the difference Patterson maps. Due to the weak signals in isomorphous or anomalous differences and the noisy background in the Patterson map, the search for heavy atoms may become difficult. Here, the direct demodulation (DD) method is applied to the difference Patterson maps to reduce the noisy backgrounds and sharpen the signal peaks. The real space Patterson search by using these optimized maps can locate the heavy atom positions more accurately. It is anticipated that the direct demodulation method can assist in heavy atom position determination and facilitate the de novo structure determination of proteins.
Tong, Yubing; Udupa, Jayaram K.; Torigian, Drew A.
2014-01-01
Purpose: The quantification of body fat plays an important role in the study of numerous diseases. It is common current practice to use the fat area at a single abdominal computed tomography (CT) slice as a marker of the body fat content in studying various disease processes. This paper sets out to answer three questions related to this issue which have not been addressed in the literature. At what single anatomic slice location do the areas of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) estimated from the slice correlate maximally with the corresponding fat volume measures? How does one ensure that the slices used for correlation calculation from different subjects are at the same anatomic location? Are there combinations of multiple slices (not necessarily contiguous) whose area sum correlates better with volume than does single slice area with volume? Methods: The authors propose a novel strategy for mapping slice locations to a standardized anatomic space so that same anatomic slice locations are identified in different subjects. The authors then study the volume-to-area correlations and determine where they become maximal. To address the third issue, the authors carry out similar correlation studies by utilizing two and three slices for calculating area sum. Results: Based on 50 abdominal CT data sets, the proposed mapping achieves significantly improved consistency of anatomic localization compared to current practice. Maximum correlations are achieved at different anatomic locations for SAT and VAT which are both different from the L4-L5 junction commonly utilized currently for single slice area estimation as a marker. Conclusions: The maximum area-to-volume correlation achieved is quite high, suggesting that it may be reasonable to estimate body fat by measuring the area of fat from a single anatomic slice at the site of maximum correlation and use this as a marker. The site of maximum correlation is not at L4-L5 as commonly assumed, but is more superiorly located at T12-L1 for SAT and at L3-L4 for VAT. Furthermore, the optimal anatomic locations for SAT and VAT estimation are not the same, contrary to common assumption. The proposed standardized space mapping achieves high consistency of anatomic localization by accurately managing nonlinearities in the relationships among landmarks. Multiple slices achieve greater improvement in correlation for VAT than for SAT. The optimal locations in the case of multiple slices are not contiguous. PMID:24877839
Propagation distance-resolved characteristics of filament-induced copper plasma
Ghebregziabher, Isaac; Hartig, Kyle C.; Jovanovic, Igor
2016-03-02
Copper plasma generated at different filament-copper interaction points was characterized by spectroscopic, acoustic, and imaging measurements. The longitudinal variation of the filament intensity was qualitatively determined by acoustic measurements in air. The maximum plasma temperature was measured at the location of peak filament intensity, corresponding to the maximum mean electron energy during plasma formation. The highest copper plasma density was measured past the location of the maximum electron density in the filament, where spectral broadening of the filament leads to enhanced ionization. Acoustic measurements in air and on solid target were correlated to reconstructed plasma properties. Lastly, optimal line emission is measured near the geometric focus of the lens used to produce the filament.
NASA Astrophysics Data System (ADS)
Glazkov, S. A.; Gorbushin, A. R.; Osipova, S. L.; Semenov, A. V.
2016-10-01
The report describes the results of flow field experimental research in the TsAGI T-128 transonic wind tunnel. During the tests, the Mach number, stagnation pressure, test section wall perforation ratio, and angles between the test section panels and mixing chamber flaps were varied. Based on the test results, corrections to the free-stream Mach number were determined, accounting for the difference in flow speed between the model location and the zone of static pressure measurement on the test section walls and for the nonuniformity of the longitudinal velocity component at the model location; in addition, the optimal position of the movable test section elements was determined to provide flow field uniformity in the test section and minimize the test leg drag.
Gutwald, Ralf; Jaeger, Raimund; Lambers, Floor M.
2017-01-01
Abstract The purpose of this paper was to analyze the biomechanical performance of customized mandibular reconstruction plates with optimized strength. The best locations for increasing bar widths were determined with a sensitivity analysis. Standard and customized plates were mounted on mandible models and mechanically tested. Maximum stress in the plate could be reduced from 573 to 393 MPa (−31%) by increasing bar widths. The median fatigue limit was significantly greater (p < 0.001) for customized plates (650 ± 27 N) than for standard plates (475 ± 27 N). Increasing bar widths at case-specific locations was an effective strategy for increasing plate fatigue performance. PMID:27887036
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yongxi
We propose an integrated modeling framework to optimally locate wireless charging facilities along a highway corridor to provide sufficient in-motion charging. The integrated model consists of a master Infrastructure Planning Model that determines the best locations, coupled with two sub-models that explicitly capture energy consumption and charging and the interactions among electric vehicle and wireless charging technologies, the geometrics of highway corridors, speed, and the auxiliary system. The model is implemented in an illustrative case study of a highway corridor of Interstate 5 in Oregon. We found that the cost of establishing the charging lane is sensitive to, and increases with, the speed to be achieved. Through sensitivity analyses, we gain a better understanding of the extent of the impacts of the geometric characteristics of highways and of battery capacity on the charging lane design.
Niki, Yuichiro; Ogawa, Mikako; Makiura, Rie; Magata, Yasuhiro; Kojima, Chie
2015-11-01
The detection of the sentinel lymph node (SLN), the first lymph node draining tumor cells, is important in cancer diagnosis and therapy. Dendrimers are synthetic macromolecules with highly controllable structures, and are potent multifunctional imaging agents. In this study, 12 types of dendrimer of different generations (G2, G4, G6, and G8) and different terminal groups (amino, carboxyl, and acetyl) were prepared to determine the optimal dendrimer structure for SLN imaging. Radiolabeled dendrimers were intradermally administrated to the right footpads of rats. All G2 dendrimers were predominantly accumulated in the kidney. Amino-terminal, acetyl-terminal, and carboxyl-terminal dendrimers of greater than G4 were mostly located at the injection site, in the blood, and in the SLN, respectively. The carboxyl-terminal dendrimers were largely unrecognized by macrophages and T-cells in the SLN. Finally, SLN detection was successfully performed by single photon emission computed tomography imaging using carboxyl-terminal dendrimers of greater than G4. The early detection of tumor cells in the sentinel draining lymph nodes (SLN) is of utmost importance in terms of determining cancer prognosis and devising treatment. In this article, the authors investigated various formulations of dendrimers to determine the optimal one for tumor detection. The data generated from this study would help clinicians to fight the cancer battle in the near future. Copyright © 2015 Elsevier Inc. All rights reserved.
Strain-Based Damage Determination Using Finite Element Analysis for Structural Health Management
NASA Technical Reports Server (NTRS)
Hochhalter, Jacob D.; Krishnamurthy, Thiagaraja; Aguilo, Miguel A.
2016-01-01
A damage determination method is presented that relies on in-service strain sensor measurements. The method employs a gradient-based optimization procedure combined with the finite element method for solution to the forward problem. It is demonstrated that strains, measured at a limited number of sensors, can be used to accurately determine the location, size, and orientation of damage. Numerical examples are presented to demonstrate the general procedure. This work is motivated by the need to provide structural health management systems with a real-time damage characterization. The damage cases investigated herein are characteristic of point-source damage, which can attain critical size during flight. The procedure described can be used to provide prognosis tools with the current damage configuration.
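The inverse procedure can be sketched by replacing the finite element forward problem with a toy analytic model that maps damage parameters to strains at a few sensors, and letting a gradient-based optimizer recover those parameters from "measured" strains. The forward model and sensor layout below are invented for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

sensors = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9], [0.1, 0.7]])

def forward_strain(params):
    """Toy forward model standing in for the finite element solution: damage at
    (x, y) with size s perturbs the strain at each sensor with a decaying kernel."""
    x, y, s = params
    r2 = np.sum((sensors - [x, y])**2, axis=1)
    return 1.0 + s / (r2 + 0.05)

true_params = np.array([0.6, 0.55, 0.08])
measured = forward_strain(true_params)          # stand-in for sensor measurements

def misfit(params):
    return np.sum((forward_strain(params) - measured)**2)

# Gradient-based search for the damage location and size that reproduce the strains.
res = minimize(misfit, x0=[0.4, 0.4, 0.02], method="L-BFGS-B",
               bounds=[(0, 1), (0, 1), (0.0, 0.5)])
print("recovered damage (x, y, size):", np.round(res.x, 3))
```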
NASA Technical Reports Server (NTRS)
Stoll, John C.
1995-01-01
The performance of an unaided attitude determination system based on GPS interferometry is examined using linear covariance analysis. The modelled system includes four GPS antennae onboard a gravity gradient stabilized spacecraft, specifically the Air Force's RADCAL satellite. The principal error sources are identified and modelled. The optimal system's sensitivities to these error sources are examined through an error budget and by varying system parameters. The effects of two satellite selection algorithms, Geometric and Attitude Dilution of Precision (GDOP and ADOP, respectively) are examined. The attitude performance of two optimal-suboptimal filters is also presented. Based on this analysis, the limiting factors in attitude accuracy are the knowledge of the relative antenna locations, the electrical path lengths from the antennae to the receiver, and the multipath environment. The performance of the system is found to be fairly insensitive to torque errors, orbital inclination, and the two satellite geometry figures-of-merit tested.
An Optimal Algorithm towards Successive Location Privacy in Sensor Networks with Dynamic Programming
NASA Astrophysics Data System (ADS)
Zhao, Baokang; Wang, Dan; Shao, Zili; Cao, Jiannong; Chan, Keith C. C.; Su, Jinshu
In wireless sensor networks, preserving location privacy under successive inference attacks is extremely critical. Although this problem is NP-complete in general cases, we propose a dynamic programming based algorithm and prove that it is optimal in special cases where correlation exists only between p immediately adjacent observations.
NASA Astrophysics Data System (ADS)
Lagos, Soledad R.; Velis, Danilo R.
2018-02-01
We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead to the microseismic events location from raw 3C data. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for 2D and 3D usual scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
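A minimal particle swarm locating a single event by minimizing P-wave traveltime residuals in a homogeneous velocity model is sketched below. The receiver geometry, velocity, noise level, and swarm settings are assumptions of the example; the authors' workflow additionally handles detection, denoising, phase identification, and backazimuth-based restriction of the search space.

```python
import numpy as np

rng = np.random.default_rng(6)
receivers = rng.uniform([-500, -500, 0], [500, 500, 50], size=(12, 3))  # metres
vp = 3500.0                                                             # m/s
true_src = np.array([120.0, -80.0, 1500.0])

def traveltimes(src):
    return np.linalg.norm(receivers - src, axis=1) / vp

observed = traveltimes(true_src) + rng.normal(0, 1e-3, size=len(receivers))

def misfit(src):
    return np.sum((traveltimes(src) - observed)**2)

# Minimal particle swarm optimization over the (restricted) search volume.
lo, hi = np.array([-1000, -1000, 500]), np.array([1000, 1000, 2500])
n_particles, iters, w, c1, c2 = 40, 200, 0.7, 1.5, 1.5
x = rng.uniform(lo, hi, size=(n_particles, 3))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([misfit(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)     # velocity update
    x = np.clip(x + v, lo, hi)                          # keep particles in bounds
    f = np.array([misfit(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("located event (m):", np.round(gbest, 1), " true:", true_src)
```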
Estimating the theoretical semivariogram from finite numbers of measurements
Zheng, Li; Silliman, Stephen E.
2000-01-01
We investigate from a theoretical basis the impacts of the number, location, and correlation among measurement points on the quality of an estimate of the semivariogram. The unbiased nature of the semivariogram estimator γ̂(r) is first established for a general random process Z(x). The variance of γ̂Z(r) is then derived as a function of the sampling parameters (the number of measurements and their locations). In applying this function to the case of estimating the semivariograms of the transmissivity and the hydraulic head field, it is shown that the estimation error depends on the number of data pairs, the correlation among the data pairs (which, in turn, is determined by the form of the underlying semivariogram γ(r)), the relative locations of the data pairs, and the separation distance at which the semivariogram is to be estimated. Thus, the design of an optimal sampling program for semivariogram estimation should include consideration of each of these factors. Further, the function derived for the variance of γ̂Z(r) is useful in determining the reliability of a semivariogram developed from a previously established sampling design.
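A minimal Matheron-type empirical semivariogram estimator, shown below, illustrates how the number of data pairs N(h) available at each lag enters the estimate; the sampling design and values are synthetic assumptions, and the variance expressions derived in the paper are not reproduced.

```python
# Minimal empirical (Matheron) semivariogram estimator: fewer pairs in a lag bin
# means a noisier estimate of gamma(h). Sketch only, with synthetic data.
import numpy as np

def empirical_semivariogram(x, z, lag_edges):
    """x: (n, d) sample locations, z: (n,) values, lag_edges: bin edges for |h|."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)            # count each pair once
    gamma, npairs = [], []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        m = (d[iu] >= lo) & (d[iu] < hi)
        npairs.append(int(m.sum()))
        gamma.append(sq[iu][m].mean() if m.any() else np.nan)
    return np.array(gamma), np.array(npairs)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(60, 2))          # hypothetical sampling design
vals = np.sin(pts[:, 0] / 15.0) + 0.1 * rng.standard_normal(60)
g, n = empirical_semivariogram(pts, vals, np.arange(0, 60, 10.0))
print(np.round(g, 3), n)                         # fewer pairs -> noisier estimate
```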
Bazuin, Doug; Martinez, Jessica; Harper, Kathy; Okland, Kathy; Bergquist, Patricia; Kumar, Shilpi
2015-01-01
The purpose of this study was to gain insight into the use and storage of supplies in the neonatal intensive care and women's health units of Parkland Hospital in Dallas, Texas. Construction of a new Parkland Hospital is underway, with completion of the 862-bed, 2.5-million-square-foot hospital in 2014. Leaders from the hospital and representatives from one of its major vendors collaborated on a research study to evaluate the hospital's current supply management system and develop criteria to create an improved system to be implemented at the new hospital. The approach included qualitative and quantitative methods: a written survey, researcher observations, focus groups, and an evaluation of hospital supply reports. The ideal location of supplies can best be determined by defining a nurse's activity at the point of care. Determining an optimal supply management system requires understanding the "what" of caregivers' activities and then determining the "where" of the supplies that support those activities. An ideal supply management system locates supplies as close as possible to the point of use, is organized by activity, and is standardized within and across units. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Klein, E.; Masson, F.; Duputel, Z.; Yavasoglu, H.; Agram, P. S.
2016-12-01
Over the last two decades, the densification of GPS networks and the development of new radar satellites have offered an unprecedented opportunity to study crustal deformation due to faulting. Yet submarine strike-slip fault segments remain a major issue, especially when the landscape is unfavorable to the use of SAR measurements. This is the case for the North Anatolian fault segments located in the Main Marmara Sea, which have remained unbroken since the Mw 7.4 Izmit earthquake of 1999, which ended an eastward-migrating seismic sequence of Mw > 7 earthquakes. Because these segments lie directly offshore Istanbul, evaluation of their seismic hazard is critical. However, a strong controversy remains over whether these segments are accumulating strain and are likely to experience a major earthquake, or are creeping; this controversy results both from the simplicity of current geodetic models and from the scarcity of geodetic data. We indeed show that 2D infinite fault models cannot account for the complexity of the Marmara fault segments. However, current geodetic data in the western region of Istanbul are also insufficient to invert for the coupling using a 3D geometry of the fault. Therefore, we implement a global optimization procedure aiming at identifying the most favorable distribution of GPS stations to explore the strain accumulation. We present here the results of this procedure, which determines both the optimal number and the optimal locations of the new stations. We show that a denser terrestrial survey network can indeed locally improve the resolution on the shallower part of the fault, even more efficiently with permanent stations. However, data closer to the fault, obtainable only by submarine measurements, remain necessary to properly constrain the fault behavior and its potential along-strike coupling variations.
Drought and Heat Wave Impacts on Electricity Grid Reliability in Illinois
NASA Astrophysics Data System (ADS)
Stillwell, A. S.; Lubega, W. N.
2016-12-01
A large proportion of thermal power plants in the United States use cooling systems that discharge large volumes of heated water into rivers and cooling ponds. To minimize thermal pollution from these discharges, restrictions are placed on temperatures at the edge of defined mixing zones in the receiving waters. However, during extended hydrological droughts and heat waves, power plants are often granted thermal variances permitting them to exceed these temperature restrictions. These thermal variances are often deemed necessary for maintaining electricity reliability, particularly as heat waves cause increased electricity demand. Current practice, however, lacks tools for the development of grid-scale operational policies specifying generator output levels that ensure reliable electricity supply while minimizing thermal variances. Such policies must take into consideration characteristics of individual power plants, topology and characteristics of the electricity grid, and locations of power plants within the river basin. In this work, we develop a methodology for constructing these operational policies that captures the necessary factors. We develop optimal rules for different hydrological and meteorological conditions, serving as rule curves for thermal power plants. The rules are conditioned on leading modes of the ambient hydrological and meteorological conditions at the different power plant locations, as the locations are geographically close and hydrologically connected. Heat dissipation in the rivers and cooling ponds is modeled using the equilibrium temperature concept. Optimal rules are determined through a Monte Carlo sampling optimization framework. The methodology is applied to a case study of eight power plants in Illinois that were granted thermal variances in the summer of 2012, with a representative electricity grid model used in place of the actual electricity grid.
NASA Astrophysics Data System (ADS)
Chen, Wei-Guo; Wan, Xia; Wang, You-Kai
2018-05-01
A top quark mass measurement scheme near the tt̄ production threshold in future e+e− colliders, e.g. the Circular Electron Positron Collider (CEPC), is simulated. A χ² fitting method is adopted to determine the number of energy points to be taken and their locations. Our results show that the optimal energy point is located near the largest slope of the cross section vs. beam energy plot, and the most efficient scheme is to concentrate all luminosity on this single energy point in the case of one-parameter top mass fitting. This suggests that the so-called data-driven method could be the best choice for future real experimental measurements. Conveniently, the top mass statistical uncertainty can also be calculated directly by the error matrix even without any sampling and fitting. The agreement of the above two optimization methods has been checked. Our conclusion is that by taking 50 fb⁻¹ total effective integrated luminosity data, the statistical uncertainty of the top potential-subtracted mass can be suppressed to about 7 MeV and the total uncertainty is about 30 MeV. This precision will help to identify the stability of the electroweak vacuum at the Planck scale. Supported by National Science Foundation of China (11405102) and the Fundamental Research Funds for the Central Universities of China (GK201603027, GK201803019)
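The scan-point argument can be illustrated with a schematic one-parameter χ² exercise: with a hypothetical sigmoid stand-in for the threshold cross section, the statistical error obtained from the Δχ² = 1 interval (or, equivalently, from the curvature of χ², i.e. the 1×1 error matrix) is smallest when the energy point sits near the largest slope of the cross-section curve. All numbers below are illustrative assumptions, not the CEPC simulation.

```python
# Schematic one-parameter chi^2 scan near a production threshold.
# The sigmoid "cross section" and all numbers are hypothetical stand-ins, used only
# to show how the statistical error follows from delta-chi^2 = 1 or from curvature.
import numpy as np

def xsec(E, m):                       # hypothetical threshold shape, pb
    return 0.5 / (1.0 + np.exp(-(E - 2 * m) / 0.4))

m_true, lum = 172.5, 50e3             # GeV, pb^-1 (all luminosity at one point)
for E0 in [344.0, 344.5, 345.0, 345.5]:     # candidate energy points
    n_exp = lum * xsec(E0, m_true)          # expected event count at this point
    m_grid = np.linspace(m_true - 0.2, m_true + 0.2, 4001)
    chi2 = (n_exp - lum * xsec(E0, m_grid)) ** 2 / n_exp   # Asimov-like chi^2(m)
    inside = m_grid[chi2 <= 1.0]            # delta-chi^2 = 1 interval
    err_scan = 0.5 * (inside[-1] - inside[0])
    # curvature (error-matrix) estimate: sigma_m = sqrt(N) / (L * |d sigma / dm|)
    dsdm = np.gradient(xsec(E0, m_grid), m_grid)[len(m_grid) // 2]
    err_curv = np.sqrt(n_exp) / (lum * abs(dsdm))
    print(f"E = {E0:6.1f} GeV  sigma_m(scan) = {err_scan*1e3:5.1f} MeV"
          f"  sigma_m(curvature) = {err_curv*1e3:5.1f} MeV")
```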
Optimization of an acoustic telemetry array for detecting transmitter-implanted fish
Clements, S.; Jepsen, D.; Karnowski, M.; Schreck, C.B.
2005-01-01
The development of miniature acoustic transmitters and economical, robust automated receivers has enabled researchers to study the movement patterns and survival of teleosts in estuarine and ocean environments, including many species and age-classes that were previously considered too small for implantation. During 2001-2003, we optimized a receiver mooring system to minimize gear and data loss in areas where currents, wave action, and acoustic noise are high. In addition, we conducted extensive tests to determine (1) the performance of a transmitter and receiver (Vemco, Ltd.) that are widely used, particularly in North America and Europe and (2) the optimal placement of receivers for recording the passage of fish past a point in a linear-flow environment. Our results suggest that in most locations the mooring system performs well with little loss of data; however, boat traffic remains a concern due to entanglement with the mooring system. We also found that the reception efficiency of the receivers depends largely on the method and location of deployment. In many cases, we observed a range of 0-100% reception efficiency (the percentage of known transmissions that are detected while the receiver is within range of the transmitter) when using a conventional method of mooring. The efficiency was improved by removal of the mounting bar and obstructions from the mooring line. © Copyright by the American Fisheries Society 2005.
Experimental Evaluation of UWB Indoor Positioning for Sport Postures
Defraye, Jense; Steendam, Heidi; Gerlo, Joeri; De Clercq, Dirk; De Poorter, Eli
2018-01-01
Radio frequency (RF)-based indoor positioning systems (IPSs) use wireless technologies (including Wi-Fi, Zigbee, Bluetooth, and ultra-wide band (UWB)) to estimate the location of persons in areas where no Global Positioning System (GPS) reception is available, for example in indoor stadiums or sports halls. Of the above-mentioned forms of radio frequency (RF) technology, UWB is considered one of the most accurate approaches because it can provide positioning estimates with centimeter-level accuracy. However, it is not yet known whether UWB can also offer such accurate position estimates during strenuous dynamic activities in which moves are characterized by fast changes in direction and velocity. To answer this question, this paper investigates the capabilities of UWB indoor localization systems for tracking athletes during their complex (and most of the time unpredictable) movements. To this end, we analyze the impact of on-body tag placement locations and human movement patterns on localization accuracy and communication reliability. Moreover, two localization algorithms (particle filter and Kalman filter) with different optimizations (bias removal, non-line-of-sight (NLoS) detection, and path determination) are implemented. It is shown that although the optimal choice of optimization depends on the type of movement patterns, some of the improvements can reduce the localization error by up to 31%. Overall, depending on the selected optimization and on-body tag placement, our algorithms show good results in terms of positioning accuracy, with average errors in position estimates of 20 cm. This makes UWB a suitable approach for tracking dynamic athletic activities. PMID:29315267
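A minimal two-dimensional constant-velocity Kalman filter over noisy position fixes is sketched below. It is a generic textbook filter, not the paper's implementation (which adds bias removal, NLoS detection, and path determination), and all noise parameters are assumed.

```python
# Minimal 2D constant-velocity Kalman filter over noisy UWB-like position fixes.
# Generic sketch with assumed noise parameters; not the paper's optimized filter.
import numpy as np

dt, q, r = 0.1, 0.5, 0.2                        # time step, process / measurement noise
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = q * np.eye(4)
R = r ** 2 * np.eye(2)

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with position measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(3)
truth = np.c_[np.linspace(0, 5, 50), np.sin(np.linspace(0, 5, 50))]  # curved trajectory
x, P = np.zeros(4), np.eye(4)                    # state: [px, py, vx, vy]
for z in truth + rng.normal(0, r, truth.shape):
    x, P = kf_step(x, P, z)
print("final estimate:", np.round(x[:2], 2), "truth:", np.round(truth[-1], 2))
```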
Designing optimal greenhouse gas observing networks that consider performance and cost
Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.; ...
2015-06-16
Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH₂FCF₃, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.
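The Pareto-frontier idea can be illustrated with a simple non-dominated filter over (retrieval error, cost) pairs; the candidate networks below are random stand-ins rather than real station configurations, and the genetic-algorithm machinery itself is not reproduced.

```python
# Minimal Pareto-front extraction over (retrieval error, network cost) pairs.
# The candidate points are random stand-ins; a real design would come from the
# multiobjective genetic algorithm described above.
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated rows (both objectives minimized)."""
    idx = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            idx.append(i)
    return np.array(idx)

rng = np.random.default_rng(7)
candidates = rng.uniform(0, 1, size=(200, 2))    # columns: [error, cost]
front = pareto_front(candidates)
print(candidates[front][np.argsort(candidates[front][:, 0])])
```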
Optimal Observations for Variational Data Assimilation
NASA Technical Reports Server (NTRS)
Koehl, Armin; Stammer, Detlef
2003-01-01
An important aspect of ocean state estimation is the design of an observing system that allows the efficient study of climate aspects in the ocean. A solution of the design problem is presented here in terms of optimal observations that emerge as nondimensionalized singular vectors of the modified data resolution matrix. The actual computation is feasible only for scalar quantities in the limit of large observational errors. In the framework of a low-resolution North Atlantic primitive equation model it is demonstrated that such optimal observations, when applied to determining the strength of the volume and heat transport across the Greenland-Scotland ridge, perform significantly better than traditional section data. On seasonal to inter-annual time-scales, optimal observations are located primarily along the continental shelf, and information about heat transport, wind stress and stratification is communicated via boundary waves and advective processes. On time-scales of about a month, sea surface height observations appear to be more efficient in reconstructing the cross-ridge heat transport than hydrographic observations. Optimal observations also provide a tool for understanding how the ocean state is affected by anomalies of integral quantities such as meridional heat transport.
Evaluating the effects of real power losses in optimal power flow based storage integration
Castillo, Anya; Gayme, Dennice
2017-03-27
This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.
Dipole and quadrupole synthesis of electric potential fields. M.S. Thesis
NASA Technical Reports Server (NTRS)
Tilley, D. G.
1979-01-01
A general technique for expanding an unknown potential field in terms of a linear summation of weighted dipole or quadrupole fields is described. Computational methods were developed for the iterative addition of dipole fields. Various solution potentials were compared inside the boundary with a more precise calculation of the potential to derive optimal schemes for locating the singularities of the dipole fields. Then, the problem of determining solutions to Laplace's equation on an unbounded domain as constrained by pertinent electron trajectory data was considered.
Optimizing the Domestic Chemical, Biological, Radiological, and Nuclear Response Enterprise
2015-03-01
scope of this study was an issue with recall times, the time it takes to assemble the unit at its home location. A total of 13 of the 17 CERFPs surveyed...in the study were not conducting exercises to determine how long a no-notice recall of their forces would actually take, mainly because they...felt such experiences would create tensions between employers and NG members and would adversely affect the unit. Without rehearsing this key component
1984-05-01
exceed one man-year. 5. The new scheduling system will be more responsive to the dynamic forces that affect the use of surgical resources. a. Elective...will be removed when the OR is relocated to the new addition (see Figure 3 for floor design of future OR location). The OR Scheduling System The days of...obtaining new appointment openings. This would ensure that the names on the waiting list are rotating regularly. Identified Problems With The Current
Epileptogenicity and pathology - Under consideration of ablative approaches.
Stefan, H; Schmitt, F C
2018-05-01
Besides resective epilepsy surgery, minimally invasive ablation using new diagnostic and therapeutic techniques has recently become available. Optimal diagnostic approaches for these treatment options are discussed. The pathophysiology of epileptogenic networks differs depending on the lesion type and location, requiring a differential use of non-invasive or invasive functional studies. In addition to the definition of epileptogenic zones, a challenge for pre-surgical investigation is the determination of the three-dimensional epileptic networks to be removed. Copyright © 2018. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi
2016-10-01
One of the most important stages in complementary exploration is the optimal design of the additional drilling pattern, i.e. defining the optimum number and locations of additional boreholes. Considerable research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization, as a criterion for uncertainty assessment, is defined as the objective function and the problem is solved through optimization methods. Although kriging variance implementation is known to have many advantages in objective function definition, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering the local variability in boundary uncertainty assessment, the application of combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the changes imposed on the objective function have made the algorithm output sensitive to variations in grade, domain boundaries and the thickness of the mineralization domain. The comparison between the results of different optimization algorithms shows that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.
Risk assessment in man and mouse.
Balci, Fuat; Freestone, David; Gallistel, Charles R
2009-02-17
Human and mouse subjects tried to anticipate at which of 2 locations a reward would appear. On a randomly scheduled fraction of the trials, it appeared with a short latency at one location; on the complementary fraction, it appeared after a longer latency at the other location. Subjects of both species accurately assessed the exogenous uncertainty (the probability of a short versus a long trial) and the endogenous uncertainty (from the scalar variability in their estimates of an elapsed duration) to compute the optimal target latency for a switch from the short- to the long-latency location. The optimal latency was arrived at so rapidly that there was no reliably discernible improvement over trials. Under these nonverbal conditions, humans and mice accurately assess risks and behave nearly optimally. That this capacity is well-developed in the mouse opens up the possibility of a genetic approach to the neurobiological mechanisms underlying risk assessment.
Optimization and resilience of complex supply-demand networks
NASA Astrophysics Data System (ADS)
Zhang, Si-Ping; Huang, Zi-Gang; Dong, Jia-Qi; Eisenberg, Daniel; Seager, Thomas P.; Lai, Ying-Cheng
2015-06-01
Supply-demand processes take place on a large variety of real-world networked systems ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to a constant increase in the load requirement for resources and, consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions, a global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, where resources are transported from the supplier sites to users through various links. Here by optimization we mean minimization of the maximum load on links, and system resilience can be characterized using the cascading failure size of users who fail to connect with suppliers. We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed to transport through the shortest paths; (3) redundant links can help to reroute traffic but may undesirably transmit failures and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of cascading failure, but has little effect on the final cascading size; (6) system expansion typically reduces the efficiency; and (7) when the locations of the suppliers are optimized over a long expanding period, fewer suppliers are required. These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
Optimal distribution of borehole geophones for monitoring CO2-injection-induced seismicity
NASA Astrophysics Data System (ADS)
Huang, L.; Chen, T.; Foxall, W.; Wagoner, J. L.
2016-12-01
The U.S. DOE initiative, National Risk Assessment Partnership (NRAP), aims to develop quantitative risk assessment methodologies for carbon capture, utilization and storage (CCUS). As part of the tasks of the Strategic Monitoring Group of NRAP, we develop a tool for the optimal design of a borehole geophone distribution for monitoring CO2-injection-induced seismicity. The tool consists of a number of steps, including building a geophysical model for a given CO2 injection site, defining target monitoring regions within CO2-injection/migration zones, generating synthetic seismic data, specifying acceptable uncertainties in input data, and determining the optimal distribution of borehole geophones. We use a synthetic geophysical model as an example to demonstrate the capability of our new tool to design an optimal/cost-effective passive seismic monitoring network using borehole geophones. The model is built based on the geologic features found at the Kimberlina CCUS pilot site located in the southern San Joaquin Valley, California. This tool can provide CCUS operators with a guideline for cost-effective microseismic monitoring of geologic carbon storage and utilization.
Optimal Placement of Dynamic Var Sources by Using Empirical Controllability Covariance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Huang, Weihong; Sun, Kai
In this paper, the empirical controllability covariance (ECC), which is calculated around the considered operating condition of a power system, is applied to quantify the degree of controllability of system voltages under specific dynamic var source locations. An optimal dynamic var source placement method addressing fault-induced delayed voltage recovery (FIDVR) issues is further formulated as an optimization problem that maximizes the determinant of ECC. The optimization problem is effectively solved by the NOMAD solver, which implements the mesh adaptive direct search algorithm. The proposed method is tested on an NPCC 140-bus system and the results show that the proposed method with fault-specified ECC can solve the FIDVR issue caused by the most severe contingency with fewer dynamic var sources than the voltage sensitivity index (VSI)-based method. The proposed method with fault-unspecified ECC does not depend on the settings of the contingency and can address more FIDVR issues than the VSI method when placing the same number of SVCs under different fault durations. It is also shown that the proposed method can help mitigate voltage collapse.
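As a conceptual sketch of determinant-maximizing placement, the snippet below greedily selects candidate locations to maximize the log-determinant of a Gram matrix standing in for the ECC. The sensitivity matrix is randomly generated and the greedy heuristic replaces the NOMAD mesh-adaptive direct search used in the paper.

```python
# Greedy determinant-maximizing (D-optimal) placement over candidate var-source
# locations. The random Gram matrix stands in for the ECC, and the greedy loop
# stands in for the NOMAD mesh-adaptive direct search; treat as a sketch only.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_modes, budget = 20, 8, 4
G = rng.standard_normal((n_candidates, n_modes))   # hypothetical voltage sensitivities

def logdet_of(selection):
    M = G[selection].T @ G[selection] + 1e-9 * np.eye(n_modes)  # small regularization
    return np.linalg.slogdet(M)[1]

chosen = []
for _ in range(budget):
    best = max((c for c in range(n_candidates) if c not in chosen),
               key=lambda c: logdet_of(chosen + [c]))
    chosen.append(best)
print("greedy placement:", sorted(chosen), " log-det:", round(logdet_of(chosen), 3))
```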
Multi-Objective Design Of Optimal Greenhouse Gas Observation Networks
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Bergmann, D. J.; Cameron-Smith, P. J.; Gard, E.; Guilderson, T. P.; Rotman, D.; Stolaroff, J. K.
2010-12-01
One of the primary scientific functions of a Greenhouse Gas Information System (GHGIS) is to infer GHG source emission rates and their uncertainties by combining measurements from an observational network with atmospheric transport modeling. Certain features of the observational networks that serve as inputs to a GHGIS (for example, sampling location and frequency) can greatly impact the accuracy of the retrieved GHG emissions. Observation System Simulation Experiments (OSSEs) provide a framework to characterize emission uncertainties associated with a given network configuration. By minimizing these uncertainties, OSSEs can be used to determine optimal sampling strategies. Designing a real-world GHGIS observing network, however, will involve multiple, conflicting objectives; there will be trade-offs between sampling density, coverage and measurement costs. To address these issues, we have added multi-objective optimization capabilities to OSSEs. We demonstrate these capabilities by quantifying the trade-offs between retrieval error and measurement costs for a prototype GHGIS, and deriving GHG observing networks that are Pareto optimal. [LLNL-ABS-452333: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.]
Lagrangian Approach to Jet Mixing and Optimization of the Reactor for Production of Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Povitsky, Alex; Salas, Manuel D.
2001-01-01
This study was motivated by an attempt to optimize the high-pressure carbon monoxide (HiPco) process for the production of carbon nanotubes from gaseous carbon monoxide. The goal is to achieve rapid and uniform heating of catalyst particles by an optimal arrangement of jets. A mixed Eulerian and Lagrangian approach is implemented to track the temperature of catalyst particles along their trajectories as a function of time. The FLUENT CFD software with second-order upwind approximation of convective terms and an algebraic multigrid-based solver is used. The poor performance of the original reactor configuration is explained in terms of features of particle trajectories. The trajectories most exposed to the hot jets appear to be the most problematic for heating because they either bend towards the cold jet interior or rotate upwind of the mixing zone. To reduce undesirable slow and/or oscillatory heating of catalyst particles, a reactor configuration with three central jets is proposed and the optimal location of the central and peripheral nozzles is determined.
Bot, Maarten; Schuurman, P Richard; Odekerken, Vincent J J; Verhagen, Rens; Contarino, Fiorella Maria; De Bie, Rob M A; van den Munckhof, Pepijn
2018-05-01
Individual motor improvement after deep brain stimulation (DBS) of the subthalamic nucleus (STN) for Parkinson's disease (PD) varies considerably. Stereotactic targeting of the dorsolateral sensorimotor part of the STN is considered paramount for maximising effectiveness, but studies employing the midcommissural point (MCP) as anatomical reference failed to show correlation between DBS location and motor improvement. The medial border of the STN as reference may provide better insight in the relationship between DBS location and clinical outcome. Motor improvement after 12 months of 65 STN DBS electrodes was categorised into non-responding, responding and optimally responding body-sides. Stereotactic coordinates of optimal electrode contacts relative to both medial STN border and MCP served to define theoretic DBS 'hotspots'. Using the medial STN border as reference, significant negative correlation (Pearson's correlation -0.52, P<0.01) was found between the Euclidean distance from the centre of stimulation to this DBS hotspot and motor improvement. This hotspot was located at 2.8 mm lateral, 1.7 mm anterior and 2.5 mm superior relative to the medial STN border. Using MCP as reference, no correlation was found. The medial STN border proved superior compared with MCP as anatomical reference for correlation of DBS location and motor improvement, and enabled defining an optimal DBS location within the nucleus. We therefore propose the medial STN border as a better individual reference point than the currently used MCP on preoperative stereotactic imaging, in order to obtain optimal and thus less variable motor improvement for individual patients with PD following STN DBS. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Disease prevention versus data privacy: using landcover maps to inform spatial epidemic models.
Tildesley, Michael J; Ryan, Sadie J
2012-01-01
The availability of epidemiological data in the early stages of an outbreak of an infectious disease is vital for modelers to make accurate predictions regarding the likely spread of disease and preferred intervention strategies. However, in some countries, the necessary demographic data are only available at an aggregate scale. We investigated the ability of models of livestock infectious diseases to predict epidemic spread and obtain optimal control policies in the event of imperfect, aggregated data. Taking a geographic information approach, we used land cover data to predict UK farm locations and investigated the influence of using these synthetic location data sets upon epidemiological predictions in the event of an outbreak of foot-and-mouth disease. When broadly classified land cover data were used to create synthetic farm locations, model predictions deviated significantly from those simulated on true data. However, when more resolved subclass land use data were used, moderate to highly accurate predictions of epidemic size, duration and optimal vaccination and ring culling strategies were obtained. This suggests that a geographic information approach may be useful where individual farm-level data are not available, to allow predictive analyses to be carried out regarding the likely spread of disease. This method can also be used for contingency planning in collaboration with policy makers to determine preferred control strategies in the event of a future outbreak of infectious disease in livestock.
Optimizing the Entrainment Geometry of a Dry Powder Inhaler: Methodology and Preliminary Results.
Kopsch, Thomas; Murnane, Darragh; Symons, Digby
2016-11-01
For passive dry powder inhalers (DPIs), entrainment and emission of the aerosolized drug dose depend strongly on device geometry and the patient's inhalation manoeuvre. We propose a computational method for optimizing the entrainment part of a DPI. The approach assumes that the pulmonary delivery location of aerosol can be determined by the timing of dose emission into the tidal airstream. An optimization algorithm was used to iteratively perform computational fluid dynamic (CFD) simulations of the drug emission of a DPI. The algorithm seeks to improve performance by changing the device geometry. Objectives were to achieve drug emission that was: A) independent of inhalation manoeuvre; B) similar to a target profile. The simulations used complete inhalation flow-rate profiles generated dependent on the device resistance. The CFD solver was OpenFOAM with drug/air flow simulated by the Eulerian-Eulerian method. To demonstrate the method, a 2D geometry was optimized for inhalation independence (comparing two breath profiles) and an early-bolus delivery. Entrainment was both shear-driven and gas-assisted. Optimization for a delay in the bolus delivery was not possible with the chosen geometry. Computational optimization of a DPI geometry for most similar drug delivery has been accomplished for an example entrainment geometry.
Intelligent design optimization of a shape-memory-alloy-actuated reconfigurable wing
NASA Astrophysics Data System (ADS)
Lagoudas, Dimitris C.; Strelec, Justin K.; Yen, John; Khan, Mohammad A.
2000-06-01
The unique thermal and mechanical properties offered by shape memory alloys (SMAs) present exciting possibilities in the field of aerospace engineering. When properly trained, SMA wires act as linear actuators by contracting when heated and returning to their original shape when cooled. It has been shown experimentally that the overall shape of an airfoil can be altered by activating several attached SMA wire actuators. This shape-change can effectively increase the efficiency of a wing in flight at several different flow regimes. To determine the necessary placement of these wire actuators within the wing, an optimization method that incorporates a fully-coupled structural, thermal, and aerodynamic analysis has been utilized. Due to the complexity of the fully-coupled analysis, intelligent optimization methods such as genetic algorithms have been used to efficiently converge to an optimal solution. The genetic algorithm used in this case is a hybrid version with global search and optimization capabilities augmented by the simplex method as a local search technique. For the reconfigurable wing, each chromosome represents a realizable airfoil configuration and its genes are the SMA actuators, described by their location and maximum transformation strain. The genetic algorithm has been used to optimize this design problem to maximize the lift-to-drag ratio for a reconfigured airfoil shape.
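The hybrid global-plus-local structure described above can be sketched with off-the-shelf optimizers: a GA-like global search (scipy's differential evolution) followed by Nelder-Mead simplex polishing. The Rastrigin test function below is an assumed stand-in for the coupled structural/thermal/aerodynamic objective.

```python
# Hybrid global + local search: a GA-like global optimizer followed by Nelder-Mead
# simplex polishing. The Rastrigin function is only a stand-in for the coupled
# SMA/structural/aerodynamic objective, which is not reproduced here.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def rastrigin(x):
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 4                 # e.g. four actuator placement variables
global_fit = differential_evolution(rastrigin, bounds, seed=42, polish=False)
local_fit = minimize(rastrigin, global_fit.x, method="Nelder-Mead")
print("global stage :", np.round(global_fit.x, 3), round(global_fit.fun, 4))
print("after simplex:", np.round(local_fit.x, 3), round(local_fit.fun, 6))
```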
Access to specialist care: Optimizing the geographic configuration of trauma systems.
Jansen, Jan O; Morrison, Jonathan J; Wang, Handing; He, Shan; Lawrenson, Robin; Hutchison, James D; Campbell, Marion K
2015-11-01
The optimal geographic configuration of health care systems is key to maximizing accessibility while promoting the efficient use of resources. This article reports the use of a novel approach to inform the optimal configuration of a national trauma system. This is a prospective cohort study of all trauma patients, 15 years and older, attended to by the Scottish Ambulance Service, between July 1, 2013, and June 30, 2014. Patients underwent notional triage to one of three levels of care (major trauma center [MTC], trauma unit, or local emergency hospital). We used geographic information systems software to calculate access times, by road and air, from all incident locations to all candidate hospitals. We then modeled the performance of all mathematically possible network configurations and used multiobjective optimization to determine geospatially optimized configurations. A total of 80,391 casualties were included. A network with only high- or moderate-volume MTCs (admitting at least 650 or 400 severely injured patients per year, respectively) would be optimally configured with a single MTC. A network accepting lower-volume MTCs (at least 240 severely injured patients per year) would be optimally configured with two MTCs. Both configurations would necessitate an increase in the number of helicopter retrievals. This study has shown that a novel combination of notional triage, network analysis, and mathematical optimization can be used to inform the planning of a national clinical network. Scotland's trauma system could be optimized with one or two MTCs. Care management study, level IV.
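The site-selection step resembles a p-median problem; the brute-force sketch below picks k major trauma centre sites from candidate hospitals to minimize casualty-weighted access time. The travel-time matrix and demand weights are synthetic assumptions, not the GIS-derived road and air times used in the study.

```python
# Brute-force p-median sketch: choose k MTC sites from candidate hospitals to
# minimize casualty-weighted access time. Travel times and weights are synthetic.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_incidents, n_hospitals, k = 500, 12, 2
travel_min = rng.uniform(5, 180, size=(n_incidents, n_hospitals))  # minutes (assumed)
weights = rng.integers(1, 5, size=n_incidents)                      # casualties per incident

best = None
for combo in itertools.combinations(range(n_hospitals), k):
    cost = np.sum(weights * travel_min[:, combo].min(axis=1))       # nearest chosen MTC
    if best is None or cost < best[0]:
        best = (cost, combo)
print(f"best {k}-MTC configuration: hospitals {best[1]}, "
      f"mean weighted access {best[0] / weights.sum():.1f} min")
```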
Park, Eun-Ah; Lee, Whal; Chung, Se-Young; Yin, Yong Hu; Chung, Jin Wook; Park, Jae Hyung
2010-01-01
To determine the optimal scan timing and adequate intravenous route for patients having undergone the Fontan operation. A total of 88 computed tomographic images in 49 consecutive patients who underwent the Fontan operation were retrospectively evaluated and divided into 7 groups: group 1, bolus-tracking method with either intravenous route (n = 20); group 2, 1-minute-delay scan with single antecubital route (n = 36); group 3, 1-minute-delay scan with both antecubital routes (n = 2); group 4, 1-minute-delay scan with foot vein route (n = 3); group 5, 1-minute-delay scan with simultaneous infusion via both antecubital and foot vein routes (n = 2); group 6, 3-minute-delay scan with single antecubital route (n = 22); and group 7, 3-minute-delay scan with foot vein route (n = 3). The presence of beam-hardening artifact, uniform enhancement, and optimal enhancement was evaluated at the right pulmonary artery (RPA), left pulmonary artery (LPA), and Fontan tract. Optimal enhancement was determined when evaluation of thrombus was possible. Standard deviation was measured at the RPA, LPA, and Fontan tract. Beam-hardening artifacts of the RPA, LPA, and Fontan tract were frequently present in groups 1, 4, and 5. The success rate of uniform and optimal enhancement was highest (100%) in groups 6 and 7, followed by group 2 (75%). An SD of less than 30 Hounsfield unit for the pulmonary artery and Fontan tract was found in groups 3, 6, and 7. The optimal enhancement of the pulmonary arteries and Fontan tract can be achieved by a 3-minute-delay scan irrespective of the intravenous route location.
Ahn, Yongjun; Yeo, Hwasoo
2015-01-01
The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated for city-level planning. The optimal charging-station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially in areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles.
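A toy density-versus-cost trade-off, under an assumed cost model (linear installation cost plus an access cost falling as one over the square root of density), shows how minimizing total cost yields a finite optimal charger density; the functional form and coefficients are hypothetical and are not the ERDEC model.

```python
# Toy density-vs-cost trade-off: station cost grows linearly with charger density
# while expected access (detour) cost falls as 1/sqrt(density). Functional form
# and coefficients are hypothetical; the point is only that total cost has a
# finite minimizing density.
import numpy as np

c_station = 40000.0        # assumed annualized cost per charger (currency/yr)
c_access = 900000.0        # assumed regional access-cost coefficient
density = np.linspace(0.05, 10, 2000)                # chargers per km^2
total = c_station * density + c_access / np.sqrt(density)

d_numeric = density[np.argmin(total)]
d_closed = (c_access / (2 * c_station)) ** (2 / 3)   # set d/dd(total) = 0
print(f"optimal density ~ {d_numeric:.2f} /km^2 (closed form {d_closed:.2f})")
```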
Ugarte, Juan P; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John
2014-01-01
There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips from stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping.
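For reference, a compact approximate entropy (ApEn) implementation in the standard Pincus formulation is given below; the paper's optimized, electrogram-specific variant is not reproduced, and the embedding dimension m and tolerance r are common defaults, with a synthetic signal in place of real electrograms.

```python
# Compact approximate entropy (ApEn) in the standard Pincus formulation.
# m and r are common defaults, and the test signals are synthetic stand-ins
# for organized vs. fractionated electrograms.
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, float)
    r = r_factor * np.std(x)             # tolerance scaled to signal amplitude

    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        # Chebyshev distance between every pair of embedded vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        C = np.mean(dist <= r, axis=1)   # self-matches included, as in standard ApEn
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
t = np.arange(1000) / 1000.0
organized = np.sin(2 * np.pi * 5 * t)                        # regular signal
irregular = organized + 0.8 * rng.standard_normal(t.size)    # "fractionated"-like
print("ApEn organized :", round(approximate_entropy(organized), 3))
print("ApEn irregular :", round(approximate_entropy(irregular), 3))
```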
Goldman, A. J.
2006-01-01
Dr. Christoph Witzgall, the honoree of this Symposium, can count among his many contributions to applied mathematics and mathematical operations research a body of widely-recognized work on the optimal location of facilities. The present paper offers to non-specialists a sketch of that field and its evolution, with emphasis on areas most closely related to Witzgall’s research at NBS/NIST. PMID:27274920
NASA Astrophysics Data System (ADS)
Dar, Zamiyad
The prices in the electricity market change every five minutes. The prices in peak demand hours can be four or five times more than the prices in normal off-peak hours. Renewable energy such as wind power has zero marginal cost and a large percentage of wind energy in a power grid can reduce the price significantly. The variability of wind power prevents it from being constantly available in peak hours. The price differentials between off-peak and on-peak hours due to wind power variations provide an opportunity for a storage device owner to buy energy at a low price and sell it in high-price hours. In a large and complex power grid, there are many locations for installation of a storage device. Storage device owners prefer to install their device at locations that allow them to maximize profit. Market participants do not possess much information about the system operator's dispatch, power grid, competing generators and transmission system. The publicly available data from the system operator usually consists of Locational Marginal Prices (LMP), load, reserve prices and regulation prices. In this thesis, we develop a method to find the optimum location of a storage device without using the grid, transmission or generator data. We formulate and solve an optimization problem to find the most profitable location for a storage device using only the publicly available market pricing data such as LMPs and reserve prices. We consider constraints arising due to storage device operation limitations in our objective function. We use binary optimization and the branch-and-bound method to optimize the operation of a storage device at a given location to earn maximum profit. We use two different versions of our method and optimize the profitability of a storage unit at each location in a 36-bus model of the northeastern United States and southeastern Canada for four representative days, one for each season of the year. Finally, we compare our results from the two versions of our method with a multi-period stochastically optimized economic dispatch of the same power system with storage devices at the locations proposed by our method. We observe a small gap in profit values arising due to the effect of the storage device on market prices. However, we observe that the ranking of different locations in terms of profitability remains almost unchanged. This leads us to conclude that our method can successfully predict the optimum locations for installation of storage units in a complex grid using only the publicly available electricity market data.
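The per-location profit subproblem has the structure of a small arbitrage optimization: buy at low LMPs, sell at high LMPs, subject to power and state-of-charge limits. The relaxed linear-programming sketch below, with made-up hourly prices, only illustrates that structure; the thesis itself formulates a binary program solved by branch and bound.

```python
# Per-location storage arbitrage as a small relaxed LP: buy at low LMPs, sell at
# high LMPs, subject to power and state-of-charge limits. Prices and device
# parameters are made up; the thesis solves a binary program by branch and bound.
import numpy as np
from scipy.optimize import linprog

lmp = np.array([22, 20, 19, 18, 21, 30, 55, 80, 60, 45, 35, 28.0])  # $/MWh, assumed
T, p_max, e_max, eta = len(lmp), 10.0, 40.0, 0.9

# decision variables x = [charge_0..T-1, discharge_0..T-1] in MW (1-hour steps)
c = np.concatenate([lmp, -lmp])          # minimizing cost maximizes profit
A_ub, b_ub = [], []
for t in range(T):                       # keep 0 <= SOC_t <= e_max at every hour
    row = np.zeros(2 * T)
    row[:t + 1] = eta                    # charging adds eta * c_tau
    row[T:T + t + 1] = -1.0 / eta        # discharging removes d_tau / eta
    A_ub.append(row)
    b_ub.append(e_max)                   # SOC_t <= e_max
    A_ub.append(-row)
    b_ub.append(0.0)                     # SOC_t >= 0
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, p_max)] * (2 * T), method="highs")
print("arbitrage profit at this bus: $", round(-res.fun, 2))
print("charge MW   :", np.round(res.x[:T], 1))
print("discharge MW:", np.round(res.x[T:], 1))
```

Running the same optimization with the LMP series of each candidate bus and ranking the resulting profits mirrors the location-ranking idea described above.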
Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.
Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang
2016-11-01
Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types. Both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.
Correlative Stochastic Optical Reconstruction Microscopy and Electron Microscopy
Kim, Doory; Deerinck, Thomas J.; Sigal, Yaron M.; Babcock, Hazen P.; Ellisman, Mark H.; Zhuang, Xiaowei
2015-01-01
Correlative fluorescence light microscopy and electron microscopy allows the imaging of spatial distributions of specific biomolecules in the context of cellular ultrastructure. Recent development of super-resolution fluorescence microscopy allows the location of molecules to be determined with nanometer-scale spatial resolution. However, correlative super-resolution fluorescence microscopy and electron microscopy (EM) still remains challenging because the optimal specimen preparation and imaging conditions for super-resolution fluorescence microscopy and EM are often not compatible. Here, we have developed several experiment protocols for correlative stochastic optical reconstruction microscopy (STORM) and EM methods, both for un-embedded samples by applying EM-specific sample preparations after STORM imaging and for embedded and sectioned samples by optimizing the fluorescence under EM fixation, staining and embedding conditions. We demonstrated these methods using a variety of cellular targets. PMID:25874453
Seismic low-frequency-based calculation of reservoir fluid mobility and its applications
NASA Astrophysics Data System (ADS)
Chen, Xue-Hua; He, Zhen-Hua; Zhu, Si-Xin; Liu, Wei; Zhong, Wen-Li
2012-06-01
Low frequency content of seismic signals contains information related to the reservoir fluid mobility. Based on the asymptotic analysis theory of frequency-dependent reflectivity from a fluid-saturated poroelastic medium, we derive the computational implementation of reservoir fluid mobility and present the determination of optimal frequency in the implementation. We then calculate the reservoir fluid mobility using the optimal frequency instantaneous spectra at the low-frequency end of the seismic spectrum. The methodology is applied to synthetic seismic data from a permeable gas-bearing reservoir model and real land and marine seismic data. The results demonstrate that the fluid mobility shows excellent quality in imaging the gas reservoirs. It is feasible to detect the location and spatial distribution of gas reservoirs and reduce the non-uniqueness and uncertainty in fluid identification.
LTCC Thick Film Process Characterization
Girardi, M. A.; Peterson, K. A.; Vianco, P. T.
2016-05-01
Low temperature cofired ceramic (LTCC) technology has proven itself in military/space electronics, wireless communication, microsystems, medical and automotive electronics, and sensors. The use of LTCC for high frequency applications is appealing due to its low losses, design flexibility and packaging and integration capability. Here, we summarize the LTCC thick film process, including some unconventional process steps such as feature machining in the unfired state and thin film definition of outer layer conductors. The LTCC thick film process was characterized to optimize process yields by focusing on these factors: 1) Print location, 2) Print thickness, 3) Drying of tapes and panels, 4) Shrinkage upon firing, and 5) Via topography. Statistical methods were used to analyze critical process and product characteristics in working toward that optimization goal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hellfeld, Daniel; Barton, Paul; Gunter, Donald
Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14-cm-diameter sphere with 192 available detector locations.
Modeling Road Vulnerability to Snow Using Mixed Integer Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Tony K; Omitaomu, Olufemi A; Ostrowski, James A
As the number and severity of snowfall events continue to grow, the need to intelligently direct road maintenance during these snowfall events will also grow. In several locations, local governments lack the resources to completely treat all roadways during snow events. Furthermore, some governments utilize only traffic data to determine which roads should be treated. As a result, many schools, businesses, and government offices must be unnecessarily closed, which directly impacts the social, educational, and economic well-being of citizens and institutions. In this work, we propose a mixed integer programming formulation to optimally allocate resources to manage snowfall on roads using meteorological, geographical, and environmental parameters. Additionally, we evaluate the impacts of an increase in budget for winter road maintenance on snow control resources.
Soley, Micheline B; Markmann, Andreas; Batista, Victor S
2018-06-12
We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.
NASA Astrophysics Data System (ADS)
Yang, Peng; Peng, Yongfei; Ye, Bin; Miao, Lixin
2017-09-01
This article explores the integrated optimization problem of location assignment and sequencing in multi-shuttle automated storage/retrieval systems under the modified 2n-command cycle pattern. The decisions of storage and retrieval (S/R) location assignment and S/R request sequencing are jointly considered. An integer quadratic programming model is formulated to describe this integrated optimization problem. The optimal travel cycles for multi-shuttle S/R machines can be obtained to process S/R requests in the storage and retrieval request order lists by solving the model. The small-sized instances are optimally solved using CPLEX. For large-sized problems, two tabu search algorithms are proposed, in which the first-come, first-served and nearest-neighbour rules are used to generate initial solutions. Various numerical experiments are conducted to examine the heuristics' performance and the sensitivity of algorithm parameters. Furthermore, the experimental results are analysed from the viewpoint of practical application, and a parameter list for applying the proposed heuristics is recommended under different real-life scenarios.
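As an illustration of the heuristic described above, the following Python sketch seeds a swap-neighbourhood tabu search with a nearest-neighbour initial sequence. It is a minimal stand-in, not the paper's algorithm: the travel-time matrix, tabu tenure, and stopping rule are hypothetical placeholders, and the actual model would also handle the 2n-command cycle pattern and location assignment.

import random

def nearest_neighbour_order(travel, start=0):
    """Greedy initial sequence: always visit the closest unvisited request next."""
    n = len(travel)
    unvisited = set(range(n)) - {start}
    order = [start]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: travel[last][j])
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def tour_cost(order, travel):
    return sum(travel[a][b] for a, b in zip(order, order[1:]))

def tabu_search(travel, iters=500, tenure=10):
    """Swap-neighbourhood tabu search seeded with the nearest-neighbour sequence."""
    current = nearest_neighbour_order(travel)
    best, best_cost = current[:], tour_cost(current, travel)
    tabu = {}  # (i, j) swap -> iteration until which the move stays tabu
    for it in range(iters):
        candidates = []
        for i in range(1, len(current) - 1):
            for j in range(i + 1, len(current)):
                if tabu.get((i, j), -1) >= it:
                    continue
                neighbour = current[:]
                neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
                candidates.append((tour_cost(neighbour, travel), (i, j), neighbour))
        if not candidates:
            break
        cost, move, current = min(candidates)
        tabu[move] = it + tenure
        if cost < best_cost:
            best, best_cost = current[:], cost
    return best, best_cost

# Hypothetical symmetric travel-time matrix between 8 S/R locations.
random.seed(1)
n = 8
travel = [[0 if i == j else random.randint(5, 30) for j in range(n)] for i in range(n)]
for i in range(n):
    for j in range(i):
        travel[i][j] = travel[j][i]
print(tabu_search(travel))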
Optimization of the Hartmann-Shack microlens array
NASA Astrophysics Data System (ADS)
de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William
2011-04-01
In this work we propose to optimize the microlens-array geometry for a Hartmann-Shack wavefront sensor. The optimization makes possible that regular microlens arrays with a larger number of microlenses are replaced by arrays with fewer microlenses located at optimal sampling positions, with no increase in the reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for a known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or the number of necessary microlenses in the array. We numerically generate, sample and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can be used to produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics is known.
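A minimal sketch of the genetic-algorithm idea, under stated assumptions: reconstruction_error below is only a placeholder that rewards well-spread sampling points and stands in for the wavefront-simulation error used in the study, and the pupil grid, population size, and operators are illustrative choices, not the authors' settings.

import random
import math

CANDIDATES = [(x, y) for x in range(6) for y in range(6)
              if (x - 2.5) ** 2 + (y - 2.5) ** 2 <= 9]  # candidate positions inside a pupil

def reconstruction_error(indices):
    """Placeholder score standing in for the simulated wavefront reconstruction error.
    Here we simply reward well-spread sampling points (smaller is better)."""
    pts = [CANDIDATES[i] for i in indices]
    min_sep = min(math.dist(a, b) for a in pts for b in pts if a != b)
    return -min_sep

def genetic_algorithm(k=10, pop_size=40, generations=200, mut_rate=0.3):
    def random_individual():
        return random.sample(range(len(CANDIDATES)), k)
    def crossover(a, b):
        pool = list(set(a) | set(b))
        return random.sample(pool, k)
    def mutate(ind):
        if random.random() < mut_rate:
            out = ind[:]
            out[random.randrange(k)] = random.choice(
                [i for i in range(len(CANDIDATES)) if i not in out])
            return out
        return ind

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=reconstruction_error)     # keep the fittest half, refill by breeding
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    best = min(pop, key=reconstruction_error)
    return [CANDIDATES[i] for i in best], reconstruction_error(best)

random.seed(0)
print(genetic_algorithm())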
Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.
2017-12-01
We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.
Li, Shuang-Shuang; Pan, Shuo; Ma, Yi-Tong; Yang, Yi-Ning; Ma, Xiang; Li, Xiao-Mei; Fu, Zhen-Yan; Xie, Xiang; Liu, Fen; Chen, You; Chen, Bang-Dang; Yu, Zi-Xiang; He, Chun-Hui; Zheng, Ying-Ying; Abudukeremu, Nuremanguli; Abuzhalihan, Jialin; Wang, Yong-Tao
2014-07-29
The optimal cutoff of the waist-to-hip ratio (WHR) among Han adults in Xinjiang, which is located in the center of Asia, is unknown. We aimed to examine the relationship between different WHRs and cardiovascular risk factors among Han adults in Xinjiang, and determine the optimal cutoff of the WHR. The Cardiovascular Risk Survey was conducted from October 2007 to March 2010. A total of 14618 representative participants were selected using a four-stage stratified sampling method. A total of 5757 Han participants were included in the study. The present statistical analysis was restricted to the 5595 Han subjects who had complete anthropometric data. The sensitivity, specificity, and distance on the receiver operating characteristic (ROC) curve in each WHR level were calculated. The shortest distance in the ROC curves was used to determine the optimal cutoff of the WHR for detecting cardiovascular risk factors. In women, the WHR was positively associated with systolic blood pressure, diastolic blood pressure, and serum concentrations of total cholesterol. The prevalence of hypertension and hypertriglyceridemia increased as the WHR increased. The same results were not observed among men. The optimal WHR cutoffs for predicting hypertension, diabetes, dyslipidemia and ≥ two of these risk factors for Han adults in Xinjiang were 0.92, 0.92, 0.91, 0.92 in men and 0.88, 0.89, 0.88, 0.89 in women, respectively. Higher cutoffs for the WHR are required in the identification of Han adults aged ≥ 35 years with a high risk of cardiovascular diseases in Xinjiang.
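A minimal sketch of the shortest-distance-on-the-ROC-curve rule used to select the cutoff; the WHR values and hypertension labels below are synthetic placeholders, not data from the survey.

import numpy as np

def optimal_cutoff_roc(values, labels):
    """Pick the cutoff whose ROC point (1 - specificity, sensitivity) lies closest to (0, 1)."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best_cut, best_dist = None, np.inf
    for cut in np.unique(values):
        pred = values >= cut
        sens = (pred & labels).sum() / labels.sum()
        spec = (~pred & ~labels).sum() / (~labels).sum()
        dist = np.hypot(1 - sens, 1 - spec)  # distance to the ideal corner (0, 1)
        if dist < best_dist:
            best_cut, best_dist = cut, dist
    return best_cut, best_dist

# Synthetic example: WHR values and hypertension status (hypothetical data).
rng = np.random.default_rng(0)
whr = np.concatenate([rng.normal(0.86, 0.04, 500), rng.normal(0.93, 0.04, 200)])
hyp = np.concatenate([np.zeros(500, bool), np.ones(200, bool)])
print(optimal_cutoff_roc(whr, hyp))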
NASA Astrophysics Data System (ADS)
Zhang, Pengpeng
The Leksell Gamma Knife® (LGK) is a tool for providing accurate stereotactic radiosurgical treatment of brain lesions, especially tumors. Currently, the treatment planning team "forward" plans radiation treatment parameters while viewing a series of 2D MR scans. This primarily manual process is cumbersome and time consuming because of the difficulty in visualizing the large search space for the radiation parameters (i.e., shot overlap, number, location, size, and weight). I hypothesize that a computer-aided "inverse" planning procedure that utilizes tumor geometry and treatment goals could significantly improve the planning process and therapeutic outcome of LGK radiosurgery. My basic observation is that the treatment team is best at identifying the location of the lesion and prescribing a lethal, yet safe, radiation dose. The treatment planning computer is best at determining both the 3D tumor geometry and the optimal LGK shot parameters necessary to deliver a desirable dose pattern to the tumor while sparing adjacent normal tissue. My treatment planning procedure asks the neurosurgeon to identify the tumor and critical structures in MR images and the oncologist to prescribe a tumoricidal radiation dose. Computer assistance begins with geometric modeling of the 3D tumor's medial axis properties, using a new algorithm, a Gradient-Phase Plot (G-P Plot) decomposition of the tumor object's medial axis. I have found that medial axis seeding, while insufficient in most cases to produce an acceptable treatment plan, greatly reduces the solution space for Guided Evolutionary Simulated Annealing (GESA) treatment plan optimization by specifying an initial estimate for shot number, size, and location, but not weight. These estimates are used to generate multiple initial plans, which become the seed plans for GESA. The shot location and weight parameters evolve and compete in the GESA procedure. The GESA objective function optimizes tumor irradiation (i.e., as close to the prescribed dose as possible) and minimizes normal tissue and critical structure damage. In tests of five patient data sets (4 acoustic neuromas and 1 meningioma), the G-P Plot/GESA-generated treatment plans improved conformality of the lethal dose to the tumor, required no human interaction, improved dose homogeneity, suggested use of fewer shots, and reduced treatment administration time.
He, Jianjun; Gu, Hong; Liu, Wenqi
2012-01-01
It is well known that an important step toward understanding the functions of a protein is to determine its subcellular location. Although numerous prediction algorithms have been developed, most of them typically focused on proteins with only one location. In recent years, researchers have begun to pay attention to the subcellular localization prediction of proteins with multiple sites. However, almost all the existing approaches have failed to take into account the correlations among the locations caused by proteins with multiple sites, which may be important information for improving the prediction accuracy for such proteins. In this paper, a new algorithm which can effectively exploit the correlations among the locations is proposed by using a Gaussian process model. Besides, the algorithm can also realize an optimal linear combination of various feature extraction technologies and is robust to imbalanced data sets. Experimental results on a human protein data set show that the proposed algorithm is valid and can achieve better performance than the existing approaches.
Adaptive mass expulsion attitude control system
NASA Technical Reports Server (NTRS)
Rodden, John J. (Inventor); Stevens, Homer D. (Inventor); Carrou, Stephane (Inventor)
2001-01-01
An attitude control system and method operative with a thruster controls the attitude of a vehicle carrying the thruster, wherein the thruster has a valve enabling the formation of pulses of expelled gas from a source of compressed gas. Data of the attitude of the vehicle is gathered, wherein the vehicle is located within a force field tending to orient the vehicle in a first attitude different from a desired attitude. The attitude data is evaluated to determine a pattern of values of attitude of the vehicle in response to the gas pulses of the thruster and in response to the force field. The system and the method maintain the attitude within a predetermined band of values of attitude which includes the desired attitude. Computation circuitry establishes an optimal duration of each of the gas pulses based on the pattern of values of attitude, the optimal duration providing for a minimal number of opening and closure operations of the valve. The thruster is operated to provide gas pulses having the optimal duration.
Simple control laws for low-thrust orbit transfers
NASA Technical Reports Server (NTRS)
Petropoulos, Anastassios E.
2003-01-01
Two methods are presented by which to determine both a thrust direction and when to apply thrust to effect specified changes in any of the orbit elements except for true anomaly, which is assumed free. The central body is assumed to be a point mass, and the initial and final orbits are assumed closed. Thrust, when on, is of a constant value, and specific impulse is constant. The thrust profiles derived from the two methods are not propellant-optimal, but are based firstly on the optimal thrust directions and location on the osculating orbit for changing each of the orbit elements and secondly on the desired changes in the orbit elements. Two examples of transfers are presented, one in semimajor axis and inclination, and one in semimajor axis and eccentricity. The latter compares favourably with a propellant-optimized transfer between the same orbits. The control laws have few input parameters, but can still capture the complexity of a wide variety of orbit transfers.
Artificial Intelligence based technique for BTS placement
NASA Astrophysics Data System (ADS)
Alenoghena, C. O.; Emagbetere, J. O.; Aibinu, A. M.
2013-12-01
The increase of base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners, and this decision is not foolproof against regulatory requirements. In this paper, an intelligent algorithm for optimal BTS site placement is proposed. The proposed technique objectively takes neighbourhood and regulatory considerations into account while determining the cell site. Its application leads to a quantitatively unbiased decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results show a 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of GA with the neighbourhood constraint indicate that the choice of location can be unbiased and optimization of facility placement for network design can be carried out.
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300 m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses with LAI and clip harvest data to determine whether LAI can be used as a suitable proxy for aboveground standing biomass. We also compared optimal sample sizes derived from LAI data and from clip-harvest data for two different clip harvest areas (0.1 m by 1 m vs. 0.1 m by 2 m). Sample sizes were calculated in order to estimate the mean to within a standardized level of uncertainty that will be used to guide sampling effort across all vegetation types (i.e., estimated to within ±10% with 95% confidence). Finally, we employed a semivariogram approach to determine optimal sample size and spacing.
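A hedged sketch of the standard normal-approximation sample-size calculation implied by the stated target (mean estimated to within ±10% with 95% confidence); the pilot coefficients of variation below are placeholders, not NEON results.

import math

def sample_size_for_relative_margin(cv, rel_margin=0.10, z=1.96):
    """Number of plots needed so the mean is estimated to within ±rel_margin
    (as a fraction of the mean) with ~95% confidence, given a pilot coefficient
    of variation cv = sd / mean. Simple normal-approximation formula."""
    return math.ceil((z * cv / rel_margin) ** 2)

# Hypothetical pilot coefficients of variation for the two measurement types.
for label, cv in [("LAI proxy", 0.35), ("clip harvest 0.1 x 2 m", 0.45)]:
    print(label, sample_size_for_relative_margin(cv))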
Recent developments in imaging system assessment methodology, FROC analysis and the search model.
Chakraborty, Dev P
2011-08-21
A frequent problem in imaging is assessing whether a new imaging system is an improvement over an existing standard. Observer performance methods, in particular the receiver operating characteristic (ROC) paradigm, are widely used in this context. In ROC analysis lesion location information is not used and consequently scoring ambiguities can arise in tasks, such as nodule detection, involving finding localized lesions. This paper reviews progress in the free-response ROC (FROC) paradigm in which the observer marks and rates suspicious regions and the location information is used to determine whether lesions were correctly localized. Reviewed are FROC data analysis, a search-model for simulating FROC data, predictions of the model and a method for estimating the parameters. The search model parameters are physically meaningful quantities that can guide system optimization.
FEC decoder design optimization for mobile satellite communications
NASA Technical Reports Server (NTRS)
Roy, Ashim; Lewi, Leng
1990-01-01
A new telecommunications service for location determination via satellite is being proposed for the continental USA and Europe, which provides users with the capability to find the location of, and communicate from, a moving vehicle to a central hub and vice versa. This communications system is expected to operate in an extremely noisy channel in the presence of fading. In order to achieve high levels of data integrity, it is essential to employ forward error correcting (FEC) encoding and decoding techniques in such mobile satellite systems. A constraint length k = 7 FEC decoder has been implemented in a single chip for such systems. The single chip implementation of the maximum likelihood decoder helps to minimize the cost, size, and power consumption, and improves the bit error rate (BER) performance of the mobile earth terminal (MET).
On localizing a capsule endoscope using magnetic sensors.
Moussakhani, Babak; Ramstad, Tor; Flåm, John T; Balasingham, Ilangko
2012-01-01
In this work, localizing a capsule endoscope within the gastrointestinal tract is addressed. It is assumed that the capsule is equipped with a magnet, and that a magnetic sensor network measures the flux from this magnet. We assume no prior knowledge on the source location, and that the measurements collected by the sensors are corrupted by thermal Gaussian noise only. Under these assumptions, we focus on determining the Cramer-Rao Lower Bound (CRLB) for the location of the endoscope. Thus, we are not studying specific estimators, but rather the theoretical performance of an optimal one. It is demonstrated that the CRLB is a function of the distance and angle between the sensor network and the magnet. By studying the CRLB with respect to different sensor array constellations, we are able to indicate favorable constellations.
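A generic numerical CRLB sketch under additive Gaussian noise: the Fisher information is J^T J / sigma^2, where J is the Jacobian of the measurement model with respect to the source position. The dipole-field model, sensor layout, magnet moment, and noise level below are simplified assumptions, not the authors' exact setup.

import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi), SI units

def dipole_field(src, moment, sensor):
    """Magnetic flux density of a point dipole at 'src' observed at 'sensor'."""
    r = sensor - src
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0_4PI * (3 * rhat * np.dot(moment, rhat) - moment) / d ** 3

def crlb_position(src, moment, sensors, sigma):
    """CRLB on the source position for Gaussian noise of std sigma per field component."""
    def model(p):
        return np.concatenate([dipole_field(p, moment, s) for s in sensors])
    eps = 1e-6
    # Central-difference Jacobian of the stacked measurements w.r.t. (x, y, z).
    J = np.column_stack([
        (model(src + eps * np.eye(3)[k]) - model(src - eps * np.eye(3)[k])) / (2 * eps)
        for k in range(3)])
    fim = J.T @ J / sigma ** 2
    return np.linalg.inv(fim)  # diagonal entries: variance lower bounds on x, y, z

# Hypothetical planar 3x3 sensor array 10 cm from the capsule, 1 A*m^2 magnet, 1 nT noise.
sensors = np.array([[x, y, 0.0] for x in (-0.05, 0.0, 0.05) for y in (-0.05, 0.0, 0.05)])
crlb = crlb_position(np.array([0.0, 0.0, 0.10]), np.array([0.0, 0.0, 1.0]), sensors, 1e-9)
print(np.sqrt(np.diag(crlb)))  # best-achievable std dev (m) of any unbiased estimator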
NASA Astrophysics Data System (ADS)
Vasil'ev, E. N.
2018-04-01
Numerical simulation is performed for heat transfer in a heat distributer of a thermoelectric cooling system, which is located between the heat-loaded element and the thermoelectric module for matching their sizes and for heat flux equalization. The dependences of the characteristic temperature and thermal resistance of copper and aluminum heat distributers on the distributer thickness and on the size of the heat-loaded element are determined. A comparative analysis is carried out to determine the effect of the thermal conductivity of the material and of the geometrical parameters on the thermal resistance. The optimal thickness of the heat distributer as a function of the size of the heat-loaded element is determined.
Smalling, K.L.; Kuivila, K.M.
2008-01-01
A multi-residue method was developed for the simultaneous determination of 85 current-use and legacy organochlorine pesticides in a single sediment sample. After microwave-assisted extraction, clean-up of samples was optimized using gel permeation chromatography and either stacked carbon and alumina solid-phase extraction cartridges or a deactivated Florisil column. Analytes were determined by gas chromatography with ion-trap mass spectrometry and electron capture detection. Method detection limits ranged from 0.6 to 8.9 µg/kg dry weight. Bed and suspended sediments from a variety of locations were analyzed to validate the method and 29 pesticides, including at least 1 from every class, were detected.
Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian
2016-10-27
To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic for practical applications due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and its simplified model are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with different sensor information credibility functions. Next, the models are extended and algorithms are developed to obtain analytic results. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of end restrictions but dependent on the values of model parameters that represent the physical conditions of sensors and roads.
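The paper's credibility functions and benefit model are not reproduced here; the toy sketch below only illustrates, under an assumed exponentially decaying credibility with distance, how a benefit-minus-cost criterion can select the number and spacing of sensors on one segment.

import numpy as np

def segment_benefit(n_sensors, length_km, decay_km, unit_benefit=1.0):
    """Benefit of equally spaced sensors on one segment, assuming the information
    credibility of a sensor decays as exp(-d / decay_km) with distance d and the
    segment benefit is the mean credibility times segment length (toy assumption)."""
    if n_sensors == 0:
        return 0.0
    spacing = length_km / n_sensors
    x = np.linspace(0.0, length_km, 2000)
    centers = (np.arange(n_sensors) + 0.5) * spacing
    credibility = np.exp(-np.abs(x[:, None] - centers[None, :]) / decay_km).max(axis=1)
    return unit_benefit * credibility.mean() * length_km

def optimal_sensor_count(length_km, decay_km, sensor_cost, n_max=50):
    """Pick the number of sensors maximizing net benefit = benefit - cost."""
    nets = [segment_benefit(n, length_km, decay_km) - sensor_cost * n
            for n in range(n_max + 1)]
    best = int(np.argmax(nets))
    return best, (length_km / best if best else None)

print(optimal_sensor_count(length_km=20.0, decay_km=1.5, sensor_cost=0.8))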
Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian
2016-01-01
To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic for practical applications due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and its simplified model are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with different sensor information credibility functions. Next, the models are extended and algorithms are developed to obtain analytic results. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of end restrictions but dependent on the values of model parameters that represent the physical conditions of sensors and roads. PMID:27801794
Design of a multifiber light delivery system for photoacoustic-guided surgery.
Eddins, Blackberrie; Bell, Muyinatu A Lediju
2017-04-01
This work explores light delivery optimization for photoacoustic-guided minimally invasive surgeries, such as the endonasal transsphenoidal approach. Monte Carlo simulations were employed to study three-dimensional light propagation in tissue, comprising one or two 4-mm diameter arteries located 3 mm below bone, an absorbing metallic drill contacting the bone surface, and a single light source placed next to the 2.4-mm diameter drill shaft with a 2.9-mm diameter spherical drill tip. The optimal fiber distance from the drill shaft was determined from the maximum normalized fluence to the underlying artery. Using this optimal fiber-to-drill shaft distance, Zemax simulations were employed to propagate Gaussian beams through one or more 600 micron-core diameter optical fibers for detection on the bone surface. When the number of equally spaced fibers surrounding the drill increased, a single merged optical profile formed with seven or more fibers, determined by thresholding the resulting light profile images at 1/e times the maximum intensity. We used these simulations to inform design requirements, build a one to seven multifiber light delivery prototype to surround a surgical drill, and demonstrate its ability to simultaneously visualize the tool tip and blood vessel targets in the absence and presence of bone. The results and methodology are generalizable to multiple interventional photoacoustic applications.
NASA Astrophysics Data System (ADS)
Koyuncu, A.; Cigeroglu, E.; Özgüven, H. N.
2017-10-01
In this study, a new approach is proposed for identification of structural nonlinearities by employing cascaded optimization and neural networks. The linear finite element model of the system and frequency response functions measured at arbitrary locations of the system are used in this approach. Using the finite element model, a training data set is created, which appropriately spans the possible nonlinear configuration space of the system. A classification neural network trained on these data sets then localizes and determines the types of all nonlinearities associated with the nonlinear degrees of freedom in the system. A new training data set spanning the parametric space associated with the determined nonlinearities is created to facilitate parametric identification. Utilizing this data set, initially, a feed-forward regression neural network is trained, which parametrically identifies the classified nonlinearities. Then, the results obtained are further improved by carrying out an optimization which uses the network-identified values as starting points. Unlike identification methods available in the literature, the proposed approach does not require data collection from the degrees of freedom where nonlinear elements are attached, and furthermore, it is sufficiently accurate even in the presence of measurement noise. The application of the proposed approach is demonstrated on an example system with nonlinear elements and on a real-life experimental setup with a local nonlinearity.
Design of a multifiber light delivery system for photoacoustic-guided surgery
NASA Astrophysics Data System (ADS)
Eddins, Blackberrie; Bell, Muyinatu A. Lediju
2017-04-01
This work explores light delivery optimization for photoacoustic-guided minimally invasive surgeries, such as the endonasal transsphenoidal approach. Monte Carlo simulations were employed to study three-dimensional light propagation in tissue, comprising one or two 4-mm diameter arteries located 3 mm below bone, an absorbing metallic drill contacting the bone surface, and a single light source placed next to the 2.4-mm diameter drill shaft with a 2.9-mm diameter spherical drill tip. The optimal fiber distance from the drill shaft was determined from the maximum normalized fluence to the underlying artery. Using this optimal fiber-to-drill shaft distance, Zemax simulations were employed to propagate Gaussian beams through one or more 600 micron-core diameter optical fibers for detection on the bone surface. When the number of equally spaced fibers surrounding the drill increased, a single merged optical profile formed with seven or more fibers, determined by thresholding the resulting light profile images at 1/e times the maximum intensity. We used these simulations to inform design requirements, build a one to seven multifiber light delivery prototype to surround a surgical drill, and demonstrate its ability to simultaneously visualize the tool tip and blood vessel targets in the absence and presence of bone. The results and methodology are generalizable to multiple interventional photoacoustic applications.
A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania.
Bradford, Kathryn; Abrahams, Leslie; Hegglin, Miriam; Klima, Kelly
2015-10-06
With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at-risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare data sets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.
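As a rough illustration of the placement step, the sketch below greedily adds the candidate site that most reduces the vulnerability-weighted distance to the nearest cooling center, in the spirit of a p-median heuristic; the block locations, vulnerability scores, and candidate grid are synthetic, not the Pittsburgh inputs.

import math
import random

def weighted_cost(blocks, centers):
    """Sum over blocks of vulnerability * distance to the nearest chosen center."""
    return sum(w * min(math.dist((x, y), c) for c in centers) for x, y, w in blocks)

def greedy_place_centers(blocks, candidates, k):
    """Greedily add k centers, each time picking the candidate that lowers the cost most."""
    chosen = []
    for _ in range(k):
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: weighted_cost(blocks, chosen + [c]))
        chosen.append(best)
    return chosen

# Synthetic city: blocks as (x, y, vulnerability index); candidate sites on a grid.
random.seed(2)
blocks = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0.1, 1.0))
          for _ in range(300)]
candidates = [(x, y) for x in range(0, 11, 2) for y in range(0, 11, 2)]
print(greedy_place_centers(blocks, candidates, k=3))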
Three-Phase AC Optimal Power Flow Based Distribution Locational Marginal Price: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Rui; Zhang, Yingchen
2017-05-17
Designing market mechanisms for electricity distribution systems has been a hot topic due to the increased presence of smart loads and distributed energy resources (DERs) in distribution systems. The distribution locational marginal pricing (DLMP) methodology is one of the real-time pricing methods to enable such market mechanisms and provide economic incentives to active market participants. Determining the DLMP is challenging due to high power losses, the voltage volatility, and the phase imbalance in distribution systems. Existing DC Optimal Power Flow (OPF) approaches are unable to model power losses and the reactive power, while single-phase AC OPF methods cannot capture the phase imbalance. To address these challenges, in this paper, a three-phase AC OPF based approach is developed to define and calculate DLMP accurately. The DLMP is modeled as the marginal cost to serve an incremental unit of demand at a specific phase at a certain bus, and is calculated using the Lagrange multipliers in the three-phase AC OPF formulation. Extensive case studies have been conducted to understand the impact of system losses and the phase imbalance on DLMPs as well as the potential benefits of flexible resources.
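The three-phase AC OPF itself is beyond a short sketch, but the core idea that a locational price is the Lagrange multiplier of a nodal balance constraint can be illustrated with a toy single-bus economic dispatch LP; the generator data are hypothetical, and the dual sign convention may differ between solvers.

import numpy as np
from scipy.optimize import linprog

# Two generators serving a 150 MW demand at one bus (toy stand-in for an OPF).
costs = np.array([20.0, 50.0])      # $/MWh marginal cost of each generator
p_max = np.array([100.0, 200.0])    # MW capacity limits
demand = 150.0                      # MW

res = linprog(
    c=costs,
    A_eq=np.ones((1, 2)), b_eq=[demand],          # power balance at the bus
    bounds=[(0, p_max[0]), (0, p_max[1])],
    method="highs",
)

print("dispatch (MW):", res.x)
# The dual (Lagrange multiplier) of the balance constraint is the marginal
# cost of serving one more MW at this bus -- the locational price. Sign
# conventions can differ between solvers, so the absolute value is shown.
print("marginal price ($/MWh):", abs(res.eqlin.marginals[0]))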
Image-guided laser projection for port placement in minimally invasive surgery.
Marmurek, Jonathan; Wedlake, Chris; Pardasani, Utsav; Eagleson, Roy; Peters, Terry
2006-01-01
We present an application of an augmented reality laser projection system in which procedure-specific optimal incision sites, computed from pre-operative image acquisition, are superimposed on a patient to guide port placement in minimally invasive surgery. Tests were conducted to evaluate the fidelity of computed and measured port configurations, and to validate the accuracy with which a surgical tool-tip can be placed at an identified virtual target. A high resolution volumetric image of a thorax phantom was acquired using helical computed tomography imaging. Oriented within the thorax, a phantom organ with marked targets was visualized in a virtual environment. A graphical interface enabled marking the locations of target anatomy, and calculation of a grid of potential port locations along the intercostal rib lines. Optimal configurations of port positions and tool orientations were determined by an objective measure reflecting image-based indices of surgical dexterity, hand-eye alignment, and collision detection. Intra-operative registration of the computed virtual model and the phantom anatomy was performed using an optical tracking system. Initial trials demonstrated that computed and projected port placement provided direct access to target anatomy with an accuracy of 2 mm.
NASA Astrophysics Data System (ADS)
Izadi, Arman; Kimiagari, Ali mohammad
2014-01-01
Distribution network design as a strategic decision has long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and location of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with unknown demand function which is suitable with the real-world problems. To consider the uncertainty, a set of possible scenarios for customer demands is created based on the Monte Carlo simulation. The coefficient of variation of costs is mentioned as a measure of risk and the most stable structure for firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms and 14% reduction in total supply chain costs is the outcome. Moreover, it imposes the least cost variation created by fluctuation in customer demands (such as epidemic diseases outbreak in some areas of the country) to the logistical system. It is noteworthy that this research is done in one of the largest pharmaceutical distribution firms in Iran.
NASA Astrophysics Data System (ADS)
Izadi, Arman; Kimiagari, Ali Mohammad
2014-05-01
Distribution network design as a strategic decision has long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and location of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with unknown demand function which is suitable with the real-world problems. To consider the uncertainty, a set of possible scenarios for customer demands is created based on the Monte Carlo simulation. The coefficient of variation of costs is mentioned as a measure of risk and the most stable structure for firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms and 14 % reduction in total supply chain costs is the outcome. Moreover, it imposes the least cost variation created by fluctuation in customer demands (such as epidemic diseases outbreak in some areas of the country) to the logistical system. It is noteworthy that this research is done in one of the largest pharmaceutical distribution firms in Iran.
A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania
NASA Astrophysics Data System (ADS)
Klima, K.; Abrahams, L.; Bradford, K.; Hegglin, M.
2015-12-01
With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at-risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare datasets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.
Optimization of locations of diffusion spots in indoor optical wireless local area networks
NASA Astrophysics Data System (ADS)
Eltokhey, Mahmoud W.; Mahmoud, K. R.; Ghassemlooy, Zabih; Obayya, Salah S. A.
2018-03-01
In this paper, we present a novel optimization of the locations of the diffusion spots in indoor optical wireless local area networks, based on the central force optimization (CFO) scheme. The users' performance uniformity is addressed by using the CFO algorithm and adopting different objective function configurations, while considering maximization of the signal-to-noise ratio and minimization of the delay spread. We also investigate the effect of varying the objective function weights on the system and the users' performance as part of the adaptation process. The results show that the proposed objective-function-configuration-based optimization procedure offers an improvement of 65% in the standard deviation of individual receivers' performance.
Optimized Non-Obstructive Particle Damping (NOPD) Treatment for Composite Honeycomb Structures
NASA Technical Reports Server (NTRS)
Panossian, H.
2008-01-01
Non-Obstructive Particle Damping (NOPD) technology is a passive vibration damping approach whereby metallic or non-metallic particles in spherical or irregular shapes, of heavy or light consistency, and even liquid particles are placed inside cavities or attached to structures by an appropriate means at strategic locations, to absorb vibration energy. The objective of the work described herein is the development of a design optimization procedure, and discussion of test results, for such a NOPD treatment on honeycomb (HC) composite structures, based on finite element modeling (FEM) analyses, optimization and tests. Modeling and predictions were performed and tests were carried out to correlate the test data with the FEM. The optimization procedure consisted of defining a global objective function, using finite difference methods, to determine the optimal values of the design variables through quadratic linear programming. The optimization process was carried out by targeting the highest dynamic displacements of several vibration modes of the structure and finding an optimal treatment configuration that would minimize them. An optimal design was thus derived and laboratory tests were conducted to evaluate its performance under different vibration environments. Three honeycomb composite beams with Nomex core and aluminum face sheets - empty (untreated), uniformly treated with NOPD, and optimally treated with NOPD according to the analytically predicted optimal design configuration - were tested in the laboratory. It is shown that the beam with the optimal treatment has the lowest response amplitude. Results are presented from modal vibration tests and FEM predictions of the modal characteristics of honeycomb beams with no treatment, 50% uniform NOPD treatment, and the optimal NOPD treatment design configuration, together with verification against test data.
Continuum topology optimization considering uncertainties in load locations based on the cloud model
NASA Astrophysics Data System (ADS)
Liu, Jie; Wen, Guilin
2018-06-01
Few researchers have paid attention to designing structures in consideration of uncertainties in the loading locations, which may significantly influence the structural performance. In this work, cloud models are employed to depict the uncertainties in the loading locations. A robust algorithm is developed in the context of minimizing the expectation of the structural compliance, while conforming to a material volume constraint. To guarantee optimal solutions, sufficient cloud drops are used, which in turn leads to low efficiency. An innovative strategy is then implemented to enormously improve the computational efficiency. A modified soft-kill bi-directional evolutionary structural optimization method using derived sensitivity numbers is used to output the robust novel configurations. Several numerical examples are presented to demonstrate the effectiveness and efficiency of the proposed algorithm.
Runway exit designs for capacity improvement demonstrations. Phase 2: Computer model development
NASA Technical Reports Server (NTRS)
Trani, A. A.; Hobeika, A. G.; Kim, B. J.; Nunna, V.; Zhong, C.
1992-01-01
The development is described of a computer simulation/optimization model to: (1) estimate the optimal locations of existing and proposed runway turnoffs; and (2) estimate the geometric design requirements associated with newly developed high speed turnoffs. The model described, named REDIM 2.0, represents a stand alone application to be used by airport planners, designers, and researchers alike to estimate optimal turnoff locations. The main procedures are described in detail which are implemented in the software package and possible applications are illustrated when using 6 major runway scenarios. The main output of the computer program is the estimation of the weighted average runway occupancy time for a user defined aircraft population. Also, the location and geometric characteristics of each turnoff are provided to the user.
Optimization of Glioblastoma Mouse Orthotopic Xenograft Models for Translational Research.
Irtenkauf, Susan M; Sobiechowski, Susan; Hasselbach, Laura A; Nelson, Kevin K; Transou, Andrea D; Carlton, Enoch T; Mikkelsen, Tom; deCarvalho, Ana C
2017-08-01
Glioblastoma is an aggressive primary brain tumor predominantly localized to the cerebral cortex. We developed a panel of patient-derived mouse orthotopic xenografts (PDOX) for preclinical drug studies by implanting cancer stem cells (CSC) cultured from fresh surgical specimens intracranially into 8-wk-old female athymic nude mice. Here we optimize the glioblastoma PDOX model by assessing the effect of implantation location on tumor growth, survival, and histologic characteristics. To trace the distribution of intracranial injections, toluidine blue dye was injected at 4 locations with defined mediolateral, anterioposterior, and dorsoventral coordinates within the cerebral cortex. Glioblastoma CSC from 4 patients and a glioblastoma nonstem-cell line were then implanted by using the same coordinates for evaluation of tumor location, growth rate, and morphologic and histologic features. Dye injections into one of the defined locations resulted in dye dissemination throughout the ventricles, whereas tumor cell implantation at the same location resulted in a much higher percentage of small multifocal ventricular tumors than did the other 3 locations tested. Ventricular tumors were associated with a lower tumor growth rate, as measured by in vivo bioluminescence imaging, and decreased survival in 4 of 5 cell lines. In addition, tissue oxygenation, vasculature, and the expression of astrocytic markers were altered in ventricular tumors compared with nonventricular tumors. Based on this information, we identified an optimal implantation location that avoided the ventricles and favored cortical tumor growth. To assess the effects of stress from oral drug administration, mice that underwent daily gavage were compared with stress-positive and -negative control groups. Oral gavage procedures did not significantly affect the survival of the implanted mice or physiologic measurements of stress. Our findings document the importance of optimization of the implantation site for preclinical mouse models of glioblastoma.
Optimization of Glioblastoma Mouse Orthotopic Xenograft Models for Translational Research
Irtenkauf, Susan M; Sobiechowski, Susan; Hasselbach, Laura A; Nelson, Kevin K; Transou, Andrea D; Carlton, Enoch T; Mikkelsen, Tom; deCarvalho, Ana C
2017-01-01
Glioblastoma is an aggressive primary brain tumor predominantly localized to the cerebral cortex. We developed a panel of patient-derived mouse orthotopic xenografts (PDOX) for preclinical drug studies by implanting cancer stem cells (CSC) cultured from fresh surgical specimens intracranially into 8-wk-old female athymic nude mice. Here we optimize the glioblastoma PDOX model by assessing the effect of implantation location on tumor growth, survival, and histologic characteristics. To trace the distribution of intracranial injections, toluidine blue dye was injected at 4 locations with defined mediolateral, anterioposterior, and dorsoventral coordinates within the cerebral cortex. Glioblastoma CSC from 4 patients and a glioblastoma nonstem-cell line were then implanted by using the same coordinates for evaluation of tumor location, growth rate, and morphologic and histologic features. Dye injections into one of the defined locations resulted in dye dissemination throughout the ventricles, whereas tumor cell implantation at the same location resulted in a much higher percentage of small multifocal ventricular tumors than did the other 3 locations tested. Ventricular tumors were associated with a lower tumor growth rate, as measured by in vivo bioluminescence imaging, and decreased survival in 4 of 5 cell lines. In addition, tissue oxygenation, vasculature, and the expression of astrocytic markers were altered in ventricular tumors compared with nonventricular tumors. Based on this information, we identified an optimal implantation location that avoided the ventricles and favored cortical tumor growth. To assess the effects of stress from oral drug administration, mice that underwent daily gavage were compared with stress-positive and ‑negative control groups. Oral gavage procedures did not significantly affect the survival of the implanted mice or physiologic measurements of stress. Our findings document the importance of optimization of the implantation site for preclinical mouse models of glioblastoma. PMID:28830577
Mathematical model of the metal mould surface temperature optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mlynek, Jaroslav, E-mail: jaroslav.mlynek@tul.cz; Knobloch, Roman, E-mail: roman.knobloch@tul.cz; Srb, Radek, E-mail: radek.srb@tul.cz
2015-11-30
The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of infrared heaters over the mould, so that approximately uniform heat radiation intensity is generated. A version of the differential evolution algorithm programmed in the Matlab development environment was created by the authors for the optimization process. For temperature calculations, the ANSYS software system was used. A practical example of optimization of heater locations and calculation of the temperature of the mould is included at the end of the article.
Erfani, Seyed Mohammad Hassan; Danesh, Shahnaz; Karrabi, Seyed Mohsen; Shad, Rouzbeh
2017-07-01
One of the major challenges in big cities is planning and implementation of an optimized, integrated solid waste management system. This optimization is crucial if environmental problems are to be prevented and expenses are to be reduced. A solid waste management system consists of many stages, including collection, transfer and disposal. In this research, an integrated model was proposed and used to optimize two functional elements of municipal solid waste management (storage and collection systems) in the Ahmadabad neighbourhood located in the City of Mashhad - Iran. The integrated model was implemented by modelling and solving the location allocation problem and the capacitated vehicle routing problem (CVRP) through Geographic Information Systems (GIS). The results showed that the current collection system is not efficient owing to its incompatibility with the existing urban structure and population distribution. Application of the proposed model could significantly improve the storage and collection system. Based on the results of the facility-minimization analyses, scenarios with 100, 150 and 180 m walking distances were considered to find optimal bin locations for Alamdasht, C-metri and Koohsangi. The total number of daily collection tours was reduced to seven as compared to the eight tours carried out in the current system (12.50% reduction). In addition, the total number of required crews was minimized and reduced by 41.70% (24 crews in the current collection system vs 14 in the system provided by the model). The total collection vehicle routing was also optimized such that the total travelled distance during night and day working shifts was cut by 53%.
Rantner, Lukas J; Vadakkumpadan, Fijoy; Spevak, Philip J; Crosson, Jane E; Trayanova, Natalia A
2013-01-01
There is currently no reliable way of predicting the optimal implantable cardioverter-defibrillator (ICD) placement in paediatric and congenital heart defect (CHD) patients. This study aimed to: (1) develop a new image processing pipeline for constructing patient-specific heart–torso models from clinical magnetic resonance images (MRIs); (2) use the pipeline to determine the optimal ICD configuration in a paediatric tricuspid valve atresia patient; (3) establish whether the widely used criterion of shock-induced extracellular potential (Φe) gradients ≥5 V cm⁻¹ in ≥95% of ventricular volume predicts defibrillation success. A biophysically detailed heart–torso model was generated from patient MRIs. Because transvenous access was impossible, three subcutaneous and three epicardial lead placement sites were identified along with five ICD scan locations. Ventricular fibrillation was induced, and defibrillation shocks were applied from 11 ICD configurations to determine defibrillation thresholds (DFTs). Two configurations with epicardial leads resulted in the lowest DFTs overall and were thus considered optimal. Three configurations shared the lowest DFT among subcutaneous lead ICDs. The Φe gradient criterion was an inadequate predictor of defibrillation success, as defibrillation failed in numerous instances even when 100% of the myocardium experienced such gradients. In conclusion, we have developed a new image processing pipeline and applied it to a CHD patient to construct the first active heart–torso model from clinical MRIs. PMID:23798492
Miéville, Frédéric A; Bolard, Gregory; Bulling, Shelley; Gudinchet, François; Bochud, François O; Verdun, François R
2013-11-01
The goal of this study was to investigate the impact of computing parameters and the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI, and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (phantom radius) than in the z-direction. A 25 × 25 × 40 mm³ VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This shows that the VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
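A compact numpy sketch of the estimation procedure described above (first-order detrending of each VOI, 3D FFT, and averaging of the squared magnitude over VOIs); the normalization follows a common NPS definition and is not necessarily the authors' exact implementation.

import numpy as np

def nps_3d(vois, voxel_size):
    """Estimate the 3D noise power spectrum from a stack of noise-only VOIs.

    vois: array of shape (n_voi, Lx, Ly, Lz); voxel_size: (bx, by, bz) in mm.
    Each VOI is detrended with a first-order (affine) fit before the FFT, as a
    simple way to suppress low-frequency structured noise."""
    n_voi, Lx, Ly, Lz = vois.shape
    bx, by, bz = voxel_size

    # First-order detrend: fit a + b*x + c*y + d*z to each VOI and subtract it.
    x, y, z = np.meshgrid(np.arange(Lx), np.arange(Ly), np.arange(Lz), indexing="ij")
    A = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(), z.ravel()])
    nps = np.zeros((Lx, Ly, Lz))
    for voi in vois:
        coef, *_ = np.linalg.lstsq(A, voi.ravel(), rcond=None)
        residual = voi - (A @ coef).reshape(Lx, Ly, Lz)
        nps += np.abs(np.fft.fftn(residual)) ** 2
    nps *= (bx * by * bz) / (Lx * Ly * Lz * n_voi)  # common NPS normalization
    return np.fft.fftshift(nps)

# Hypothetical white-noise test: 20 VOIs of 64x64x64 voxels, 0.391 mm pixels, 0.5 mm slices.
rng = np.random.default_rng(0)
vois = rng.normal(0.0, 10.0, size=(20, 64, 64, 64))
nps = nps_3d(vois, (0.391, 0.391, 0.5))
print(nps.mean())  # for white noise this should be close to sigma^2 * voxel volume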
NASA Astrophysics Data System (ADS)
Chou, Tien-Yin; Lin, Wen-Tzu; Lin, Chao-Yuan; Chou, Wen-Chieh; Huang, Pi-Hui
2004-02-01
With the fast-growing progress of computer technologies, spatial information on watersheds such as flow direction, watershed boundaries and the drainage network can be automatically calculated or extracted from a digital elevation model (DEM). The stubborn problem of depressions in DEMs is frequently encountered while extracting spatial information from terrain. Several filling-up methods have been proposed for resolving depressions; however, their suitability for large-scale flat areas is inadequate. This study proposes a depression watershed method coupled with the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE) theory to determine the optimal outlet and calculate the flow direction in depressions. Three processing procedures are used to derive the depressionless flow direction: (1) calculating the incipient flow direction; (2) establishing the depression watershed by tracing the upstream drainage area and determining the depression outlet using PROMETHEE theory; (3) calculating the depressionless flow direction. The developed method was used to delineate the Shihmen Reservoir watershed located in Northern Taiwan. The results show that the depression watershed method can effectively resolve shortcomings such as differentiating depression outlets and looped flow directions between depressions. The suitability of the proposed approach was verified.
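A minimal PROMETHEE II sketch for ranking candidate depression outlet cells; the criteria (elevation, upstream area, distance to the depression centre), weights, and linear preference function are illustrative assumptions rather than the settings used in the study.

import numpy as np

def promethee_ii(scores, weights, maximize, p=None):
    """Rank alternatives by PROMETHEE II net outranking flow.

    scores: (n_alternatives, n_criteria) matrix; weights: criterion weights;
    maximize: bool per criterion; p: preference thresholds for a linear
    preference function (defaults to each criterion's score range)."""
    scores = np.asarray(scores, float)
    n, m = scores.shape
    weights = np.asarray(weights, float) / np.sum(weights)
    signed = np.where(maximize, scores, -scores)       # turn every criterion into "larger is better"
    if p is None:
        p = signed.max(axis=0) - signed.min(axis=0)
        p[p == 0] = 1.0
    # Pairwise preference degrees, linear preference function capped at 1.
    d = signed[:, None, :] - signed[None, :, :]        # (n, n, m) pairwise differences
    pref = np.clip(d / p, 0.0, 1.0)
    pi = (pref * weights).sum(axis=2)                  # aggregated preference pi(a, b)
    phi_plus = pi.sum(axis=1) / (n - 1)
    phi_minus = pi.sum(axis=0) / (n - 1)
    return phi_plus - phi_minus                        # net flow: higher = better outlet

# Hypothetical candidate outlet cells: [elevation (minimize), upstream area (maximize),
# distance to depression centre (minimize)].
candidates = [[101.2, 540, 35],
              [100.8, 380, 60],
              [101.0, 910, 80]]
net_flow = promethee_ii(candidates, weights=[0.5, 0.3, 0.2],
                        maximize=[False, True, False])
print(net_flow, "-> best outlet:", int(np.argmax(net_flow)))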
Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods
Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.
2013-01-01
Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822
Optimal filter parameters for low SNR seismograms as a function of station and event location
NASA Astrophysics Data System (ADS)
Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.
1999-06-01
Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few useable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low SNR regional or teleseismic events. As seismic signals are often localized in frequency and time with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δf constant (constant Q). The SNR is calculated on the pre-event noise and signal window. The band-pass signals with high SNR are used to indicate the cutoff filter limits for the optimized filter. Results indicate a significant improvement in SNR, particularly for low SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.
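A compact sketch of the band-selection idea: decompose the record into constant-Q bands, compute each band's SNR from the pre-event noise and signal windows, and keep the contiguous bands exceeding an SNR threshold as the passband limits. Butterworth band-pass filters are used here as a stand-in for the paper's subsignal decomposition, and the synthetic trace is purely illustrative.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def constant_q_bands(fmin, fmax, q=4.0):
    """Generate band edges with constant relative bandwidth f / df = q."""
    bands, f = [], fmin
    while f < fmax:
        hi = min(f * (1 + 1 / q), fmax)
        bands.append((f, hi))
        f = hi
    return bands

def optimal_band(trace, fs, t_onset, q=4.0, snr_min=3.0):
    """Return (f_low, f_high) spanning the constant-Q bands with SNR >= snr_min."""
    i_onset = int(t_onset * fs)
    noise, signal = trace[:i_onset], trace[i_onset:]
    keep = []
    for lo, hi in constant_q_bands(0.5, 0.4 * fs, q):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        n_rms = np.sqrt(np.mean(sosfiltfilt(sos, noise) ** 2))
        s_rms = np.sqrt(np.mean(sosfiltfilt(sos, signal) ** 2))
        if n_rms > 0 and s_rms / n_rms >= snr_min:
            keep.append((lo, hi))
    return (keep[0][0], keep[-1][1]) if keep else None

# Synthetic test: 5 Hz wavelet arriving after t = 20 s in white noise, 40 Hz sampling.
fs = 40.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)
trace = 0.1 * rng.standard_normal(t.size)
trace += np.exp(-0.5 * ((t - 25) / 2) ** 2) * np.sin(2 * np.pi * 5 * (t - 25))
print(optimal_band(trace, fs, t_onset=20.0))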
NASA Astrophysics Data System (ADS)
Kikuchi, C.; Ferre, P. A.; Vrugt, J. A.
2011-12-01
Hydrologic models are developed, tested, and refined based on the ability of those models to explain available hydrologic data. The optimization of model performance based upon mismatch between model outputs and real world observations has been extensively studied. However, identification of plausible models is sensitive not only to the models themselves - including model structure and model parameters - but also to the location, timing, type, and number of observations used in model calibration. Therefore, careful selection of hydrologic observations has the potential to significantly improve the performance of hydrologic models. In this research, we seek to reduce prediction uncertainty through optimization of the data collection process. A new tool - multiple model analysis with discriminatory data collection (MMA-DDC) - was developed to address this challenge. In this approach, multiple hydrologic models are developed and treated as competing hypotheses. Potential new data are then evaluated on their ability to discriminate between competing hypotheses. MMA-DDC is well-suited for use in recursive mode, in which new observations are continuously used in the optimization of subsequent observations. This new approach was applied to a synthetic solute transport experiment, in which ranges of parameter values constitute the multiple hydrologic models, and model predictions are calculated using likelihood-weighted model averaging. MMA-DDC was used to determine the optimal location, timing, number, and type of new observations. From comparison with an exhaustive search of all possible observation sequences, we find that MMA-DDC consistently selects observations which lead to the highest reduction in model prediction uncertainty. We conclude that using MMA-DDC to evaluate potential observations may significantly improve the performance of hydrologic models while reducing the cost associated with collecting new data.
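The core of the discriminatory-data idea can be sketched briefly: treat a small ensemble of parameter sets as competing models, weight them by current likelihoods, and score each candidate observation by how strongly the weighted model predictions disagree there. The 1-D forward model, parameter sets, and likelihood weights below are hypothetical; MMA-DDC as described by the authors would also update the weights recursively as each new observation arrives.

```python
import numpy as np

# Hypothetical 1-D transport "models": each model is a parameter set
# (velocity v, dispersivity a); predictions are concentrations at (x, t).
def predict(v, a, x, t):
    # simple Gaussian pulse as a stand-in forward model
    return np.exp(-(x - v * t) ** 2 / (4 * a * v * t + 1e-9))

models = [(0.8, 0.05), (1.0, 0.05), (1.2, 0.10)]   # competing hypotheses
weights = np.array([0.3, 0.4, 0.3])                # current likelihood weights

def discrimination_score(x, t):
    """Likelihood-weighted spread of model predictions at a candidate observation.
    A larger spread means the measurement better discriminates between models."""
    preds = np.array([predict(v, a, x, t) for v, a in models])
    mean = weights @ preds
    return weights @ (preds - mean) ** 2

# Rank candidate observation locations/times and pick the most discriminatory one.
candidates = [(x, t) for x in np.linspace(1, 10, 10) for t in (2.0, 5.0, 8.0)]
best = max(candidates, key=lambda c: discrimination_score(*c))
print("most discriminatory observation (x, t):", best)
```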
Optimising reef-scale CO2 removal by seaweed to buffer ocean acidification
NASA Astrophysics Data System (ADS)
Mongin, Mathieu; Baird, Mark E.; Hadley, Scott; Lenton, Andrew
2016-03-01
The equilibration of rising atmospheric CO₂ with the ocean is lowering pH in tropical waters by about 0.01 every decade. Coral reefs and the ecosystems they support are regarded as one of the most vulnerable ecosystems to ocean acidification, threatening their long-term viability. In response to this threat, different strategies for buffering the impact of ocean acidification have been proposed. As the pH experienced by individual corals on a natural reef system depends on many processes over different time scales, the efficacy of these buffering strategies remains largely unknown. Here we assess the feasibility and potential efficacy of a reef-scale (a few kilometers) carbon removal strategy, through the addition of seaweed (fleshy multicellular algae) farms within the Great Barrier Reef at the Heron Island reef. First, using diagnostic time-dependent age tracers in a hydrodynamic model, we determine the optimal location and size of the seaweed farm. Secondly, we analytically calculate the optimal density of the seaweed and harvesting strategy, finding that, for the seaweed growth parameters used, a biomass of 42 g N m⁻² with a harvesting rate of up to 3.2 g N m⁻² d⁻¹ maximises the carbon sequestration and removal. Numerical experiments show that an optimally located 1.9 km² farm and optimally harvested seaweed (removing biomass above 42 g N m⁻² every 7 d) increased aragonite saturation by 0.1 over 24 km² of the Heron Island reef. Thus, the most effective seaweed farm can only delay the impacts of global ocean acidification at the reef scale by 7-21 years, depending on future global carbon emissions. Our results highlight that only a kilometer-scale farm can partially mitigate global ocean acidification for a particular reef.
NASA Astrophysics Data System (ADS)
O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.
2013-01-01
We describe a method for determining an optimal centroid-moment tensor solution of an earthquake from a set of static displacements measured using a network of Global Positioning System receivers. Using static displacements observed after the 4 April 2010, MW 7.2 El Mayor-Cucapah, Mexico, earthquake, we perform an iterative inversion to obtain the source mechanism and location, which minimize the least-squares difference between data and synthetics. The efficiency of our algorithm for forward modeling static displacements in a layered elastic medium allows the inversion to be performed in real-time on a single processor without the need for precomputed libraries of excitation kernels; we present simulated real-time results for the El Mayor-Cucapah earthquake. The only a priori information that our inversion scheme needs is a crustal model and approximate source location, so the method proposed here may represent an improvement on existing early warning approaches that rely on foreknowledge of fault locations and geometries.
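The inversion described here iterates a linearized least-squares update of the source parameters until the misfit to the observed static offsets stops decreasing. The snippet below is a generic Gauss-Newton sketch of that kind of iteration with a toy point-source forward model; it is not the authors' layered-elastic-medium code, and the station geometry, distance-decay law, and three-parameter model (x, y, strength) are invented for illustration.

```python
import numpy as np

def gauss_newton(forward, m0, d_obs, n_iter=10, eps=1e-6):
    """Generic iterative least-squares inversion (Gauss-Newton style).

    forward : function mapping model vector m -> predicted data vector
    m0      : starting model (e.g. approximate location and mechanism)
    d_obs   : observed data (e.g. GPS static offsets)
    """
    m = np.asarray(m0, dtype=float)
    for _ in range(n_iter):
        d_pred = forward(m)
        r = d_obs - d_pred
        # finite-difference Jacobian of the forward model
        J = np.column_stack([(forward(m + eps * e) - d_pred) / eps
                             for e in np.eye(m.size)])
        dm, *_ = np.linalg.lstsq(J, r, rcond=None)
        m = m + dm
    return m

# Hypothetical toy forward model: displacement decaying with distance from a
# point source at (x, y) with strength s, observed at fixed station positions.
stations = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 2]], dtype=float)
def forward(m):
    x, y, s = m
    rdist = np.hypot(stations[:, 0] - x, stations[:, 1] - y) + 1.0
    return s / rdist ** 2

true_m = np.array([6.0, 4.0, 50.0])
d_obs = forward(true_m)
print(np.round(gauss_newton(forward, [5.0, 5.0, 10.0], d_obs), 3))
```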
NASA Technical Reports Server (NTRS)
Turso, James; Lawrence, Charles; Litt, Jonathan
2004-01-01
The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.
NASA Technical Reports Server (NTRS)
Turso, James A.; Lawrence, Charles; Litt, Jonathan S.
2007-01-01
The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite-element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.
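As a rough illustration of the wavelet feature itself (not the authors' ROM-based study), the sketch below computes the finest-scale detail-coefficient energy of a noisy accelerometer-like signal with a short injected transient; the peak of that energy localizes the event in time. The wavelet family, decomposition level, and synthetic signal are assumptions.

```python
import numpy as np
import pywt   # PyWavelets

def wavelet_feature(accel, wavelet="db4", level=4):
    """Detail-band energy feature for flagging an abrupt (FOD-like) event in an
    accelerometer signal: squared detail coefficients at the finest level,
    which localize short transients in time."""
    coeffs = pywt.wavedec(accel, wavelet, level=level)
    d1 = coeffs[-1]                      # finest-scale detail coefficients
    return d1 ** 2

# Hypothetical signal: low-frequency vibration plus noise, with a short
# transient (the simulated FOD event) injected at sample 5000.
rng = np.random.default_rng(1)
n = 10000
sig = 0.2 * np.sin(2 * np.pi * 0.01 * np.arange(n)) + 0.1 * rng.standard_normal(n)
sig[5000:5020] += 1.5 * rng.standard_normal(20)

feat = wavelet_feature(sig)
# Each level-1 detail coefficient corresponds to roughly two signal samples.
print("event located near sample:", int(np.argmax(feat)) * 2)
```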
Location, location, location: finding a suitable home among the noise
Stanley, Jenni A.; Radford, Craig A.; Jeffs, Andrew G.
2012-01-01
While sound is a useful cue for guiding the onshore orientation of larvae because it travels long distances underwater, it also has the potential to convey valuable information about the quality and type of the habitat at the source. Here, we provide, to our knowledge, the first evidence that settlement-stage coastal crab species can interpret and show a strong settlement and metamorphosis response to habitat-related differences in natural underwater sound. Laboratory- and field-based experiments demonstrated that time to metamorphosis in the settlement-stage larvae of common coastal crab species varied in response to different underwater sound signatures produced by different habitat types. The megalopae of five species of both temperate and tropical crabs showed a significant decrease in time to metamorphosis, when exposed to sound from their optimal settlement habitat type compared with other habitat types. These results indicate that sounds emanating from specific underwater habitats may play a major role in determining spatial patterns of recruitment in coastal crab species. PMID:22673354
McDermott, Gerry; Le Gros, Mark A.; Larabell, Carolyn A.
2012-01-01
Living cells are structured to create a range of microenvironments that support specific chemical reactions and processes. Understanding how cells function therefore requires detailed knowledge of both the subcellular architecture and the location of specific molecules within this framework. Here we review the development of two correlated cellular imaging techniques that fulfill this need. Cells are first imaged using cryogenic fluorescence microscopy to determine the location of molecules of interest that have been labeled with fluorescent tags. The same specimen is then imaged using soft X-ray tomography to generate a high-contrast, 3D reconstruction of the cells. Data from the two modalities are then combined to produce a composite, information-rich view of the cell. This correlated imaging approach can be applied across the spectrum of problems encountered in cell biology, from basic research to biotechnological and biomedical applications such as the optimization of biofuels and the development of new pharmaceuticals. PMID:22242730
Wu, Jianfa; Peng, Dahao; Ma, Jianhao; Zhao, Li; Sun, Ce; Ling, Huanzhang
2015-01-01
To effectively monitor the atmospheric quality of small-scale areas, it is necessary to optimize the locations of the monitoring sites. This study combined geographic parameter extraction by GIS with fuzzy matter-element analysis. Geographic coordinates were extracted by GIS and transformed into rectangular coordinates. These coordinates were input into the Gaussian plume model to calculate the pollutant concentration at each site. Fuzzy matter-element analysis, which is used to solve incompatible problems, was used to select the locations of sites. The matter-element matrices were established according to the concentration parameters. The comprehensive correlation functions KA(xj) and KB(xj), which reflect the degree of correlation among monitoring indices, were solved for each site, and a scatter diagram of the sites was drawn to determine the final positions of the sites based on the functions. The sites could be classified and ultimately selected by the scatter diagram. An actual case was tested, and the results showed that 5 positions can be used for monitoring, and the locations conformed to the technical standard. In addition, the hierarchical clustering method was applied to improve the approach. The sites were classified into 5 types, and 7 locations were selected. Five of the 7 locations were completely identical to the sites determined by fuzzy matter-element analysis. The selections according to these two methods are similar, and these methods can be used in combination. In contrast to traditional methods, this study monitors the isolated point pollutant source within a small range, which can reduce the cost of monitoring.
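The concentration inputs to the matter-element matrices come from a Gaussian plume calculation at each candidate site. Below is a minimal, textbook-style Gaussian plume sketch (with ground reflection); the emission rate, wind speed, stack height, and dispersion parameters are hypothetical, and the dispersion coefficients are passed in directly rather than derived from an atmospheric stability class as a full implementation would do.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume concentration at a receptor (g/m^3).

    q : emission rate (g/s), u : wind speed (m/s), h : effective stack height (m)
    y : crosswind and z : vertical receptor coordinates (m)
    sigma_y, sigma_z : dispersion parameters at the receptor's downwind distance (m)
    """
    lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (np.exp(-(z - h) ** 2 / (2 * sigma_z ** 2)) +
                np.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))   # ground reflection term
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical candidate monitoring site 50 m off the plume axis, 1.5 m height:
print(gaussian_plume(q=120.0, u=3.0, y=50.0, z=1.5,
                     h=30.0, sigma_y=60.0, sigma_z=35.0))
```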
Development and evaluation of modified envelope correlation method for deep tectonic tremor
NASA Astrophysics Data System (ADS)
Mizuno, N.; Ide, S.
2017-12-01
We develop a new location method for deep tectonic tremors, as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using the cross-correlation functions as objective functions and weighting components of data by the inverse of the error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. This method is also capable of multiple source detection, because when several events occur almost simultaneously, they appear as local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is the position of a deep tectonic tremor. The optimization method has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as an initial value, we apply a gradient method to determine the horizontal and vertical components of a hypocenter. Sometimes, several source locations are determined in a time window of 5 minutes. We estimate that the resolution, defined as the distance at which sources can be detected separately by the location method, is about 100 km. The validity of this estimate is confirmed by a numerical test using synthetic waveforms. Applied to continuous seismograms in western Japan spanning over 10 years, the new method detected 27% more tremors than the previous method, owing to the multiple-source detection and the improved accuracy from the appropriate weighting scheme.
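A compact sketch of the two-step search on a synthetic example: an ACC-like objective (the average normalized cross-correlation of station envelopes after removing the travel-time shifts predicted for a candidate source) is evaluated on a coarse epicentre grid, and the best node seeds a local refinement. The station geometry, constant S-wave speed, fixed depth (epicentre-only search), synthetic envelopes, equal weights, and use of Nelder-Mead instead of the authors' gradient method are all simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: station coordinates (km), envelope sampling rate, and a
# constant S-wave speed used to predict differential travel times.
stations = np.array([[0, 0], [40, 0], [0, 40], [40, 40]], dtype=float)
fs, vs = 20.0, 3.5          # Hz, km/s
rng = np.random.default_rng(2)

# Synthetic envelopes: a smooth tremor burst delayed by the travel time from a
# "true" source at (25, 15) km, plus noise.
t = np.arange(0, 120, 1 / fs)
true_src = np.array([25.0, 15.0])
burst = np.exp(-0.5 * ((t - 60) / 5.0) ** 2)
envs = np.array([np.interp(t - np.linalg.norm(s - true_src) / vs, t, burst)
                 + 0.05 * rng.random(t.size) for s in stations])

def acc(src):
    """Average cross-correlation of station pairs after shifting each envelope
    by the travel time predicted for a candidate epicentre `src`."""
    shifted = [np.interp(t + np.linalg.norm(s - src) / vs, t, e)
               for s, e in zip(stations, envs)]
    shifted = np.array([(x - x.mean()) / (x.std() + 1e-12) for x in shifted])
    n = len(shifted)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return np.mean([np.mean(shifted[i] * shifted[j]) for i, j in pairs])

# Step 1: coarse grid search for candidate maxima of ACC.
grid = [(x, y) for x in np.arange(0, 41, 5.0) for y in np.arange(0, 41, 5.0)]
x0 = max(grid, key=acc)
# Step 2: local refinement from the best grid node (gradient-free here).
best = minimize(lambda p: -acc(p), np.array(x0), method="Nelder-Mead").x
print("estimated epicentre (km):", np.round(best, 1))
```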
An FBG acoustic emission source locating system based on PHAT and GA
NASA Astrophysics Data System (ADS)
Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun
2017-09-01
Using acoustic emission locating technology to monitor structural health is important for ensuring the continuous, healthy operation of complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish a sensor array to locate the acoustic emission source. Firstly, the nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, a time-difference extraction algorithm based on phase transform (PHAT)-weighted generalized cross-correlation provides the necessary conditions for accurate localization. Finally, the genetic algorithm (GA) is used to solve the optimization model. In this paper, twenty points are tested on the marble plate surface, and the results show that the absolute locating error is within 10 mm, which demonstrates the accuracy of this locating method.
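The time-difference step can be sketched on its own: the generalized cross-correlation with PHAT weighting whitens the cross spectrum so that only phase (delay) information drives the correlation peak. The following snippet is a standard GCC-PHAT estimator applied to a synthetic pulse pair; the sampling rate, pulse shape, and 0.8 ms delay are hypothetical, and the GA-based source localization that would consume these delays is not shown.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Time-difference-of-arrival estimate via PHAT-weighted generalized
    cross-correlation: whiten the cross spectrum, keep only phase, and take
    the lag of the correlation peak."""
    n = sig.size + ref.size
    X = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    X /= np.abs(X) + 1e-15                 # PHAT weighting (phase transform)
    cc = np.fft.irfft(X, n=n)
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Hypothetical check: a Gaussian pulse received 0.8 ms later on the second sensor.
fs = 100_000.0
t = np.arange(0, 0.01, 1 / fs)
pulse = np.exp(-0.5 * ((t - 0.003) / 0.0002) ** 2)
delayed = np.interp(t - 0.0008, t, pulse)
print("estimated delay (ms):", 1e3 * gcc_phat(delayed, pulse, fs))
```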
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.
Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH₂FCF₃, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.
Access to specialist care: Optimizing the geographic configuration of trauma systems
Jansen, Jan O.; Morrison, Jonathan J.; Wang, Handing; He, Shan; Lawrenson, Robin; Hutchison, James D.; Campbell, Marion K.
2015-01-01
BACKGROUND The optimal geographic configuration of health care systems is key to maximizing accessibility while promoting the efficient use of resources. This article reports the use of a novel approach to inform the optimal configuration of a national trauma system. METHODS This is a prospective cohort study of all trauma patients, 15 years and older, attended to by the Scottish Ambulance Service, between July 1, 2013, and June 30, 2014. Patients underwent notional triage to one of three levels of care (major trauma center [MTC], trauma unit, or local emergency hospital). We used geographic information systems software to calculate access times, by road and air, from all incident locations to all candidate hospitals. We then modeled the performance of all mathematically possible network configurations and used multiobjective optimization to determine geospatially optimized configurations. RESULTS A total of 80,391 casualties were included. A network with only high- or moderate-volume MTCs (admitting at least 650 or 400 severely injured patients per year, respectively) would be optimally configured with a single MTC. A network accepting lower-volume MTCs (at least 240 severely injured patients per year) would be optimally configured with two MTCs. Both configurations would necessitate an increase in the number of helicopter retrievals. CONCLUSION This study has shown that a novel combination of notional triage, network analysis, and mathematical optimization can be used to inform the planning of a national clinical network. Scotland’s trauma system could be optimized with one or two MTCs. LEVEL OF EVIDENCE Care management study, level IV. PMID:26335775
NASA Astrophysics Data System (ADS)
Reynolds, A. M.
2008-04-01
A random Lévy-looping model of searching is devised and optimal random Lévy-looping searching strategies are identified for the location of a single target whose position is uncertain. An inverse-square power law distribution of loop lengths is shown to be optimal when the distance between the centre of the search and the target is much shorter than the size of the longest possible loop in the searching pattern. Optimal random Lévy-looping searching patterns have recently been observed in the flight patterns of honeybees (Apis mellifera) when attempting to locate their hive and when searching after a known food source becomes depleted. It is suggested that the searching patterns of desert ants (Cataglyphis) are consistent with the adoption of an optimal Lévy-looping searching strategy.
Exploring chemical reaction mechanisms through harmonic Fourier beads path optimization.
Khavrutskii, Ilja V; Smith, Jason B; Wallqvist, Anders
2013-10-28
Here, we apply the harmonic Fourier beads (HFB) path optimization method to study chemical reactions involving covalent bond breaking and forming on quantum mechanical (QM) and hybrid QM/molecular mechanical (QM/MM) potential energy surfaces. To improve efficiency of the path optimization on such computationally demanding potentials, we combined HFB with conjugate gradient (CG) optimization. The combined CG-HFB method was used to study two biologically relevant reactions, namely, L- to D-alanine amino acid inversion and alcohol acylation by amides. The optimized paths revealed several unexpected reaction steps in the gas phase. For example, on the B3LYP/6-31G(d,p) potential, we found that alanine inversion proceeded via previously unknown intermediates, 2-iminopropane-1,1-diol and 3-amino-3-methyloxiran-2-ol. The CG-HFB method accurately located transition states, aiding in the interpretation of complex reaction mechanisms. Thus, on the B3LYP/6-31G(d,p) potential, the gas phase activation barriers for the inversion and acylation reactions were 50.5 and 39.9 kcal/mol, respectively. These barriers determine the spontaneous loss of amino acid chirality and cleavage of peptide bonds in proteins. We conclude that the combined CG-HFB method further advances QM and QM/MM studies of reaction mechanisms.
Performance of Optimized Actuator and Sensor Arrays in an Active Noise Control System
NASA Technical Reports Server (NTRS)
Palumbo, D. L.; Padula, S. L.; Lyle, K. H.; Cline, J. H.; Cabell, R. H.
1996-01-01
Experiments have been conducted in NASA Langley's Acoustics and Dynamics Laboratory to determine the effectiveness of optimized actuator/sensor architectures and controller algorithms for active control of harmonic interior noise. Tests were conducted in a large scale fuselage model - a composite cylinder which simulates a commuter class aircraft fuselage with three sections of trim panel and a floor. Using an optimization technique based on the component transfer functions, combinations of 4 out of 8 piezoceramic actuators and 8 out of 462 microphone locations were evaluated against predicted performance. A combinatorial optimization technique called tabu search was employed to select the optimum transducer arrays. Three test frequencies represent the cases of a strong acoustic and strong structural response, a weak acoustic and strong structural response and a strong acoustic and weak structural response. Noise reduction was obtained using a Time Averaged/Gradient Descent (TAGD) controller. Results indicate that the optimization technique successfully predicted best and worst case performance. An enhancement of the TAGD control algorithm was also evaluated. The principal components of the actuator/sensor transfer functions were used in the PC-TAGD controller. The principal components are shown to be independent of each other while providing control as effective as the standard TAGD.
Remote sensing of coal mine pollution in the upper Potomac River basin
NASA Technical Reports Server (NTRS)
1974-01-01
A survey of remote sensing data pertinent to locating and monitoring sources of pollution resulting from surface and shaft mining operations was conducted in order to determine the various methods by which ERTS and aircraft remote sensing data can be used as a replacement for, or a supplement to traditional methods of monitoring coal mine pollution of the upper Potomac Basin. The gathering and analysis of representative samples of the raw and processed data obtained during the survey are described, along with plans to demonstrate and optimize the data collection processes.
Due-Window Assignment Scheduling with Variable Job Processing Times
Wu, Yu-Bin
2015-01-01
We consider a common due-window assignment scheduling problem for jobs with variable processing times on a single machine, where the processing time of a job is a function of its position in a sequence (i.e., learning effect) or of its starting time (i.e., deteriorating effect). The problem is to determine the optimal due-window and the processing sequence simultaneously so as to minimize a cost function that includes earliness, tardiness, window location, window size, and the weighted number of tardy jobs. We prove that the problem can be solved in polynomial time. PMID:25918745
Berthonnaud, E.; Hilmi, R.; Dimnet, J.
2012-01-01
The goal of this paper is to assess pelvis position and morphology in the standing posture and to determine the relative locations of the pelvic articular surfaces. This is achieved by coupling biplanar radiography and bone modeling. The technique involves several successive steps. Punctual landmarks are first reconstructed in space from their projected images, identified on two orthogonal standing X-rays. Geometric models of the global pelvis and articular surfaces are determined from the punctual landmarks. The global pelvis is represented as a triangle whose summits are the two femoral head centers and the sacral plateau center. The two acetabular cavities are modeled as hemispheres. The anterior sacral plateau edge is represented by a hemi-ellipse. The modeled articular surfaces are projected on each X-ray. Their optimal location is obtained when the projected contours of their models best fit the real outlines identified from the landmark images. Linear and angular parameters characterizing the position of the global pelvis and articular surfaces are calculated from the corresponding sets of axes. Relative positions of the sacral plateau and acetabular cavities are then calculated. Two hundred standing pelvises, of subjects and scoliotic patients, have been studied. Examples are presented. They focus upon pelvis orientations, relative positions of articular surfaces, and pelvis asymmetries. PMID:22567279
Magnetic susceptibility and dielectric properties of peat in Central Kalimantan, Indonesia
NASA Astrophysics Data System (ADS)
Budi, Pranitha Septiana; Zulaikah, Siti; Hidayat, Arif; Azzahro, Rosyida
2017-07-01
Peatlands dominate almost all regions of Borneo, yet their utilization has not been developed optimally. Information on these soils can be obtained using soil magnetization methods, in which the magnetic susceptibility value describes the source and type of magnetic minerals. Moreover, the dielectric properties of the peat soil were also investigated to determine the water content from the dielectric constant value. Samples were taken at six different locations from Pulang Pisau to Berengbengkel. The mass magnetic susceptibility at these locations ranged from -0.0009 to 0.712 (×10⁻⁶ m³/kg). Based on the average magnetic susceptibility value, the samples taken from T1, T3 and T5 belong to the paramagnetic mineral type, while the samples taken from T2, T4 and T6 belong to the diamagnetic mineral group. The low magnetic susceptibility of the peat was probably derived from the pedogenic process. The peat soil at the six locations has a large average dielectric constant of 28.2, which indicates considerable moisture content due to the hydrophilic nature of peatland; the ability of peat to bind water is therefore considerably high.
Scheduling policies of intelligent sensors and sensor/actuators in flexible structures
NASA Astrophysics Data System (ADS)
Demetriou, Michael A.; Potami, Raffaele
2006-03-01
In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. The ability of the sensing and actuating devices to possess the property of spatial robustness results in reduced control energy and therefore the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement and the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open loop system and the spatial distribution of disturbances is found that produces the maximal effects on the entire open loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider the collocated actuator/sensor pairs and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and minimize power consumption by keeping some sensor/actuators in sleep mode.
Optimization of a hydrometric network extension using specific flow, kriging and simulated annealing
NASA Astrophysics Data System (ADS)
Chebbi, Afef; Kebaili Bargaoui, Zoubeida; Abid, Nesrine; da Conceição Cunha, Maria
2017-12-01
In hydrometric stations, water levels are continuously observed and discharge rating curves are constantly updated to achieve accurate river level and discharge observations. An adequate spatial distribution of hydrological gauging stations is of great interest for river regime characterization, water infrastructure design, water resources management and ecological surveys. Due to the increase of riverside population and the associated flood risk, hydrological networks constantly need to be developed. This paper suggests taking advantage of kriging approaches to improve the design of a hydrometric network. The context deals with the application of an optimization approach using ordinary kriging and simulated annealing (SA) in order to identify the best locations to install new hydrometric gauges. The task at hand is to extend an existing hydrometric network in order to estimate, at ungauged sites, the average specific annual discharge, which is a key basin descriptor. This methodology is developed for the hydrometric network of the transboundary Medjerda River in the North of Tunisia. A Geographic Information System (GIS) is adopted to delineate basin limits and centroids. The latter are adopted to assign the locations of basins in the kriging development. Scenarios where the size of an existing 12-station network is alternatively increased by 1, 2, 3, 4 and 5 new station(s) are investigated using geo-regression and minimization of the variance of kriging errors. The analysis of the optimized locations from one scenario to another shows perfect consistency in the locations of the new sites. The new locations ensure a better spatial coverage of the study area, as seen in the increase of both the average and the maximum of inter-station distances after optimization. The optimization procedure selects the basins that ensure the shifting of the mean drainage area towards higher specific discharges.
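A minimal sketch of the search loop: the objective is the average kriging variance over a prediction grid (here simple kriging under an assumed exponential covariance rather than the paper's ordinary kriging of specific discharge), and simulated annealing proposes which candidate site to add to the existing network. The coordinates, covariance parameters, single-station extension, and cooling schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def cov(h, sill=1.0, a=40.0):
    """Assumed exponential covariance model (sill and range are hypothetical)."""
    return sill * np.exp(-h / a)

def mean_kriging_variance(stations, grid):
    """Average simple-kriging variance over a prediction grid, used here as the
    network-design objective to be minimized."""
    d_ss = np.linalg.norm(stations[:, None] - stations[None], axis=-1)
    C = cov(d_ss) + 1e-9 * np.eye(len(stations))
    c = cov(np.linalg.norm(grid[:, None] - stations[None], axis=-1))  # (grid, sta)
    w = np.linalg.solve(C, c.T)                      # kriging weights per grid node
    return np.mean(cov(0.0) - np.sum(c * w.T, axis=1))

existing = rng.uniform(0, 100, size=(12, 2))         # existing gauges (km)
candidates = rng.uniform(0, 100, size=(60, 2))       # admissible new sites
grid = np.array([[x, y] for x in np.arange(0, 101, 10)
                 for y in np.arange(0, 101, 10)], dtype=float)

f = lambda i: mean_kriging_variance(np.vstack([existing, candidates[i]]), grid)

# Simulated annealing over which single candidate site to add.
current = int(rng.integers(len(candidates)))
best = current
f_cur = f_best = f(current)
T = 1.0
for _ in range(300):
    prop = int(rng.integers(len(candidates)))
    f_prop = f(prop)
    if f_prop < f_cur or rng.random() < np.exp(-(f_prop - f_cur) / T):
        current, f_cur = prop, f_prop
        if f_cur < f_best:
            best, f_best = current, f_cur
    T *= 0.99                                        # illustrative geometric cooling
print("selected new station:", np.round(candidates[best], 1),
      "mean kriging variance:", round(float(f_best), 4))
```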
Accounting for tourism benefits in marine reserve design.
Viana, Daniel F; Halpern, Benjamin S; Gaines, Steven D
2017-01-01
Marine reserve design often considers potential benefits to conservation and/or fisheries but typically ignores potential revenues generated through tourism. Since tourism can be the main source of economic benefits for many marine reserves worldwide, ignoring tourism objectives in the design process might lead to sub-optimal outcomes. To incorporate tourism benefits into marine reserve design, we develop a bioeconomic model that tracks tourism and fisheries revenues through time for different management options and location characteristics. Results from the model show that accounting for tourism benefits will ultimately motivate greater ocean protection. Our findings demonstrate that marine reserves are part of the optimal economic solution even in situations with optimal fisheries management and low tourism value relative to fisheries. The extent of optimal protection depends on specific location characteristics, such as tourism potential and other local amenities, and the species recreational divers care about. Additionally, as tourism value increases, optimal reserve area also increases. Finally, we demonstrate how tradeoffs between the two services depend on location attributes and management of the fishery outside marine reserve borders. Understanding when unavoidable tradeoffs will arise helps identify those situations where communities must choose between competing interests.
Image segmentation using local shape and gray-level appearance models
NASA Astrophysics Data System (ADS)
Seghers, Dieter; Loeckx, Dirk; Maes, Frederik; Suetens, Paul
2006-03-01
A new generic model-based segmentation scheme is presented, which can be trained from examples, akin to the Active Shape Model (ASM) approach, in order to acquire knowledge about the shape to be segmented and about the gray-level appearance of the object in the image. In the ASM approach, the intensity and shape models are typically applied alternately during optimization: an optimal target location is first selected for each landmark separately, based only on local gray-level appearance information, and the shape model is subsequently fitted to these locations; the ASM may therefore be misled by wrongly selected landmark locations. Instead, the proposed approach optimizes shape and intensity characteristics simultaneously. Local gray-level appearance information at the landmark points, extracted from feature images, is used to automatically detect a number of plausible candidate locations for each landmark. The shape information is described by multiple landmark-specific statistical models that capture local dependencies between adjacent landmarks on the shape. The shape and intensity models are combined in a single cost function that is optimized non-iteratively using dynamic programming, which allows the optimal landmark positions to be found using combined shape and intensity information, without the need for initialization.
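The dynamic-programming step can be sketched as a Viterbi-style pass over the per-landmark candidate locations: each candidate carries an appearance (unary) cost, and adjacent landmarks are linked by a shape cost. In the sketch below the shape cost is simply the deviation from an expected inter-landmark displacement, which is a much cruder stand-in for the authors' landmark-specific statistical models; the candidate positions and costs are invented.

```python
import numpy as np

def dp_landmarks(unary, candidates, expected_step, w_shape=1.0):
    """Viterbi-style dynamic programming over candidate landmark positions.

    unary[i][k]      : appearance cost of the k-th candidate of landmark i
    candidates[i]    : (K_i, 2) array of candidate positions for landmark i
    expected_step    : expected displacement between adjacent landmarks
    The pairwise cost penalizes deviation from the expected inter-landmark step.
    """
    n = len(unary)
    cost = [np.asarray(u, dtype=float) for u in unary]
    back = [None] * n
    for i in range(1, n):
        step = candidates[i][None, :, :] - candidates[i - 1][:, None, :]
        pair = w_shape * np.linalg.norm(step - expected_step, axis=-1)  # (K_prev, K_cur)
        total = cost[i - 1][:, None] + pair
        back[i] = np.argmin(total, axis=0)
        cost[i] = cost[i] + total[back[i], np.arange(len(cost[i]))]
    # backtrack the globally optimal chain of candidate indices
    idx = [int(np.argmin(cost[-1]))]
    for i in range(n - 1, 0, -1):
        idx.append(int(back[i][idx[-1]]))
    return idx[::-1]

# Hypothetical example: 4 landmarks, 3 candidate positions each.
candidates = [np.array([[0, 0], [0, 2], [0, -2]], float),
              np.array([[5, 0], [5, 3], [5, -3]], float),
              np.array([[10, 1], [10, 4], [10, -4]], float),
              np.array([[15, 0], [15, 5], [15, -5]], float)]
unary = [[0.2, 0.1, 0.5], [0.3, 0.2, 0.1], [0.1, 0.4, 0.4], [0.2, 0.2, 0.3]]
print(dp_landmarks(unary, candidates, expected_step=np.array([5.0, 0.0])))
```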
The Effect of Aerodynamic Evaluators on the Multi-Objective Optimization of Flatback Airfoils
NASA Astrophysics Data System (ADS)
Miller, M.; Slew, K. Lee; Matida, E.
2016-09-01
With the long lengths of today's wind turbine rotor blades, there is a need to reduce the mass, thereby requiring stiffer airfoils, while maintaining the aerodynamic efficiency of the airfoils, particularly in the inboard region of the blade where structural demands are highest. Using a genetic algorithm, the multi-objective aero-structural optimization of 30% thick flatback airfoils was systematically performed for a variety of aerodynamic evaluators such as lift-to-drag ratio (Cl/Cd), torque (Ct), and torque-to-thrust ratio (Ct/Cn) to determine their influence on airfoil shape and performance. The airfoil optimized for Ct possessed a 4.8% thick trailing-edge, and a rather blunt leading-edge region which creates high levels of lift and, correspondingly, drag. Its ability to maintain similar levels of lift and drag under forced transition conditions proved its insensitivity to roughness. The airfoil optimized for Cl/Cd displayed relatively poor insensitivity to roughness due to the rather aft-located free transition points. The Ct/Cn optimized airfoil was found to have a very similar shape to that of the Cl/Cd airfoil, with a slightly more blunt leading-edge which aided in providing higher levels of lift and moderate insensitivity to roughness. The influence of the chosen aerodynamic evaluator under the specified conditions and constraints in the optimization of wind turbine airfoils is shown to have a direct impact on the airfoil shape and performance.
Conrad, Erin C; Mossner, James M; Chou, Kelvin L; Patil, Parag G
2018-05-23
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) improves motor symptoms of Parkinson disease (PD). However, motor outcomes can be variable, perhaps due to inconsistent positioning of the active contact relative to an unknown optimal locus of stimulation. Here, we determine the optimal locus of STN stimulation in a geometrically unconstrained, mathematically precise, and atlas-independent manner, using Unified Parkinson Disease Rating Scale (UPDRS) motor outcomes and an electrophysiological neuronal stimulation model. In 20 patients with PD, we mapped motor improvement to active electrode location, relative to the individual, directly MRI-visualized STN. Our analysis included a novel, unconstrained and computational electrical-field model of neuronal activation to estimate the optimal locus of DBS. We mapped the optimal locus to a tightly defined ovoid region 0.49 mm lateral, 0.88 mm posterior, and 2.63 mm dorsal to the anatomical midpoint of the STN. On average, this locus is 11.75 mm lateral, 1.84 mm posterior, and 1.08 mm ventral to the mid-commissural point. Our novel, atlas-independent method reveals a single, ovoid optimal locus of stimulation in STN DBS for PD. The methodology, here applied to UPDRS and PD, is generalizable to atlas-independent mapping of other motor and non-motor effects of DBS.
SU-F-J-06: Optimized Patient Inclusion for NaF PET Response-Based Biopsies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, A; Harmon, S; Perk, T
Purpose: A method to guide mid-treatment biopsies using quantitative [F-18]NaF PET/CT response is being investigated in a clinical trial. This study aims to develop methodology to identify patients amenable to mid-treatment biopsy based on pre-treatment imaging characteristics. Methods: 35 metastatic prostate cancer patients had NaF PET/CT scans taken prior to the start of treatment and 9–12 weeks into treatment. For mid-treatment biopsy targeting, lesions must be at least 1.5 cm³ and located in a clinically feasible region (lumbar/sacral spine, pelvis, humerus, or femur). Three methods were developed based on the number of lesions present prior to treatment: a feasibility-restricted method, a location-restricted method, and an unrestricted method. The feasibility-restricted method only utilizes information from lesions meeting biopsy requirements in the pre-treatment scan. The unrestricted method accounts for all lesions present in the pre-treatment scan. For each method, optimized classification cutoffs for candidate patients were determined. Results: 13 of the 35 patients had enough lesions at mid-treatment for biopsy candidacy. Of 1749 lesions identified in all 35 patients at mid-treatment, only 9.8% were amenable to biopsy. Optimizing the feasibility-restricted method required 4 lesions at pre-treatment meeting volume and region requirements for biopsy, resulting in a patient identification sensitivity of 0.8 and specificity of 0.7. Of 6 false positive patients, only one patient lacked lesions for biopsy. Restricting for location alone showed poor results (sensitivity 0.2 and specificity 0.3). The optimized unrestricted method required patients to have at least 37 lesions in the pre-treatment scan, resulting in a sensitivity of 0.8 and specificity of 0.8. There were 5 false positives, only one of which lacked lesions for biopsy. Conclusion: Incorporating the overall pre-treatment number of NaF PET/CT identified lesions provided the best prediction for identifying candidate patients for mid-treatment biopsy. This study provides validity for prediction-based inclusion criteria that can be extended to various clinical trial scenarios. Funded by Prostate Cancer Foundation.
NASA Astrophysics Data System (ADS)
Zuhdi, Shaifudin; Saputro, Dewi Retno Sari
2017-03-01
The GWOLR model is used to represent the relationship between a dependent variable whose categories are measured on an ordinal scale and independent variables influenced by the geographical location of the observation site. Parameter estimation of the GWOLR model by maximum likelihood leads to a system of nonlinear equations that is difficult to solve analytically. Solving it amounts to finding the maximum of the likelihood, which is an optimization problem. The nonlinear system of equations is therefore solved using numerical approximation, namely the Newton-Raphson method. The purpose of this research is to construct the Newton-Raphson iteration algorithm and to write a program in the R software to estimate the GWOLR model. The research shows that the program in R can be used to estimate the parameters of the GWOLR model by building a syntax program around the command "while".
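The paper implements the iteration in R; for consistency with the other sketches here, the snippet below shows the same Newton-Raphson idea in Python for a plain binary logistic likelihood, which is a simpler stand-in for the ordinal, geographically weighted GWOLR likelihood (no spatial weights, no ordinal thresholds). The simulated data and convergence tolerance are assumptions.

```python
import numpy as np

def newton_raphson_logit(X, y, tol=1e-8, max_iter=50):
    """Newton-Raphson maximum-likelihood estimation for binary logistic
    regression: iterate beta <- beta - H^{-1} g until the update is tiny."""
    beta = np.zeros(X.shape[1])
    it = 0
    while it < max_iter:
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        g = X.T @ (y - p)                        # score (gradient of log-likelihood)
        H = -(X * (p * (1 - p))[:, None]).T @ X  # Hessian of log-likelihood
        step = np.linalg.solve(H, g)
        beta = beta - step
        it += 1
        if np.linalg.norm(step) < tol:
            break
    return beta, it

# Hypothetical data with an intercept and one covariate.
rng = np.random.default_rng(4)
x = rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
y = (rng.random(200) < 1 / (1 + np.exp(-(0.5 + 1.5 * x)))).astype(float)
beta_hat, n_iter = newton_raphson_logit(X, y)
print("estimates:", np.round(beta_hat, 2), "iterations:", n_iter)
```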
A Spherical Active Coded Aperture for 4π Gamma-ray Imaging
Hellfeld, Daniel; Barton, Paul; Gunter, Donald; ...
2017-09-22
Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jun -Sang; Ray, Atish K.; Dawson, Paul R.
A shrink-fit sample is manufactured with a Ti-8Al-1Mo-1V alloy to introduce a multiaxial residual stress field in the disk of the sample. A set of strain and orientation pole figures are measured at various locations across the disk using synchrotron high-energy X-ray diffraction. Two approaches—the traditional sin²Ψ method and the bi-scale optimization method—are taken to determine the stresses in the disk based on the measured strain and orientation pole figures, to explore the range of solutions that are possible for the stress field within the disk. While the stress components computed using the sin²Ψ method and the bi-scale optimization method have similar trends, their magnitudes are significantly different. Lastly, it is suspected that the local texture variation in the material is the cause of this discrepancy.
Stockpiling Ventilators for Influenza Pandemics.
Huang, Hsin-Chan; Araz, Ozgur M; Morton, David P; Johnson, Gregory P; Damien, Paul; Clements, Bruce; Meyers, Lauren Ancel
2017-06-01
In preparing for influenza pandemics, public health agencies stockpile critical medical resources. Determining appropriate quantities and locations for such resources can be challenging, given the considerable uncertainty in the timing and severity of future pandemics. We introduce a method for optimizing stockpiles of mechanical ventilators, which are critical for treating hospitalized influenza patients in respiratory failure. As a case study, we consider the US state of Texas during mild, moderate, and severe pandemics. Optimal allocations prioritize local over central storage, even though the latter can be deployed adaptively, on the basis of real-time needs. This prioritization stems from high geographic correlations and the slightly lower treatment success assumed for centrally stockpiled ventilators. We developed our model and analysis in collaboration with academic researchers and a state public health agency and incorporated it into a Web-based decision-support tool for pandemic preparedness and response.
Ultrasound transducer positioning aid for fetal heart rate monitoring.
Hamelmann, Paul; Kolen, Alex; Schmitt, Lars; Vullings, Rik; van Assen, Hans; Mischi, Massimo; Demi, Libertario; van Laar, Judith; Bergmans, Jan
2016-08-01
Fetal heart rate (fHR) monitoring is usually performed by Doppler ultrasound (US) techniques. For reliable fHR measurements it is required that the fetal heart is located within the US beam. In clinical practice, clinicians palpate the maternal abdomen to identify the fetal presentation and then the US transducer is fixated on the maternal abdomen where the best fHR signal can be obtained. Finding the optimal transducer position is done by listening to the strength of the Doppler audio output and relying on a signal quality indicator of the cardiotocographic (CTG) measurement system. Due to displacement of the US transducer or displacement of the fetal heart out of the US beam, the fHR signal may be lost. Therefore, it is often necessary that the obstetrician repeats the tedious procedure of US transducer positioning to avoid long periods of fHR signal loss. An intuitive US transducer positioning aid would be highly desirable to increase the work flow for the clinical staff. In this paper, the possibility to determine the fetal heart location with respect to the transducer by exploiting the received signal power in the transducer elements is shown. A commercially available US transducer used for fHR monitoring is connected to an US open platform, which allows individual driving of the elements and raw US data acquisition. Based on the power of the received Doppler signals in the transducer elements, the fetal heart location can be estimated. A beating fetal heart setup was designed and realized for validation. The experimental results show the feasibility of estimating the fetal heart location with the proposed method. This can be used to support clinicians in finding the optimal transducer position for fHR monitoring more easily.
Memory Transformation Enhances Reinforcement Learning in Dynamic Environments.
Santoro, Adam; Frankland, Paul W; Richards, Blake A
2016-11-30
Over the course of systems consolidation, there is a switch from a reliance on detailed episodic memories to generalized schematic memories. This switch is sometimes referred to as "memory transformation." Here we demonstrate a previously unappreciated benefit of memory transformation, namely, its ability to enhance reinforcement learning in a dynamic environment. We developed a neural network that is trained to find rewards in a foraging task where reward locations are continuously changing. The network can use memories for specific locations (episodic memories) and statistical patterns of locations (schematic memories) to guide its search. We find that switching from an episodic to a schematic strategy over time leads to enhanced performance due to the tendency for the reward location to be highly correlated with itself in the short-term, but regress to a stable distribution in the long-term. We also show that the statistics of the environment determine the optimal utilization of both types of memory. Our work recasts the theoretical question of why memory transformation occurs, shifting the focus from the avoidance of memory interference toward the enhancement of reinforcement learning across multiple timescales. As time passes, memories transform from a highly detailed state to a more gist-like state, in a process called "memory transformation." Theories of memory transformation speak to its advantages in terms of reducing memory interference, increasing memory robustness, and building models of the environment. However, the role of memory transformation from the perspective of an agent that continuously acts and receives reward in its environment is not well explored. In this work, we demonstrate a view of memory transformation that defines it as a way of optimizing behavior across multiple timescales.
NASA Astrophysics Data System (ADS)
Ouillon, G.; Ducorbier, C.; Sornette, D.
2008-01-01
We propose a new pattern recognition method that is able to reconstruct the three-dimensional structure of the active part of a fault network using the spatial location of earthquakes. The method is a generalization of the so-called dynamic clustering (or k-means) method, that partitions a set of data points into clusters, using a global minimization criterion of the variance of the hypocenter locations about their center of mass. The new method improves on the original k-means method by taking into account the full spatial covariance tensor of each cluster in order to partition the data set into fault-like, anisotropic clusters. Given a catalog of seismic events, the output is the optimal set of plane segments that fits the spatial structure of the data. Each plane segment is fully characterized by its location, size, and orientation. The main tunable parameter is the accuracy of the earthquake locations, which fixes the resolution, i.e., the residual variance of the fit. The resolution determines the number of fault segments needed to describe the earthquake catalog: the better the resolution, the finer the structure of the reconstructed fault segments. The algorithm successfully reconstructs the fault segments of synthetic earthquake catalogs. Applied to the real catalog constituted of a subset of the aftershock sequence of the 28 June 1992 Landers earthquake in southern California, the reconstructed plane segments fully agree with faults already known on geological maps or with blind faults that appear quite obvious in longer-term catalogs. Future improvements of the method are discussed, as well as its potential use in the multiscale study of the inner structure of fault zones.
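The generalization from point-like to plane-like clusters can be sketched as a "k-planes" variant of the Lloyd iteration: each cluster is summarized by a best-fit plane (centroid plus smallest principal axis), and hypocentres are re-assigned to the nearest plane. The snippet below is only a schematic version on a synthetic two-fault catalog; it omits the covariance-based anisotropy weighting, the resolution-controlled choice of the number of segments, and the other refinements of the published method, and single runs can collapse onto one plane, which is why a few random restarts are used.

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_plane(pts):
    """Best-fit plane: centroid plus unit normal (smallest principal axis)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c, full_matrices=False)
    return c, vt[-1]

def k_planes(hypo, k=2, n_iter=30, n_restart=5):
    """Partition hypocentres into k plane-like clusters by alternating plane
    fits and nearest-plane re-assignment, keeping the best of several restarts."""
    best_labels, best_cost = None, np.inf
    for _ in range(n_restart):
        normals = rng.normal(size=(k, 3))
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)
        planes = [(hypo.mean(axis=0), n) for n in normals]
        for _ in range(n_iter):
            dist = np.column_stack([np.abs((hypo - c) @ n) for c, n in planes])
            labels = np.argmin(dist, axis=1)
            planes = [fit_plane(hypo[labels == j]) if np.sum(labels == j) >= 3
                      else planes[j] for j in range(k)]
        cost = dist[np.arange(len(hypo)), labels].sum()
        if cost < best_cost:
            best_labels, best_cost = labels, cost
    return best_labels

# Hypothetical catalog: two noisy, roughly orthogonal fault planes.
n = 300
f1 = np.column_stack([rng.uniform(0, 20, n), rng.uniform(0, 10, n), rng.normal(0, 0.3, n)])
f2 = np.column_stack([rng.uniform(0, 20, n), rng.normal(5, 0.3, n), rng.uniform(0, 10, n)])
labels = k_planes(np.vstack([f1, f2]))
print("cluster sizes:", np.bincount(labels))
```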
Ugarte, Juan P.; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John
2014-01-01
There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips from stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping. PMID:25489858
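For reference, approximate entropy itself is a short computation: compare the regularity of length-m template matches against length-(m+1) matches within a tolerance r. The sketch below uses the common choices m = 2 and r = 0.2 times the signal standard deviation on a synthetic "regular versus fractionated" pair; the paper's electrogram-specific optimization of these parameters is not reproduced.

```python
import numpy as np

def approx_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D signal, used here as a
    regularity-based surrogate for the fractionation level."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        # Chebyshev distance between all pairs of embedded template vectors
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=-1)
        c = np.mean(d <= r, axis=1)          # includes self-matches, so c > 0
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

# Hypothetical comparison: a regular signal versus a noisy ("fractionated") one.
rng = np.random.default_rng(6)
t = np.linspace(0, 1, 500)
regular = np.sin(2 * np.pi * 8 * t)
fractionated = np.sin(2 * np.pi * 8 * t) + 0.8 * rng.standard_normal(t.size)
print("ApEn regular:      ", round(approx_entropy(regular), 3))
print("ApEn fractionated: ", round(approx_entropy(fractionated), 3))
```

Higher values indicate a less regular signal, which is the behaviour the dynamic approximate entropy maps exploit to flag rotor-tip regions.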
Mack, Elizabeth A; Tong, Daoqin; Credit, Kevin
2017-10-16
Food access is a global issue, and for this reason, a wealth of studies are dedicated to understanding the location of food deserts and the benefits of urban gardens. However, few studies have linked these two strands of research together to analyze whether urban gardening activity may be a step forward in addressing issues of access for food desert residents. The Phoenix, Arizona metropolitan area is used as a case to demonstrate the utility of spatial optimization models for siting urban gardens near food deserts and on vacant land. The locations of urban gardens were derived from a list obtained from the Maricopa County Cooperative Extension office at the University of Arizona; they were geolocated and aggregated to Census tracts. Census tracts were then assigned to one of three categories: tracts that contain a garden, tracts that are immediately adjacent to a tract with a garden, and all other non-garden/non-adjacent census tracts. Analysis of variance is first used to ascertain whether there are statistical differences in the demographic, socio-economic, and land use profiles of these three categories of tracts. A maximal covering spatial optimization model is then used to identify potential locations for future gardening activities. A constraint of these models is that gardens be located on vacant land, which is a growing problem in rapidly urbanizing environments worldwide. The spatial analysis of garden locations reveals that they are centrally located in tracts with good food access. Thus, the current distribution of gardens does not provide an alternative food source to occupants of food deserts. The maximal covering spatial optimization model reveals that gardens could be sited in alternative locations to better serve food desert residents. In fact, 53 gardens may be located to cover 96.4% of all food deserts. This is an improvement over the current distribution of gardens where 68 active garden sites provide coverage to a scant 8.4% of food desert residents. People in rapidly urbanizing environments around the globe suffer from poor food access. Rapid rates of urbanization also present an unused vacant land problem in cities around the globe. This paper highlights how spatial optimization models can be used to improve healthy food access for food desert residents, which is a critical first step in ameliorating the health problems associated with lack of healthy food access including heart disease and obesity.
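The siting model is a maximal covering location problem: choose a limited number of garden sites (constrained to vacant parcels) so as to maximize the food-desert population within a service radius. The sketch below uses a simple greedy heuristic on random synthetic coordinates rather than the exact optimization and the real Phoenix data used in the paper; the radius, site budget, and populations are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical inputs: food-desert tract centroids with populations, candidate
# vacant parcels, and a coverage radius (assumed 1.5 km here).
deserts = rng.uniform(0, 20, size=(40, 2))          # km coordinates
pop = rng.integers(500, 5000, size=40)
parcels = rng.uniform(0, 20, size=(120, 2))
radius = 1.5

covered_by = [set(np.flatnonzero(np.linalg.norm(deserts - p, axis=1) <= radius))
              for p in parcels]                      # tracts each parcel covers

def greedy_mclp(n_sites):
    """Greedy heuristic for the maximal covering location problem: repeatedly
    pick the parcel that adds the most still-uncovered population."""
    chosen, covered = [], set()
    for _ in range(n_sites):
        gains = [sum(pop[i] for i in c - covered) for c in covered_by]
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break
        chosen.append(best)
        covered |= covered_by[best]
    return chosen, sum(pop[i] for i in covered) / pop.sum()

sites, frac = greedy_mclp(10)
print(f"{len(sites)} sites cover {frac:.1%} of food-desert population")
```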
Optimizing interplanetary trajectories with deep space maneuvers. M.S. Thesis
NASA Technical Reports Server (NTRS)
Navagh, John
1993-01-01
Analysis of interplanetary trajectories is a crucial area for both manned and unmanned missions of the Space Exploration Initiative. A deep space maneuver (DSM) can improve a trajectory in much the same way as a planetary swingby. However, instead of using a gravitational field to alter the trajectory, the on-board propulsion system of the spacecraft is used when the vehicle is not near a planet. The purpose is to develop an algorithm to determine where and when to use deep space maneuvers to reduce the cost of a trajectory. The approach taken to solve this problem uses primer vector theory in combination with a non-linear optimizing program to minimize Delta(V). A set of necessary conditions on the primer vector is shown to indicate whether a deep space maneuver will be beneficial. Deep space maneuvers are applied to a round trip mission to Mars to determine their effect on the launch opportunities. Other studies which were performed include cycler trajectories and Mars mission abort scenarios. It was found that the software developed was able to locate quickly DSM's which lower the total Delta(V) on these trajectories.
Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies
NASA Astrophysics Data System (ADS)
Harken, B.; Rubin, Y.
2014-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. Evaluating the level of significance caused by a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.
Optimal reactive planning with security constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, W.R.; Cheng, D.T.Y.; Dixon, A.M.
1995-12-31
The National Grid Company (NGC) of England and Wales has developed a computer program, SCORPION, to help system planners optimize the location and size of new reactive compensation plant on the transmission system. The reactive power requirements of the NGC system have risen as a result of increased power flows and the shorter timescale on which power stations are commissioned and withdrawn from service. In view of the high costs involved, it is important that reactive compensation be installed as economically as possible, without compromising security. Traditional methods based on iterative use of a load flow program are labor intensive and subjective. SCORPION determines a near-optimal pattern of new reactive sources which are required to satisfy voltage constraints for normal and contingent states of operation of the transmission system. The algorithm processes the system states sequentially, instead of optimizing all of them simultaneously. This allows a large number of system states to be considered with an acceptable run time and computer memory requirement. Installed reactive sources are treated as continuous, rather than discrete, variables. However, the program has a restart facility which enables the user to add realistically sized reactive sources explicitly and thereby work towards a realizable solution to the planning problem.
NASA Astrophysics Data System (ADS)
Goris, N.; Elbern, H.
2015-12-01
Measurements of the large-dimensional chemical state of the atmosphere provide only sparse snapshots of the state of the system due to their typically insufficient temporal and spatial density. In order to optimize the measurement configurations despite those limitations, the present work describes the identification of sensitive states of the chemical system as optimal target areas for adaptive observations. For this purpose, the technique of singular vector analysis (SVA), which has proven effective for targeted observations in numerical weather prediction, is implemented in the EURAD-IM (EURopean Air pollution and Dispersion - Inverse Model) chemical transport model, yielding the EURAD-IM-SVA v1.0. Besides initial values, emissions are investigated as critical simulation controlling targeting variables. For both variants, singular vectors are applied to determine the optimal placement for observations and moreover to quantify which chemical compounds have to be observed with preference. Based on measurements of the airship based ZEPTER-2 campaign, the EURAD-IM-SVA v1.0 has been evaluated by conducting a comprehensive set of model runs involving different initial states and simulation lengths. For the sake of brevity, we concentrate our attention on the following chemical compounds, O3, NO, NO2, HCHO, CO, HONO, and OH, and focus on their influence on selected O3 profiles. Our analysis shows that the optimal placement for observations of chemical species is not entirely determined by mere transport and mixing processes. Rather, a combination of initial chemical concentrations, chemical conversions, and meteorological processes determines the influence of chemical compounds and regions. We furthermore demonstrate that the optimal placement of observations of emission strengths is highly dependent on the location of emission sources and that the benefit of including emissions as target variables outperforms the value of initial value optimization with growing simulation length. The obtained results confirm the benefit of considering both initial values and emission strengths as target variables and of applying the EURAD-IM-SVA v1.0 for measurement decision guidance with respect to chemical compounds.
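A minimal illustration of the singular-vector idea, assuming a generic linearized sensitivity matrix in place of the EURAD-IM tangent-linear model; the matrix and dimensions below are arbitrary stand-ins.

```python
# Conceptual sketch of singular vector analysis (SVA) for targeting: given a
# linearized sensitivity matrix M that maps perturbations of the initial state
# (grid cells x species) to a target forecast quantity (e.g. an O3 profile),
# the leading right singular vectors indicate where extra observations would
# constrain the forecast most. M here is random, standing in for the Jacobian.
import numpy as np

rng = np.random.default_rng(8)
n_target, n_state = 20, 300
M = rng.normal(size=(n_target, n_state))

# M = U S V^T; columns of V with the largest singular values are the most
# rapidly growing initial-value perturbation patterns
U, s, Vt = np.linalg.svd(M, full_matrices=False)
leading = Vt[0]                    # dominant singular vector over the state space

top = np.argsort(np.abs(leading))[::-1][:5]
print("largest singular value:", round(float(s[0]), 3))
print("most sensitive state-vector components (indices):", top.tolist())
```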
2014-01-01
Background The optimal cutoff of the waist-to-hip ratio (WHR) among Han adults in Xinjiang, which is located in the center of Asia, is unknown. We aimed to examine the relationship between different WHRs and cardiovascular risk factors among Han adults in Xinjiang, and determine the optimal cutoff of the WHR. Methods The Cardiovascular Risk Survey was conducted from October 2007 to March 2010. A total of 14618 representative participants were selected using a four-stage stratified sampling method. A total of 5757 Han participants were included in the study. The present statistical analysis was restricted to the 5595 Han subjects who had complete anthropometric data. The sensitivity, specificity, and distance on the receiver operating characteristic (ROC) curve in each WHR level were calculated. The shortest distance in the ROC curves was used to determine the optimal cutoff of the WHR for detecting cardiovascular risk factors. Results In women, the WHR was positively associated with systolic blood pressure, diastolic blood pressure, and serum concentrations of serum total cholesterol. The prevalence of hypertension and hypertriglyceridemia increased as the WHR increased. The same results were not observed among men. The optimal WHR cutoffs for predicting hypertension, diabetes, dyslipidemia and ≥ two of these risk factors for Han adults in Xinjiang were 0.92, 0.92, 0.91, 0.92 in men and 0.88, 0.89, 0.88, 0.89 in women, respectively. Conclusions Higher cutoffs for the WHR are required in the identification of Han adults aged ≥ 35 years with a high risk of cardiovascular diseases in Xinjiang. PMID:25074400
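The "shortest distance to the ideal corner" rule on the ROC curve can be sketched as follows; the WHR values below are synthetic and the variable names are illustrative, not taken from the survey data.

```python
# Minimal sketch of choosing an optimal cutoff as the point on the ROC curve
# closest to (sensitivity, specificity) = (1, 1). Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
# WHR values for subjects without / with the risk factor
whr_neg = rng.normal(0.85, 0.05, 500)
whr_pos = rng.normal(0.92, 0.05, 200)
values = np.concatenate([whr_neg, whr_pos])
labels = np.concatenate([np.zeros(500), np.ones(200)])

best_cut, best_dist = None, np.inf
for cut in np.unique(values):
    pred = values >= cut
    sens = np.mean(pred[labels == 1])          # true positive rate
    spec = np.mean(~pred[labels == 0])         # true negative rate
    d = np.hypot(1 - sens, 1 - spec)           # distance to the ideal corner
    if d < best_dist:
        best_cut, best_dist = cut, d

print("optimal WHR cutoff: %.3f (distance %.3f)" % (best_cut, best_dist))
```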
Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain
NASA Astrophysics Data System (ADS)
Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida
2013-04-01
Supply Chain Management (SCM) is an important activity in all producing facilities and in many organizations, enabling vendors, manufacturers and suppliers to interact gainfully and plan their flow of goods and services optimally. Simulation optimization approaches are now widely used in research to find the best solutions for decision-making processes in SCM, which generally face complexity with large sources of uncertainty and various decision factors. The metaheuristic method is the most popular simulation optimization approach. However, very few studies have applied this approach to optimizing simulation models for supply chains. Thus, this paper evaluates the performance of a metaheuristic method for stochastic supply chains in determining the best flexible inventory replenishment parameters that minimize the total operating cost. The simulation optimization model is based on the Bees Algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a new member of the metaheuristics family; it models the natural food-foraging behavior of honey bees, which use several mechanisms, such as the waggle dance, to optimally locate food sources and to search for new ones. This makes them a good candidate for developing new algorithms for solving optimization problems. The model considers an outbound centralised distribution system consisting of one supplier and three identical retailers; demand is assumed to be independent and identically distributed, with unlimited supply capacity at the supplier.
Bus Stops Location and Bus Route Planning Using Mean Shift Clustering and Ant Colony in West Jakarta
NASA Astrophysics Data System (ADS)
Supangat, Kenny; Eko Soelistio, Yustinus
2017-03-01
Traffic jams have been a daily problem for people in Jakarta, one of the busiest cities in Indonesia. Even though the government has tried to reduce the impact of traffic issues by developing new public transportation, which takes up a lot of resources and time, it has failed to diminish the problem. The root of the problem lies in how people move between places in Jakarta, where they rely on private vehicles such as cars and motorcycles that fill most of the streets. Among the many forms of public transportation that roam the streets of Jakarta, buses are believed to be efficient because they can move many people at once. However, bus stops have now been moved to the middle of the main roads, too far for nearby residents to access. This paper proposes optimal bus stop locations in West Jakarta that are experimentally shown to have a maximum access distance of 350 m. The optimal locations are estimated by means of the mean shift clustering method, while the optimal routes are calculated using the Ant Colony algorithm. The bus stop location error rate is 0.07%, with an overall route area of 32 km. Based on our experiments, we believe our proposed bus stop plan can be an interesting alternative to reduce traffic congestion in West Jakarta.
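A conceptual sketch of the clustering step, assuming scikit-learn's MeanShift and synthetic demand coordinates in place of the West Jakarta data; the bandwidth value is an assumption standing in for an acceptable walking distance.

```python
# Cluster passenger demand points with mean shift to obtain candidate bus stop
# locations, in the spirit of the paper's first step. Synthetic coordinates.
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(2)
# synthetic residence/demand coordinates (projected meters)
demand = np.vstack([
    rng.normal([0, 0], 120, (300, 2)),
    rng.normal([800, 300], 150, (300, 2)),
    rng.normal([300, 900], 100, (300, 2)),
])

# bandwidth roughly encodes the acceptable walking distance to a stop
ms = MeanShift(bandwidth=350)
ms.fit(demand)
stops = ms.cluster_centers_

print("number of candidate bus stops:", len(stops))
print(stops.round(1))
```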
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength that controls overall smoothness as well as directional weights that permits control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF)—each carrying dependencies on TCM and regularization. For the single location optimization, the local detectability index (d‧) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded a worse task-based performance compared to an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d‧ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction and strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
Chua, Michael E; Gatchalian, Glenn T; Corsino, Michael Vincent; Reyes, Buenaventura B
2012-10-01
(1) To determine the best cut-off level of Hounsfield units (HU) in the CT stonogram that would predict the appearance of a urinary calculus on plain KUB X-ray; (2) to estimate the sensitivity and specificity of the best cut-off HU; and (3) to determine whether stone size and location affect the in vivo predictability. A prospective cross-sectional study of patients aged 18-85 diagnosed with urolithiases on CT stonogram with concurrent plain KUB radiograph was conducted. Appearance of stones was recorded, and the significant difference between radiolucent and radio-opaque CT attenuation levels was determined using ANOVA. A receiver operating characteristic (ROC) curve determined the best HU cut-off value. Stone size and location were used for factor variability analysis. A total of 184 cases were included in this study, and the average urolithiasis size on CT stonogram was 0.84 cm (0.3-4.9 cm). On KUB X-ray, 34.2 % of the urolithiases were radiolucent and 65.8 % were radio-opaque. The mean CT Hounsfield unit value for radiolucent stones was 358.25 (±156), and that for radio-opaque stones was 816.51 (±274). The ROC curve determined the best cut-off value of HU at 498.5, with a sensitivity of 89.3 % and specificity of 87.3 %. For >4 mm stones, the sensitivity was 91.3 % and the specificity was 81.8 %. On the other hand, for ≤4 mm stones, the sensitivity was 60 % and the specificity was 89.5 %. Based on the constructed ROC curve, a threshold value of 498.5 HU in CT stonogram was established as the cut-off in determining whether a calculus is radio-opaque or radiolucent. The determined overall sensitivity and specificity of the set cut-off HU value are optimal. Stone size but not location affects the sensitivity and specificity.
Stationkeeping of Lissajous Trajectories in the Earth-Moon System with Applications to ARTEMIS
NASA Technical Reports Server (NTRS)
Folta, D. C.; Pavlak, T. A.; Howell, K. C.; Woodard, M. A.; Woodfork, D. W.
2010-01-01
In the last few decades, several missions have successfully exploited trajectories near the Sun-Earth L1 and L2 libration points. Recently, the collinear libration points in the Earth-Moon system have emerged as locations with immediate application. Most libration point orbits, in any system, are inherently unstable and must be controlled. To this end, several stationkeeping strategies are considered for application to ARTEMIS. Two approaches are examined to investigate the stationkeeping problem in this regime and the specific options available for ARTEMIS given the mission and vehicle constraints. (1) A baseline orbit-targeting approach controls the vehicle to remain near a nominal trajectory; a related global optimum search method searches all possible maneuver angles to determine an optimal angle and magnitude; and (2) an orbit continuation method, with various formulations, determines maneuver locations and minimizes costs. Initial results indicate that consistent stationkeeping costs can be achieved with both approaches and the costs are reasonable. These methods are then applied to Lissajous trajectories representing a baseline ARTEMIS libration orbit trajectory.
A temporal and spatial analysis of anthropogenic noise sources affecting SNMR
NASA Astrophysics Data System (ADS)
Dalgaard, E.; Christiansen, P.; Larsen, J. J.; Auken, E.
2014-11-01
One of the biggest challenges when using the surface nuclear magnetic resonance (SNMR) method in urban areas is a relatively low signal level compared to a high level of background noise. To understand the temporal and spatial behavior of anthropogenic noise sources like powerlines and electric fences, we have developed a multichannel instrument, noiseCollector (nC), which measures the full noise spectrum up to 10 kHz. Combined with advanced signal processing we can interpret the noise as seen by an SNMR instrument and also obtain insight into the more fundamental behavior of the noise. To achieve a specified acceptable noise level for an SNMR sounding, the required stack size can be determined by quantifying the different noise sources. Two common noise sources, electromagnetic fields stemming from powerlines and electric fences, are analyzed and show a 1/r² dependency in agreement with theoretical relations. A typical noise map, obtained with the nC instrument prior to an SNMR field campaign, clearly shows the location of noise sources, and thus we can efficiently determine the optimal location for the SNMR sounding from a noise perspective.
The complete proof on the optimal ordering policy under cash discount and trade credit
NASA Astrophysics Data System (ADS)
Chung, Kun-Jen
2010-04-01
Huang ((2005), 'Buyer's Optimal Ordering Policy and Payment Policy under Supplier Credit', International Journal of Systems Science, 36, 801-807) investigates the buyer's optimal ordering policy and payment policy under supplier credit. His inventory model is correct and interesting. Basically, he uses an algebraic method to locate the optimal solution of the annual total relevant cost TRC(T) and ignores the role of the functional behaviour of TRC(T) in locating the optimal solution of it. However, as argued in this article, Huang needs to explore the functional behaviour of TRC(T) to justify his solution. So, from the viewpoint of logic, the proof about Theorem 1 in Huang has some shortcomings such that the validity of Theorem 1 in Huang is questionable. The main purpose of this article is to remove and correct those shortcomings in Huang and present the complete proofs for Huang.
Optomechanical study and optimization of cantilever plate dynamics
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
1995-06-01
Optimum dynamic characteristics of an aluminum cantilever plate containing holes of different sizes and located at arbitrary positions on the plate are studied computationally and experimentally. The objective function of this optimization is the minimization/maximization of the natural frequencies of the plate in terms of such design variables as the sizes and locations of the holes. The optimization process is performed using the finite element method and mathematical programming techniques in order to obtain the natural frequencies and the optimum conditions of the plate, respectively. The modal behavior of the resultant optimal plate layout is studied experimentally through the use of holographic interferometry techniques. Comparisons of the computational and experimental results show that good agreement between theory and test is obtained. The comparisons also show that experimental and computational techniques complement each other, and that their combined, or hybrid, use proves to be a very efficient tool for performing optimization studies of mechanical components.
NASA Astrophysics Data System (ADS)
Supian, Sudradjat; Wahyuni, Sri; Nahar, Julita; Subiyanto
2018-01-01
In this paper, the traveling time of workers from the Bandung central post office in delivering packages to destination locations was optimized using the Hungarian method. A sensitivity analysis against data changes that may occur was also conducted. The sampled data in this study are 10 workers who will be assigned to deliver mail packages to 10 post office delivery centers in Bandung, namely Cikutra, Padalarang, Ujung Berung, Dayeuh Kolot, Asia-Africa, Soreang, Situ Saeur, Cimahi, Cipedes and Cikeruh. The result of this research is the optimal assignment of the 10 workers to the 10 destination locations. The optimal total traveling time required by the workers to reach their destinations is 387 minutes. Based on this result, the manager of the Bandung central post office can make optimal decisions when assigning tasks to workers.
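A minimal sketch of the assignment step using SciPy's Hungarian-method implementation; the travel-time matrix below is random rather than the actual Bandung data.

```python
# Solve a 10x10 worker-to-delivery-center assignment with the Hungarian method
# (linear_sum_assignment minimizes the total cost). Illustrative data only.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
# travel_time[i, j] = minutes for worker i to reach delivery center j
travel_time = rng.integers(20, 80, size=(10, 10))

rows, cols = linear_sum_assignment(travel_time)
total = travel_time[rows, cols].sum()

for w, c in zip(rows, cols):
    print("worker %d -> center %d (%d min)" % (w, c, travel_time[w, c]))
print("optimal total traveling time:", total, "minutes")
```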
Mohamed, Ahmed F; Elarini, Mahdi M; Othman, Ahmed M
2014-05-01
One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony Algorithm (ABC). The proposed methodology is applied to optimize the cost of the PV system, including the photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the first is the PV module output power, which is to be maximized, and the second is the life cycle cost (LCC), which is to be minimized. The analysis is performed based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between the optimal results of the ABC algorithm and a Genetic Algorithm (GA) is carried out. Another location, Zagazig city, is selected to check the validity of the ABC algorithm at other locations. The ABC results are better than those of the GA. The results encourage the use of PV systems to electrify the rural sites of Egypt.
Mohamed, Ahmed F.; Elarini, Mahdi M.; Othman, Ahmed M.
2013-01-01
One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony Algorithm (ABC). The proposed methodology is applied to optimize the cost of the PV system, including the photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the first is the PV module output power, which is to be maximized, and the second is the life cycle cost (LCC), which is to be minimized. The analysis is performed based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between the optimal results of the ABC algorithm and a Genetic Algorithm (GA) is carried out. Another location, Zagazig city, is selected to check the validity of the ABC algorithm at other locations. The ABC results are better than those of the GA. The results encourage the use of PV systems to electrify the rural sites of Egypt. PMID:25685507
Optimal linear reconstruction of dark matter from halo catalogues
Cai, Yan -Chuan; Bernstein, Gary; Sheth, Ravi K.
2011-04-01
The dark matter lumps (or "halos") that contain galaxies have locations in the Universe that are to some extent random with respect to the overall matter distributions. We investigate how best to estimate the total matter distribution from the locations of the halos. We derive the weight function w(M) to apply to dark-matter haloes that minimizes the stochasticity between the weighted halo distribution and its underlying mass density field. The optimal w(M) depends on the range of masses of halos being used. While the standard biased-Poisson model of the halo distribution predicts that bias weighting is optimal, the simple fact that the mass is comprised of haloes implies that the optimal w(M) will be a mixture of mass-weighting and bias-weighting. In N-body simulations, the Poisson estimator is up to 15× noisier than the optimal. Optimal weighting could make cosmological tests based on the matter power spectrum or cross-correlations much more powerful and/or cost effective.
Optimal placement of FACTS devices using optimization techniques: A review
NASA Astrophysics Data System (ADS)
Gaur, Dipesh; Mathew, Lini
2018-03-01
Modern power systems face overloading problems, especially in transmission networks that operate at their maximum limits. Today's power system networks tend to become unstable and prone to collapse due to disturbances. Flexible AC Transmission Systems (FACTS) provide solutions to problems such as line overloading, voltage stability, losses, and power flow, and can play an important role in improving the static and dynamic performance of a power system. FACTS devices require high initial investment; therefore, their location, type and rating are vital and should be optimized for maximum benefit. In this paper, different optimization methods such as Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), among others, are discussed and compared for determining the optimal location, type and rating of devices. FACTS devices such as the Thyristor Controlled Series Compensator (TCSC), Static Var Compensator (SVC) and Static Synchronous Compensator (STATCOM) are considered here. The effects of these FACTS controllers on different IEEE bus network parameters, such as generation cost, active power loss and voltage stability, are analyzed and compared among the devices.
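As a generic illustration of one of the surveyed methods, the sketch below runs a bare-bones PSO on a toy objective standing in for a loss or cost function of candidate device ratings; it is not a power-flow model, and all parameter values are assumptions.

```python
# Minimal particle swarm optimization (PSO) sketch of the kind surveyed for
# sizing/locating FACTS devices. The objective is a placeholder function.
import numpy as np

def objective(x):
    # stand-in for, e.g., active power loss as a function of device settings
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2)

rng = np.random.default_rng(4)
n_particles, dim, iters = 30, 4, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration terms

x = rng.uniform(-1, 1, (n_particles, dim))     # particle positions
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([objective(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best setting:", gbest.round(3), "objective:", round(float(objective(gbest)), 4))
```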
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Li, Yan; Zhang, Yingchen
In this paper, a big data-based approach is proposed for the security improvement of an unplanned microgrid islanding (UMI). The proposed approach contains two major steps: the first step is big data analysis of wide-area monitoring to detect a UMI and locate it; the second step is particle swarm optimization (PSO)-based stability enhancement for the UMI. First, an optimal synchrophasor measurement device selection (OSMDS) and matching pursuit decomposition (MPD)-based spatial-temporal analysis approach is proposed to significantly reduce the volume of data while keeping appropriate information from the synchrophasor measurements. Second, a random forest-based ensemble learning approach is trained to detect the UMI. When combined with grid topology, the UMI can be located. Then the stability problem of the UMI is formulated as an optimization problem and the PSO is used to find the optimal operational parameters of the UMI. An eigenvalue-based multiobjective function is proposed, which aims to improve the damping and dynamic characteristics of the UMI. Finally, the simulation results demonstrate the effectiveness and robustness of the proposed approach.
Structure solution of DNA-binding proteins and complexes with ARCIMBOLDO libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pröpper, Kevin; Instituto de Biologia Molecular de Barcelona; Meindl, Kathrin
2014-06-01
The structure solution of DNA-binding protein structures and complexes based on the combination of location of DNA-binding protein motif fragments with density modification in a multi-solution frame is described. Protein–DNA interactions play a major role in all aspects of genetic activity within an organism, such as transcription, packaging, rearrangement, replication and repair. The molecular detail of protein–DNA interactions can be best visualized through crystallography, and structures emphasizing insight into the principles of binding and base-sequence recognition are essential to understanding the subtleties of the underlying mechanisms. An increasing number of high-quality DNA-binding protein structure determinations have been witnessed despite the fact that the crystallographic particularities of nucleic acids tend to pose specific challenges to methods primarily developed for proteins. Crystallographic structure solution of protein–DNA complexes therefore remains a challenging area that is in need of optimized experimental and computational methods. The potential of the structure-solution program ARCIMBOLDO for the solution of protein–DNA complexes has therefore been assessed. The method is based on the combination of locating small, very accurate fragments using the program Phaser and density modification with the program SHELXE. Whereas for typical proteins main-chain α-helices provide the ideal, almost ubiquitous, small fragments to start searches, in the case of DNA complexes the binding motifs and DNA double helix constitute suitable search fragments. The aim of this work is to provide an effective library of search fragments as well as to determine the optimal ARCIMBOLDO strategy for the solution of this class of structures.
Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong
2014-01-01
The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Because the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value of the scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The encoding design and the procedure of the algorithm are described. The results of the example demonstrate that the model and algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which proves that the result of the robust model is more reliable.
Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach
Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan
2017-01-01
In this study 6 pre-operative designs for PMMA based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50 N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50 N load within the implant area. It was found that in smaller defects, it was simpler to design a symmetric distribution of plates, and under limited variability in load location it was possible to design an assembly optimal for the expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi-design computational analyses to decide upon the optimal plan for a clinical case. PMID:28609471
Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach.
Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan
2017-01-01
In this study 6 pre-operative designs for PMMA based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50 N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50 N load within the implant area. It was found that in smaller defects, it was simpler to design a symmetric distribution of plates, and under limited variability in load location it was possible to design an assembly optimal for the expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi-design computational analyses to decide upon the optimal plan for a clinical case.
Ant colony optimization for solving university facility layout problem
NASA Astrophysics Data System (ADS)
Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin
2013-04-01
Quadratic Assignment Problems (QAP) are classified as NP-hard problems. The QAP has been used to model many problems in several areas such as operational research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Traveling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics and metaheuristic approaches to solve the QAP. The QAP is widely applied to the facility layout problem (FLP). In this paper we use the QAP to model a university facility layout problem. There are 8 facilities that need to be assigned to 8 locations. Hence we have modeled a QAP with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the minimum product of flows and distances is obtained. Flow is the movement from one facility to another, whereas distance is the distance between the locations of the facilities. The objective of the QAP is thus to minimize the total walking (flow) of lecturers from one destination to another (distance).
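A compact, illustrative ACO sketch for a small random QAP instance of this kind; the flow and distance matrices, pheromone parameters, and iteration counts are all assumptions, not the paper's data.

```python
# Minimal ant colony optimization (ACO) for a QAP: pheromone tau[f, l] biases
# the assignment of facility f to location l; the best-so-far solution deposits
# pheromone each iteration after evaporation.
import numpy as np

rng = np.random.default_rng(5)
n = 8                                            # facilities = locations
flow = rng.integers(0, 10, (n, n)); np.fill_diagonal(flow, 0)
dist = rng.integers(1, 10, (n, n)); np.fill_diagonal(dist, 0)

def cost(perm):
    # perm[f] = location assigned to facility f; QAP objective sum f*d
    return int(np.sum(flow * dist[np.ix_(perm, perm)]))

tau = np.ones((n, n))                            # pheromone: facility -> location
best_perm, best_cost = None, np.inf
for _ in range(200):                             # iterations
    solutions = []
    for _ in range(20):                          # ants
        perm, free = np.empty(n, dtype=int), list(range(n))
        for f in range(n):
            p = tau[f, free] / tau[f, free].sum()
            loc = rng.choice(free, p=p)
            perm[f] = loc
            free.remove(loc)
        solutions.append((cost(perm), perm))
    it_cost, it_perm = min(solutions, key=lambda s: s[0])
    if it_cost < best_cost:
        best_cost, best_perm = it_cost, it_perm
    tau *= 0.9                                   # evaporation
    tau[np.arange(n), best_perm] += 1.0 / best_cost   # reinforce best-so-far

print("best assignment (facility -> location):", best_perm, "cost:", best_cost)
```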
Optimal Sensor Location Design for Reliable Fault Detection in Presence of False Alarms
Yang, Fan; Xiao, Deyun; Shah, Sirish L.
2009-01-01
To improve fault detection reliability, sensor location should be designed according to an optimization criterion with constraints imposed by issues of detectability and identifiability. Reliability requires the minimization of undetectability and false alarm probability due to random factors on sensor readings, which is not only related with sensor readings but also affected by fault propagation. This paper introduces the reliability criteria expression based on the missed/false alarm probability of each sensor and system topology or connectivity derived from the directed graph. The algorithm for the optimization problem is presented as a heuristic procedure. Finally, a boiler system is illustrated using the proposed method. PMID:22291524
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhenhong; Liu, Changzheng; Yin, Yafeng
The objective of this study is to evaluate the opportunity for public charging for a subset of US cities by using available public parking lot data. The capacity of the parking lots weighted by the daily parking occupancy rate is used as a proxy for daily parking demand. The city's public charging opportunity is defined as the percentage of parking demand covered by chargers on the off-street parking network. We assess this opportunity under the scenario of optimal deployment of public chargers. We use the maximum coverage model to optimally locate those facilities on the public garage network. We compare the optimal results to the actual placement of chargers. These empirical findings are of great interest to policymakers as they showcase the potential of increasing opportunities for charging under optimal charging location planning.
Modeling marine surface microplastic transport to assess optimal removal locations
NASA Astrophysics Data System (ADS)
Sherman, Peter; van Sebille, Erik
2016-01-01
Marine plastic pollution is an ever-increasing problem that demands immediate mitigation and reduction plans. Here, a model based on satellite-tracked buoy observations and scaled to a large data set of observations on microplastic from surface trawls was used to simulate the transport of plastics floating on the ocean surface from 2015 to 2025, with the goal to assess the optimal marine microplastic removal locations for two scenarios: removing the most surface microplastic and reducing the impact on ecosystems, using plankton growth as a proxy. The simulations show that the optimal removal locations are primarily located off the coast of China and in the Indonesian Archipelago for both scenarios. Our estimates show that 31% of the modeled microplastic mass can be removed by 2025 using 29 plastic collectors operating at a 45% capture efficiency from these locations, compared to only 17% when the 29 plastic collectors are moored in the North Pacific garbage patch, between Hawaii and California. The overlap of ocean surface microplastics and phytoplankton growth can be reduced by 46% at our proposed locations, while sinks in the North Pacific can only reduce the overlap by 14%. These results are an indication that oceanic plastic removal might be more effective in removing a greater microplastic mass and in reducing potential harm to marine life when closer to shore than inside the plastic accumulation zones in the centers of the gyres.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Min-Joo; Park, So-Hyun; Research Institute of Biomedical Engineering, The Catholic University of Korea, Seoul
2013-10-01
The partial-breast irradiation (PBI) technique, an alternative to whole-breast irradiation, is a beam delivery method that uses a limited range of treatment volume. The present study was designed to determine the optimal PBI treatment modalities for 8 different tumor locations. Treatment planning was performed on computed tomography (CT) data sets of 6 patients who had received lumpectomy treatments. Tumor locations were classified into 8 subsections according to breast quadrant and depth. Three-dimensional conformal radiation therapy (3D-CRT), electron beam therapy (ET), and helical tomotherapy (H-TOMO) were utilized to evaluate the dosimetric effect for each tumor location. Conformation number (CN), radical dose homogeneity index (rDHI), and dose delivered to healthy tissue were estimated. The Kruskal-Wallis, Mann-Whitney U, and Bonferroni tests were used for statistical analysis. The ET approach showed good sparing effects and acceptable target coverage for the lower inner quadrant—superficial (LIQ-S) and lower inner quadrant—deep (LIQ-D) locations. The H-TOMO method was the least effective technique as no evaluation index achieved superiority for all tumor locations except CN. The ET method is advisable for treating LIQ-S and LIQ-D tumors, as opposed to 3D-CRT or H-TOMO, because of acceptable target coverage and much lower dose applied to surrounding tissue.
Visual and linguistic determinants of the eyes' initial fixation position in reading development.
Ducrot, Stéphanie; Pynte, Joël; Ghio, Alain; Lété, Bernard
2013-03-01
Two eye-movement experiments with one hundred and seven first- through fifth-grade children were conducted to examine the effects of visuomotor and linguistic factors on the recognition of words and pseudowords presented in central vision (using a variable-viewing-position technique) and in parafoveal vision (shifted to the left or right of a central fixation point). For all groups of children, we found a strong effect of stimulus location, in both central and parafoveal vision. This effect corresponds to the children's apparent tendency, for peripherally located targets, to reach a position located halfway between the middle and the left edge of the stimulus (preferred viewing location, PVL), whether saccading to the right or left. For centrally presented targets, refixation probability and lexical-decision time were the lowest near the word's center, suggesting an optimal viewing position (OVP). The viewing-position effects found here were modulated (1) by print exposure, both in central and parafoveal vision; and (2) by the intrinsic qualities of the stimulus (lexicality and word frequency) for targets in central vision but not for parafoveally presented targets. Copyright © 2013 Elsevier B.V. All rights reserved.
System for estimating fatigue damage
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeMonds, Jeffrey; Guzzo, Judith Ann; Liu, Shaopeng
In one aspect, a system for estimating fatigue damage in a riser string is provided. The system includes a plurality of accelerometers which can be deployed along a riser string and a communications link to transmit accelerometer data from the plurality of accelerometers to one or more data processors in real time. With data from a limited number of accelerometers located at sensor locations, the system estimates an optimized current profile along the entire length of the riser including riser locations where no accelerometer is present. The optimized current profile is then used to estimate damage rates to individual riser components and to update a total accumulated damage to individual riser components. The number of sensor locations is small relative to the length of a deepwater riser string, and a riser string several miles long can be reliably monitored along its entire length by fewer than twenty sensor locations.
An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification
Samal, Ashok; Rong, Panying; Green, Jordan R.
2016-01-01
Purpose The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of words, and a set of short phrases during the recording. We used a machine-learning classifier (support-vector machine) to classify the speech stimuli on the basis of articulatory movements. We then compared classification accuracies of the flesh-point combinations to determine an optimal set of sensors. Results When data from the 4 sensors (T1: the vicinity between the tongue tip and tongue blade; T4: the tongue-body back; UL: the upper lip; and LL: the lower lip) were combined, phoneme and word classifications were most accurate and were comparable with the full set (including T2: the tongue-body front; and T3: the tongue-body front). Conclusion We identified a 4-sensor set—that is, T1, T4, UL, LL—that yielded a classification accuracy (91%–95%) equivalent to that using all 6 sensors. These findings provide an empirical basis for selecting sensors and their locations for scientific and emerging clinical applications that incorporate articulatory movements. PMID:26564030
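A hedged sketch of the classification step, assuming scikit-learn's SVC and random stand-in features in place of the articulograph trajectories used in the study; the feature layout and class labels are illustrative only.

```python
# Support-vector classification of flesh-point movement features (toy data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_samples, n_phonemes = 400, 8
# e.g. flattened trajectories of the T1, T4, UL, LL sensors
X = rng.normal(size=(n_samples, 4 * 30))
y = rng.integers(0, n_phonemes, n_samples)
# inject a weak class-dependent shift so the toy problem is learnable
X[:, 0] += 0.8 * y

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```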
Regional positioning using a low Earth orbit satellite constellation
NASA Astrophysics Data System (ADS)
Shtark, Tomer; Gurfil, Pini
2018-02-01
Global and regional satellite navigation systems are constellations orbiting the Earth and transmitting radio signals for determining position and velocity of users around the globe. The state-of-the-art navigation satellite systems are located in medium Earth orbits and geosynchronous Earth orbits and are characterized by high launching, building and maintenance costs. For applications that require only regional coverage, the continuous and global coverage that existing systems provide may be unnecessary. Thus, a nano-satellites-based regional navigation satellite system in Low Earth Orbit (LEO), with significantly reduced launching, building and maintenance costs, can be considered. Thus, this paper is aimed at developing a LEO constellation optimization and design method, using genetic algorithms and gradient-based optimization. The preliminary results of this study include 268 LEO constellations, aimed at regional navigation in an approximately 1000 km × 1000 km area centered at the geographic coordinates [30, 30] degrees. The constellations performance is examined using simulations, and the figures of merit include total coverage time, revisit time, and geometric dilution of precision (GDOP) percentiles. The GDOP is a quantity that determines the positioning solution accuracy and solely depends on the spatial geometry of the satellites. Whereas the optimization method takes into account only the Earth's second zonal harmonic coefficient, the simulations include the Earth's gravitational field with zonal and tesseral harmonics up to degree 10 and order 10, Solar radiation pressure, drag, and the lunisolar gravitational perturbation.
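The GDOP figure of merit mentioned above can be computed directly from satellite-user geometry; the sketch below uses arbitrary illustrative ECEF coordinates, not an actual constellation design.

```python
# GDOP = sqrt(trace((H^T H)^(-1))), where each row of H holds the unit
# line-of-sight vector from the user to a visible satellite plus a clock term.
import numpy as np

def gdop(sat_positions, user_position):
    los = sat_positions - user_position                      # line-of-sight vectors
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)
    H = np.hstack([unit, np.ones((len(sat_positions), 1))])  # geometry matrix
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

user = np.array([4_500_000.0, 3_000_000.0, 3_000_000.0])     # near Earth's surface, m
sats = np.array([                                            # illustrative LEO-like positions
    [5_000_000.0, 3_500_000.0, 3_500_000.0],
    [4_000_000.0, 4_500_000.0, 3_000_000.0],
    [5_500_000.0, 2_000_000.0, 4_000_000.0],
    [4_500_000.0, 3_000_000.0, 5_000_000.0],
    [3_500_000.0, 3_500_000.0, 4_500_000.0],
])
print("GDOP: %.2f" % gdop(sats, user))
```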
Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang
2017-04-26
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels, which were located neighboring the target class in the spectral feature space. The overall accuracies for wheat and bare land achieved were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to the untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits using the optimal parameters, tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
Energy Technology Allocation for Distributed Energy Resources: A Technology-Policy Framework
NASA Astrophysics Data System (ADS)
Mallikarjun, Sreekanth
Distributed energy resources (DER) are emerging rapidly. New engineering technologies, materials, and designs improve the performance and extend the range of locations for DER. In contrast, constructing new or modernizing existing high voltage transmission lines for centralized generation are expensive and challenging. In addition, customer demand for reliability has increased and concerns about climate change have created a pull for swift renewable energy penetration. In this context, DER policy makers, developers, and users are interested in determining which energy technologies to use to accommodate different end-use energy demands. We present a two-stage multi-objective strategic technology-policy framework for determining the optimal energy technology allocation for DER. The framework simultaneously considers economic, technical, and environmental objectives. The first stage utilizes a Data Envelopment Analysis model for each end-use to evaluate the performance of each energy technology based on the three objectives. The second stage incorporates factor efficiencies determined in the first stage, capacity limitations, dispatchability, and renewable penetration for each technology, and demand for each end-use into a bottleneck multi-criteria decision model which provides the Pareto-optimal energy resource allocation. We conduct several case studies to understand the roles of various distributed energy technologies in different scenarios. We construct some policy implications based on the model results of set of case studies.
A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization
NASA Astrophysics Data System (ADS)
Quan, Ning; Kim, Harrison M.
2018-03-01
The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well-suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for small to medium sized problems considered in this article.
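An illustrative sketch of the greedy heuristic whose optimality gap such an upper bound helps assess, on synthetic node and edge coefficients rather than an actual wind farm layout model.

```python
# Greedy heuristic for a QKP-style layout problem: repeatedly add the turbine
# location with the best marginal gain (node value plus interaction terms with
# already-selected locations) until the cardinality limit is reached.
import numpy as np

rng = np.random.default_rng(7)
n, k = 30, 10                                   # candidate locations, turbines
node = rng.uniform(1.0, 2.0, n)                 # stand-alone power value
edge = -rng.uniform(0.0, 0.2, (n, n))           # pairwise wake-loss penalty
edge = np.triu(edge, 1) + np.triu(edge, 1).T    # symmetric, zero diagonal

selected = []
for _ in range(k):
    remaining = [j for j in range(n) if j not in selected]
    gains = [node[j] + edge[j, selected].sum() for j in remaining]
    selected.append(remaining[int(np.argmax(gains))])

value = node[selected].sum() + edge[np.ix_(selected, selected)].sum() / 2
print("selected locations:", sorted(selected))
print("objective value: %.3f" % value)
```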
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-04-19
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-01-01
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%. PMID:28422062
A Data Driven Pre-cooling Framework for Energy Cost Optimization in Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishwanath, Arun; Chandan, Vikas; Mendoza, Cameron
Commercial buildings consume a significant amount of energy. Facility managers are increasingly grappling with the problem of reducing their buildings’ peak power, overall energy consumption and energy bills. In this paper, we first develop an optimization framework – based on a gray box model for zone thermal dynamics – to determine a pre-cooling strategy that simultaneously shifts the peak power to low energy tariff regimes, and reduces both the peak power and overall energy consumption by exploiting the flexibility in a building’s thermal comfort range. We then evaluate the efficacy of the pre-cooling optimization framework by applying it to building management system data, spanning several days, obtained from a large commercial building located in a tropical region of the world. The results from simulations show that optimal pre-cooling reduces peak power by over 50%, energy consumption by up to 30% and energy bills by up to 37%. Next, to enable ease of use of our framework, we also propose a shortest path based heuristic algorithm for solving the optimization problem and show that it has comparable performance with the optimal solution. Finally, we describe an application of the proposed optimization framework for developing countries to reduce the dependency on expensive fossil fuels, which are often used as a source for energy backup. We conclude by highlighting our real world deployment of the optimal pre-cooling framework via a software service on the cloud platform of a major provider. Our pre-cooling methodology, based on the gray box optimization framework, incurs no capital expense and relies on data readily available from a building management system, thus enabling facility managers to take informed decisions for improving the energy and cost footprints of their buildings.
Stephanie A. Snyder; Jay H. Whitmore; Ingrid E. Schneider; Dennis R. Becker
2008-01-01
This paper presents a geographic information system (GIS)-based method for recreational trail location for all-terrain vehicles (ATVs) which considers environmental factors, as well as rider preferences for trail attributes. The method utilizes the Least-Cost Path algorithm within a GIS framework to optimize trail location. The trail location algorithm considered trail...
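As a rough illustration of least-cost path routing on a raster cost surface (plain Dijkstra on a synthetic grid, not the ATV trail data or GIS toolchain used in the paper):

```python
# Dijkstra on a 4-connected grid with a synthetic per-cell traversal cost
# (standing in for slope/sensitivity penalties in a GIS cost layer).
import heapq
import numpy as np

rng = np.random.default_rng(9)
cost = rng.uniform(1.0, 5.0, size=(40, 40))
start, goal = (0, 0), (39, 39)

dist = np.full(cost.shape, np.inf)
dist[start] = cost[start]
prev, pq = {}, [(cost[start], start)]
while pq:
    d, (r, c) = heapq.heappop(pq)
    if (r, c) == goal:
        break
    if d > dist[r, c]:
        continue
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
            nd = d + cost[nr, nc]
            if nd < dist[nr, nc]:
                dist[nr, nc] = nd
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (nd, (nr, nc)))

# backtrack the least-cost trail alignment
path, node = [goal], goal
while node != start:
    node = prev[node]
    path.append(node)
print("path length (cells):", len(path), "total cost: %.1f" % dist[goal])
```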
Ahn, Yongjun; Yeo, Hwasoo
2015-01-01
The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station’s density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles. PMID:26575845
NASA Astrophysics Data System (ADS)
Niakan, F.; Vahdani, B.; Mohammadi, M.
2015-12-01
This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs including transportation, fuel consumption and greenhouse emission costs, and finally maximizing the minimum service reliability. In the proposed model, it is assumed that two nodes can be connected by several types of arc, which differ in capacity, transportation mode, travel time, and transportation and construction costs. Moreover, in this model, determining the capacity of the hubs is part of the decision-making procedure, and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.
2018-06-01
The observation-to-observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed that uses an optimization-based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method are demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
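A stripped-down numerical analogue of this association test is sketched below: two uncertain regions are approximated as Gaussians, the summed squared Mahalanobis distance is minimized over a candidate state with scipy, and a chi-square gate decides association. The means, covariances, dimension and false-alarm rate are illustrative placeholders, and the gate is a heuristic stand-in rather than the paper's hypothesis test.

```python
"""Sketch of an association test between two uncertain state estimates: find the
state x minimizing the summed squared Mahalanobis distance to both Gaussian-
approximated regions, then gate with a chi-square threshold. All inputs are
illustrative placeholders."""
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)
dim = 4
mu1, mu2 = rng.normal(size=dim), rng.normal(size=dim) * 0.5
A1, A2 = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
P1, P2 = A1 @ A1.T + np.eye(dim), A2 @ A2.T + np.eye(dim)   # region covariances
P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)

def cost(x):
    d1, d2 = x - mu1, x - mu2
    return d1 @ P1i @ d1 + d2 @ P2i @ d2

res = minimize(cost, 0.5 * (mu1 + mu2), method="BFGS")
# Under the "same object" hypothesis the minimum is approximately chi^2 with
# `dim` degrees of freedom (heuristic gate; the paper's test differs in detail).
threshold = chi2.ppf(0.99, df=dim)
print("associated" if res.fun <= threshold else "not associated",
      f"(statistic={res.fun:.2f}, threshold={threshold:.2f})")
```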
Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin
NASA Astrophysics Data System (ADS)
Otiefy, R. A. H.; Negm, H. M.
2010-12-01
The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress the transonic wing box flutter, which is a flow-structure interaction phenomenon. The unsteady general frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing box structure and piezoelectric actuators are modeled using the equivalent plate method, which is based on the first-order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are collected from previous work. Three different control strategies are studied and compared: a linear quadratic Gaussian (LQG) controller, which combines the linear quadratic regulator (LQR) with a Kalman filter estimator (KFE); an optimal static output feedback (SOF) controller; and a classic feedback controller (CFC). The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and the norm of Kalman filter estimator gains (NKFEG), respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.
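For readers unfamiliar with the LQR building block inside an LQG design, the short sketch below computes an optimal state-feedback gain for a generic lightly damped two-state plant using scipy's continuous-time Riccati solver. The plant matrices and weights are placeholders, not the wing-box model of the paper.

```python
"""Minimal LQR gain computation of the kind used inside an LQG design.
The 2-state plant below is a generic placeholder, not the wing-box model."""
import numpy as np
from scipy.linalg import solve_continuous_are

# toy lightly damped oscillator with one actuator input (assumed)
A = np.array([[0.0, 1.0],
              [-4.0, -0.1]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting (assumed)
R = np.array([[0.1]])      # control weighting (assumed)

P = solve_continuous_are(A, B, Q, R)      # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)           # optimal state-feedback gain, u = -K x
closed_loop_poles = np.linalg.eigvals(A - B @ K)
print("LQR gain:", K, "\nclosed-loop poles:", closed_loop_poles)
```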
McKinnon, Adam D; Ozanne-Smith, Joan; Pope, Rodney
2009-05-01
Injury prevention guided by robust injury surveillance systems (ISSs) can effectively reduce military injury rates, but ISSs depend on human interaction. This study examined the experiences and requirements of key users of the Australian Defence Force (ADF) ISS to determine whether the operation of the ISS was optimal, whether there were any shortcomings, and if so, how these shortcomings might be addressed. Semistructured interviews were conducted with 18 Australian Defence Department participants located throughout Australia. Grounded theory methods were used to analyze the data by developing an understanding of processes and social phenomena related to injury surveillance systems within the military context. Interviews were recorded and professionally transcribed, and the information contained in the transcripts was analyzed using NVivo. Key themes relating to the components of an injury surveillance system were identified from the analysis. A range of processes and sociocultural factors influence the utility of military ISSs. These are discussed in detail and should be considered in the future design and operation of military ISSs to facilitate optimal outcomes for injury prevention.
Hasanvand, Hamed; Mozafari, Babak; Arvan, Mohammad R; Amraee, Turaj
2015-11-01
This paper addresses the application of a static Var compensator (SVC) to improve the damping of interarea oscillations. The optimal location and size of the SVC are defined using bifurcation and modal analysis to satisfy its primary application. Furthermore, the best input signal for the damping controller is selected using Hankel singular values and right half-plane zeros. The proposed approach aims to design a robust PI controller based on interval plants and Kharitonov's theorem. The objective here is to determine the stability region that attains robust stability, the desired phase margin, gain margin, and bandwidth. The intersection of the resulting stability regions yields the set of kp-ki parameters. In addition, an optimal multiobjective design of the PI controller using a particle swarm optimization (PSO) algorithm is presented. The effectiveness of the suggested controllers in damping local and interarea oscillation modes of a multimachine power system, over a wide range of loading conditions and system configurations, is confirmed through eigenvalue analysis and nonlinear time domain simulation.
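To make the PSO-based PI tuning step concrete, the sketch below tunes (kp, ki) on a simple second-order plant simulated by forward Euler, minimizing an ITAE-like step-response cost. The plant, gain bounds and swarm settings are illustrative assumptions and bear no relation to the SVC damping design in the paper.

```python
"""Toy particle swarm tuning of a PI controller on a simple second-order plant,
minimizing an ITAE-like step-response cost. Plant, bounds and PSO settings are
illustrative placeholders."""
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)

def itae(gains):
    kp, ki = gains
    # plant: G(s) = 1 / (s^2 + 2s + 1), integrated by forward Euler
    y = dy = integ = 0.0
    cost = 0.0
    for ti in t:
        e = 1.0 - y                      # unit step reference
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        ddy = u - 2.0 * dy - y
        dy += ddy * dt
        y += dy * dt
        if abs(y) > 1e6:                 # unstable gain combination, penalize
            return 1e9
        cost += ti * abs(e) * dt         # ITAE criterion
    return cost

n, iters = 20, 60
lo, hi = np.array([0.0, 0.0]), np.array([10.0, 10.0])
x = rng.uniform(lo, hi, size=(n, 2)); v = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([itae(p) for p in x])
g = pbest[np.argmin(pcost)]
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)   # PSO velocity update
    x = np.clip(x + v, lo, hi)
    c = np.array([itae(p) for p in x])
    improved = c < pcost
    pbest[improved], pcost[improved] = x[improved], c[improved]
    g = pbest[np.argmin(pcost)]
print(f"tuned kp={g[0]:.2f}, ki={g[1]:.2f}, ITAE={pcost.min():.3f}")
```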
Oreskovic, Nicolas M; Blossom, Jeff; Field, Alison E; Chiang, Sylvia R; Winickoff, Jonathan P; Kleinman, Ronald E
2012-05-01
National trends indicate that children and adolescents are not achieving sufficient levels of physical activity. Combining global positioning system (GPS) technology with accelerometers has the potential to provide an objective determination of the locations where youth engage in physical activity. The aim of this study was to identify the optimal methods for collecting combined accelerometer and GPS data in youth, to best locate where children spend time and are physically active. A convenience sample of 24 middle-school children in Massachusetts was included. Accelerometers and GPS units were used to quantify and locate childhood physical activity over 5 weekdays and 2 weekend days. Accelerometer and GPS data were joined by time and mapped with a geographical information system (GIS) using ArcGIS software. Data were collected in winter, spring and summer in 2009-2010, yielding a total of 26,406 matched data points overall. The matched data yield was low (19.1% total), regardless of season (winter, 12.8%; spring, 30.1%; summer, 14.3%). Teacher-provided, pre-charged equipment yielded the most matched data (30.1%; range: 10.1-52.3%) and the greatest average days (6.1 days) of data. Across all seasons, children spent most of their time at home. Outdoor use patterns appeared to vary by season, with street use increasing in spring, and park and playground use increasing in summer. Children spent equal amounts of physical activity time at home and walking in the streets. Overall, the various methods for combining GPS and accelerometer data provided similarly low amounts of combined data. No combined GPS and accelerometer data collection method proved superior in every data return category, but the use of GIS to map joined accelerometer and GPS data can demarcate childhood physical activity locations.
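The "joined by time" step can be illustrated with a nearest-timestamp merge; a minimal pandas example is shown below. The column names, timestamps and the 30-second matching tolerance are assumptions for illustration, not the study's exact settings.

```python
"""Minimal example of joining accelerometer epochs to GPS fixes by time, before
mapping in GIS. Column names and the 30 s tolerance are assumptions."""
import pandas as pd

accel = pd.DataFrame({
    "timestamp": pd.to_datetime(["2009-06-01 10:00:00", "2009-06-01 10:01:00",
                                 "2009-06-01 10:02:00"]),
    "counts": [350, 1200, 80],
})
gps = pd.DataFrame({
    "timestamp": pd.to_datetime(["2009-06-01 10:00:05", "2009-06-01 10:01:02",
                                 "2009-06-01 10:03:30"]),
    "lat": [42.3601, 42.3605, 42.3610],
    "lon": [-71.0589, -71.0585, -71.0580],
})

# nearest-timestamp join; epochs without a GPS fix within 30 s remain unmatched
matched = pd.merge_asof(accel.sort_values("timestamp"),
                        gps.sort_values("timestamp"),
                        on="timestamp", direction="nearest",
                        tolerance=pd.Timedelta("30s"))
yield_pct = 100 * matched["lat"].notna().mean()
print(matched)
print(f"matched data yield: {yield_pct:.1f}%")
```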
Kremen, Arie; Tsompanakis, Yiannis
2010-04-01
The slope stability of a proposed vertical extension of a balefill was investigated in the present study, in an attempt to determine a geotechnically conservative design, compliant with New Jersey Department of Environmental Protection regulations, to maximize the utilization of unclaimed disposal capacity. Conventional geotechnical analytical methods are generally limited to well-defined failure modes, which may not occur in landfills or balefills due to the presence of preferential slip surfaces. In addition, these models assume an a priori stress distribution to solve essentially indeterminate problems. In this work, a different approach has been applied, which avoids several of the drawbacks of conventional methods. Specifically, the analysis was performed in a two-stage process: (a) calculation of the stress distribution, and (b) application of an optimization technique to identify the most probable failure surface. The stress analysis was performed using a finite element formulation, and the failure surface was located using a dynamic programming optimization method. A sensitivity analysis was performed to evaluate the effect of the various waste strength parameters of the underlying mathematical model on the results, namely the factor of safety of the landfill. Although this study focuses on the stability investigation of an expanded balefill, the methodology presented can easily be applied to general geotechnical investigations.
A Discrete Fruit Fly Optimization Algorithm for the Traveling Salesman Problem.
Jiang, Zi-Bin; Yang, Qiong
2016-01-01
The fruit fly optimization algorithm (FOA) is a newly developed bio-inspired algorithm. The continuous variant of FOA has been proven to be a powerful evolutionary approach to determining the optima of a numerical function on a continuous definition domain. In this study, a discrete FOA (DFOA) is developed and applied to the traveling salesman problem (TSP), a common combinatorial problem. In the DFOA, the TSP tour is represented by an ordering of city indices, and the bio-inspired meta-heuristic search processes are executed with two elaborately designed main procedures: the smelling and tasting processes. In the smelling process, an effective crossover operator is used by the fruit fly group to search for the neighbors of the best-known swarm location. During the tasting process, an edge intersection elimination (EXE) operator is designed to improve the neighbors of the non-optimum food location in order to enhance the exploration performance of the DFOA. In addition, benchmark instances from the TSPLIB are classified in order to test the searching ability of the proposed algorithm. Furthermore, the effectiveness of the proposed DFOA is compared to that of other meta-heuristic algorithms. The results indicate that the proposed DFOA can be effectively used to solve TSPs, especially large-scale problems.
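A greatly simplified, fruit-fly-style search loop for the TSP is sketched below: the "smell" step generates neighbours of the best-known tour via order crossover with random tours, and the "taste" step applies a 2-opt pass. This is only a sketch of the general idea; it does not reproduce the paper's DFOA operators (for example its EXE operator), and the instance is a small random one rather than a TSPLIB benchmark.

```python
"""Simplified discrete fruit-fly-style search for a small random TSP instance."""
import random

random.seed(0)
CITIES = [(random.random(), random.random()) for _ in range(25)]

def length(tour):
    return sum(((CITIES[tour[i]][0] - CITIES[tour[i - 1]][0]) ** 2 +
                (CITIES[tour[i]][1] - CITIES[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

def order_crossover(p1, p2):
    """Keep a random slice of p1, fill the remaining cities in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]; idx += 1
    return child

def two_opt_once(tour):
    """Single 2-opt pass: reverse a segment whenever it shortens the tour."""
    best, best_len = tour, length(tour)
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            cand_len = length(cand)
            if cand_len < best_len:
                best, best_len = cand, cand_len
    return best

best = list(range(len(CITIES)))
random.shuffle(best)
for _ in range(60):                                   # generations
    flies = [order_crossover(best, random.sample(range(len(CITIES)), len(CITIES)))
             for _ in range(10)]                      # "smell": neighbours of best tour
    flies = [two_opt_once(f) for f in flies]          # "taste": local improvement
    cand = min(flies, key=length)
    if length(cand) < length(best):
        best = cand
print(f"best tour length: {length(best):.3f}")
```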
NASA Astrophysics Data System (ADS)
Feng, X.; Sheng, Y.; Condon, A. J.; Paramygin, V. A.; Hall, T.
2012-12-01
A cost-effective method, JPM-OS (Joint Probability Method with Optimal Sampling), for determining storm response and inundation return frequencies was developed and applied to quantify the hazard of hurricane storm surges and inundation along the Southwest FL, US coast (Condon and Sheng 2012). The JPM-OS uses piecewise multivariate regression splines coupled with dimension-adaptive sparse grids to enable the generation of a base flood elevation (BFE) map. Storms are characterized by their landfall characteristics (pressure deficit, radius to maximum winds, forward speed, heading, and landfall location), and a sparse grid algorithm determines the optimal set of storm parameter combinations so that the inundation from any other storm parameter combination can be determined. The end result is a sample of a few hundred (197 for SW FL) optimal storms, which are simulated using the dynamically coupled storm surge/wave modeling system CH3D-SSMS (Sheng et al. 2010). The limited historical climatology (1940-2009) is explored to develop probabilistic characterizations of the five storm parameters. The probability distributions are discretized, and the inundation response of all parameter combinations is determined by interpolation in the five-dimensional space of the optimal storms. The surge response and the associated joint probability of each parameter combination are used to determine the flood elevation with a 1% annual probability of occurrence. The limited historical data constrains the accuracy of the PDFs of the hurricane characteristics, which in turn affects the accuracy of the calculated BFE maps. To offset the deficiency of the limited historical dataset, this study presents a different method for producing coastal inundation maps. Instead of using the historical storm data, here we adopt 33,731 tracks that represent the storm climatology in the North Atlantic basin and along the SW Florida coast. This large quantity of hurricane tracks is generated from a new statistical model which had been used for Western North Pacific (WNP) tropical cyclone (TC) genesis (Hall 2011) as well as North Atlantic tropical cyclone genesis (Hall and Jewson 2007). The introduction of these tracks compensates for the shortage of historical samples and allows for the more reliable PDFs required for implementation of JPM-OS. Using the 33,731 tracks and JPM-OS, an optimal storm ensemble is determined. This approach results in different storms/winds for storm surge and inundation modeling, and produces different Base Flood Elevation maps for coastal regions. Coastal inundation maps produced by the two different methods will be discussed in detail in the poster paper.
Estimation of brain network ictogenicity predicts outcome from epilepsy surgery
NASA Astrophysics Data System (ADS)
Goodfellow, M.; Rummel, C.; Abela, E.; Richardson, M. P.; Schindler, K.; Terry, J. R.
2016-07-01
Surgery is a valuable option for pharmacologically intractable epilepsy. However, significant post-operative improvements are not always attained. This is due in part to our incomplete understanding of the seizure generating (ictogenic) capabilities of brain networks. Here we introduce an in silico, model-based framework to study the effects of surgery within ictogenic brain networks. We find that factors conventionally determining the region of tissue to resect, such as the location of focal brain lesions or the presence of epileptiform rhythms, do not necessarily predict the best resection strategy. We validate our framework by analysing electrocorticogram (ECoG) recordings from patients who have undergone epilepsy surgery. We find that when post-operative outcome is good, model predictions for optimal strategies align better with the actual surgery undertaken than when post-operative outcome is poor. Crucially, this allows the prediction of optimal surgical strategies and the provision of quantitative prognoses for patients undergoing epilepsy surgery.
Chen, Yanxi; Niu, Zhiguang; Zhang, Hongwei
2013-06-01
Landscape lakes in cities face a high eutrophication risk because of their special characteristics and functions in the urban water circulation system. Using HMLA, a landscape lake located in Tianjin City, North China, subject to a mixture of point source (PS) and non-point source (NPS) pollution, we applied Fluent and AQUATOX to simulate and predict the state of HMLA, and a trophic index was used to assess the eutrophication state. We then used water compensation optimization and three scenarios to determine the optimal management methodology: an ecological restoration scenario, a best management practices (BMPs) scenario, and a scenario combining both. Our results suggest that maintaining a healthy ecosystem through ecoremediation is necessary and that BMPs have a far-reaching effect on water reuse and NPS pollution control. This study has implications for eutrophication control and management amid ongoing urbanization in China.
Stockpiling Ventilators for Influenza Pandemics
Araz, Ozgur M.; Morton, David P.; Johnson, Gregory P.; Damien, Paul; Clements, Bruce; Meyers, Lauren Ancel
2017-01-01
In preparing for influenza pandemics, public health agencies stockpile critical medical resources. Determining appropriate quantities and locations for such resources can be challenging, given the considerable uncertainty in the timing and severity of future pandemics. We introduce a method for optimizing stockpiles of mechanical ventilators, which are critical for treating hospitalized influenza patients in respiratory failure. As a case study, we consider the US state of Texas during mild, moderate, and severe pandemics. Optimal allocations prioritize local over central storage, even though the latter can be deployed adaptively, on the basis of real-time needs. This prioritization stems from high geographic correlations and the slightly lower treatment success assumed for centrally stockpiled ventilators. We developed our model and analysis in collaboration with academic researchers and a state public health agency and incorporated it into a Web-based decision-support tool for pandemic preparedness and response. PMID:28518041
A non-destructive selection criterion for fibre content in jute : II. Regression approach.
Arunachalam, V; Iyer, R D
1974-01-01
An experiment with ten populations of jute, comprising varieties and mutants of the two species Corchorus olitorius and C. capsularis, was conducted at two different locations with the object of evolving an effective criterion for selecting superior single plants for fibre yield. At Delhi, variation existed only between varieties as a group and mutants as a group, while at Pusa variation also existed among the mutant populations of C. capsularis. A multiple regression approach was used to find the optimum combination of characters for prediction of fibre yield. A process of successive elimination of characters, based on the coefficient of determination provided by individual regression equations, was employed to arrive at the optimal set of characters for predicting fibre yield. It was found that plant height, basal and mid-diameters, and basal and mid-dry fibre weights would provide such an optimal set.
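The successive-elimination idea can be sketched with a small backward-elimination loop driven by the drop in the coefficient of determination. The synthetic data, trait names and stopping threshold below are purely illustrative assumptions, not the jute dataset or the paper's exact procedure.

```python
"""Sketch of successive (backward) elimination of predictors based on the
coefficient of determination, on synthetic data with hypothetical trait names."""
import numpy as np

rng = np.random.default_rng(2)
traits = ["height", "basal_diam", "mid_diam", "basal_fibre", "mid_fibre",
          "leaf_count"]
n = 300
X = rng.normal(size=(n, len(traits)))
# synthetic "fibre yield": leaf_count carries no signal by construction
y = X[:, :5] @ np.array([1.0, 0.8, 0.6, 1.2, 0.9]) + rng.normal(0, 0.5, n)

def r_squared(Xs, y):
    Xd = np.column_stack([np.ones(len(y)), Xs])          # add intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

kept = list(range(len(traits)))
while len(kept) > 1:
    full = r_squared(X[:, kept], y)
    # drop the predictor whose removal reduces R^2 the least
    losses = [(full - r_squared(X[:, [k for k in kept if k != j]], y), j)
              for j in kept]
    loss, j = min(losses)
    if loss > 0.02:       # stop once every remaining trait matters (threshold assumed)
        break
    kept.remove(j)
print("selected traits:", [traits[k] for k in kept], f"R^2={full:.3f}")
```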
Optimizing spectral wave estimates with adjoint-based sensitivity maps
NASA Astrophysics Data System (ADS)
Orzech, Mark; Veeramony, Jay; Flampouris, Stylianos
2014-04-01
A discrete numerical adjoint has recently been developed for the stochastic wave model SWAN. In the present study, this adjoint code is used to construct spectral sensitivity maps for two nearshore domains. The maps display the correlations of spectral energy levels throughout the domain with the observed energy levels at a selected location or region of interest (LOI/ROI), providing a full spectrum of values at all locations in the domain. We investigate the effectiveness of sensitivity maps based on significant wave height (Hs) in determining alternate offshore instrument deployment sites when a chosen nearshore location or region is inaccessible. Wave and bathymetry datasets are employed from one shallower, small-scale domain (Duck, NC) and one deeper, larger-scale domain (San Diego, CA). The effects of seasonal changes in wave climate, errors in bathymetry, and multiple assimilation points on sensitivity map shapes and model performance are investigated. Model accuracy is evaluated by comparing spectral statistics as well as with an RMS skill score, which estimates a mean model-data error across all spectral bins. Results indicate that data assimilation from identified high-sensitivity alternate locations consistently improves model performance at nearshore LOIs, while assimilation from low-sensitivity locations results in lesser or no improvement. Use of sub-sampled or alongshore-averaged bathymetry has a domain-specific effect on model performance when assimilating from a high-sensitivity alternate location. When multiple alternate assimilation locations are used from areas of lower sensitivity, model performance may be worse than with a single, high-sensitivity assimilation point.
Central Plant Optimization for Waste Energy Reduction (CPOWER). ESTCP Cost and Performance Report
2016-12-01
[Partially recoverable report excerpt] The solar radiation data did not appear reliable in the weather dataset for the location, and hence it was not used in the regression models; additional factors (e.g., solar insolation) may be needed to obtain a better model. 2. Inputs to optimizer: during several periods of ... Cost summary: Location: North Carolina; Analysis type: FEMP; Energy consumption cost savings: $443,698.00; PV of total savings: $215,698.00; Base date: April 1.
Optimal networks of future gravitational-wave telescopes
NASA Astrophysics Data System (ADS)
Raffai, Péter; Gondán, László; Heng, Ik Siong; Kelecsényi, Nándor; Logue, Josh; Márka, Zsuzsa; Márka, Szabolcs
2013-08-01
We aim to find the optimal site locations for a hypothetical network of 1-3 triangular gravitational-wave telescopes. We define the following N-telescope figures of merit (FoMs) and construct three corresponding metrics: (a) capability of reconstructing the signal polarization; (b) accuracy in source localization; and (c) accuracy in reconstructing the parameters of a standard binary source. We also define a combined metric that takes into account the three FoMs with practically equal weight. After constructing a geomap of possible telescope sites, we give the optimal 2-telescope networks for the four FoMs separately in example cases where the location of the first telescope has been predetermined. We found that, based on the combined metric, placing the first telescope in Australia provides the most options for optimal site selection when extending the network with a second instrument. We suggest geographical regions where a potential second and third telescope could be placed to get optimal network performance in terms of our FoMs. Additionally, we use a similar approach to find the optimal location and orientation for the proposed LIGO-India detector within a five-detector network with Advanced LIGO (Hanford), Advanced LIGO (Livingston), Advanced Virgo, and KAGRA. We found that the FoMs do not change greatly across sites within India, though the network can suffer a significant loss in reconstructing signal polarizations if the orientation angle of an L-shaped LIGO-India is not set to the optimal value of ∼58.2° (+ k × 90°) (measured counterclockwise from East to the bisector of the arms).
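The ranking step with an equally weighted combined metric can be illustrated as below: each figure of merit is normalized across candidate sites and then averaged. The site names and FoM values are invented for illustration only and are not results from the paper.

```python
"""Sketch of ranking candidate sites by an equally weighted combination of
normalized figures of merit. Site names and FoM values are invented."""
import numpy as np

sites = ["Site A", "Site B", "Site C", "Site D"]
# rows: candidate sites; columns: polarization, localization, parameter FoMs (higher = better)
fom = np.array([[0.90, 0.85, 0.80],
                [0.70, 0.75, 0.85],
                [0.65, 0.80, 0.70],
                [0.80, 0.70, 0.75]])

# normalize each FoM to [0, 1] across sites, then average with equal weights
norm = (fom - fom.min(axis=0)) / (fom.max(axis=0) - fom.min(axis=0))
combined = norm.mean(axis=1)
for s, c in sorted(zip(sites, combined), key=lambda sc: -sc[1]):
    print(f"{s:10s} combined metric = {c:.2f}")
```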
Optimization of wind plant layouts using an adjoint approach
King, Ryan N.; Dykes, Katherine; Graf, Peter; ...
2017-03-10
Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
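A toy illustration of gradient-based turbine siting is given below, using a simplified Jensen-type wake deficit and scipy's L-BFGS-B with finite-difference gradients. It conveys only the general idea; it is not the RANS/adjoint formulation of the paper, whose gradient cost is independent of the number of turbines, and every parameter is an assumed placeholder.

```python
"""Toy 1-D layout optimization with a simplified Jensen-type wake model and a
gradient-based optimizer (finite-difference gradients, not an adjoint)."""
import numpy as np
from scipy.optimize import minimize

N, D, K = 6, 100.0, 0.05          # turbines, rotor diameter [m], wake decay (assumed)
U0, SPAN = 8.0, 4000.0            # inflow speed [m/s], domain length [m]

def farm_power(x):
    """Wind blows in +x direction: turbine j wakes turbine i if x_j < x_i."""
    power = 0.0
    for i in range(N):
        deficit2 = 0.0
        for j in range(N):
            dx = x[i] - x[j]
            if dx > 0:                                  # i is downstream of j
                d = 2.0 / 3.0 * (D / (D + 2 * K * dx)) ** 2
                deficit2 += d * d                       # sum-of-squares wake combination
        u = U0 * (1.0 - np.sqrt(deficit2))
        power += max(u, 0.0) ** 3                       # P ~ u^3 (constants dropped)
    return power

def objective(x):
    # soft penalty keeping turbines at least two diameters apart
    spacing_penalty = sum(max(0.0, 2 * D - abs(x[i] - x[j])) ** 2
                          for i in range(N) for j in range(i + 1, N))
    return -farm_power(x) + 1e-1 * spacing_penalty

x0 = np.linspace(200.0, 1000.0, N)                      # clustered initial layout
res = minimize(objective, x0, method="L-BFGS-B", bounds=[(0.0, SPAN)] * N)
print("optimized x-positions [m]:", np.round(np.sort(res.x), 1))
```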
NASA Astrophysics Data System (ADS)
Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.
2015-12-01
We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
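The randomized trace estimation ingredient can be shown in isolation with a Hutchinson-type estimator on an explicit symmetric positive-definite matrix, as below. In the OED setting described above the covariance is available only through PDE solves, which is precisely why randomized matrix-free estimation matters; the matrix and sample count here are illustrative stand-ins.

```python
"""Standalone illustration of randomized (Hutchinson) trace estimation, the
ingredient used to evaluate an A-optimal design criterion. The explicit matrix
below is a stand-in for a covariance available only through its action."""
import numpy as np

rng = np.random.default_rng(3)
n = 500
A = rng.normal(size=(n, n))
cov = A @ A.T / n + np.eye(n)          # stand-in "posterior covariance" (SPD)

def hutchinson_trace(matvec, dim, n_samples=50):
    """Estimate tr(C) as the average of z^T C z over Rademacher vectors z."""
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=dim)
        est += z @ matvec(z)
    return est / n_samples

exact = np.trace(cov)
approx = hutchinson_trace(lambda v: cov @ v, n)
print(f"exact trace {exact:.1f}, randomized estimate {approx:.1f}")
```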
Dabrowski, Marcin; Cieplak, Maciej; Sharma, Piyush Sindhu; Borowicz, Pawel; Noworyta, Krzysztof; Lisowski, Wojciech; D'Souza, Francis; Kuhn, Alexander; Kutner, Wlodzimierz
2017-08-15
Nanostructured artificial receptor materials with an unprecedented hierarchical structure for determination of human serum albumin (HSA) are designed and fabricated. For that purpose, a new hierarchical template is prepared. This template allows simultaneous structural control of the deposited molecularly imprinted polymer (MIP) film on three length scales. Colloidal crystal templating with optimized electrochemical polymerization of 2,3'-bithiophene enables deposition of an MIP film in the form of an inverse opal. The thickness of the deposited polymer film is precisely controlled by the number of current oscillations during potentiostatic deposition of the imprinted poly(2,3'-bithiophene) film. Prior immobilization of HSA on the colloidal crystal allows the formation of molecularly imprinted cavities exclusively on the internal surface of the pores. Furthermore, all binding sites are located on the surface of the imprinted cavities at locations corresponding to the positions of functional groups present on the surface of HSA molecules, due to prior derivatization of HSA molecules with appropriate functional monomers. This synergistic strategy results in a material with superior recognition performance. Integration of the MIP film as a recognition unit with a sensitive extended-gate field-effect transistor (EG-FET) transducer leads to highly selective HSA determination in the femtomolar concentration range.
Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.
Wong, Christopher Yee; Mills, James K
2017-03-01
Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The objective is to develop a method for the automation and optimization of multipulse LZD applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest away from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos with positive results, as adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge proper LZD procedure. Automation of LZD removes human error to increase the success rate of LZD. Although the proposed methods are developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or nonembryonic cells.
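The geometric intuition behind the first optimization stage can be sketched as below: pick the candidate site on the zona, modelled as a circle, that maximizes the distance to the nearest blastomere centre. The coordinates and radius are invented, and the sketch omits the paper's computer-vision detection and the GA/thermal-model second stage.

```python
"""Toy version of the first optimization stage only: choose the ablation site on
the zona (modelled as a circle) farthest from given blastomere centres."""
import numpy as np

zona_center, zona_radius = np.array([0.0, 0.0]), 50.0      # microns (assumed)
blastomeres = np.array([[-15.0, 10.0], [12.0, -8.0], [5.0, 20.0]])  # invented centres

angles = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
sites = zona_center + zona_radius * np.column_stack([np.cos(angles),
                                                     np.sin(angles)])
# distance from each candidate site to its nearest blastomere centre
nearest = np.min(np.linalg.norm(sites[:, None, :] - blastomeres[None, :, :],
                                axis=2), axis=1)
best = sites[np.argmax(nearest)]
print(f"ablation site: ({best[0]:.1f}, {best[1]:.1f}) um, "
      f"clearance {nearest.max():.1f} um")
```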
Improving the performance of surgery-based clinical pathways: a simulation-optimization approach.
Ozcan, Yasar A; Tànfani, Elena; Testi, Angela
2017-03-01
This paper aims to improve the performance of clinical processes using clinical pathways (CPs). The specific goal of this research is to develop a decision support tool, based on a simulation-optimization approach, which identifies the proper adjustment and alignment of resources to achieve better performance for both the patients and the health-care facility. When multiple perspectives are present in a decision problem, critical issues arise and often require the balancing of goals. In our approach, to meet patients' clinical needs in a timely manner and to avoid worsening of clinical conditions, we assess the appropriate level of resources. The simulation-optimization model seeks and evaluates alternative resource configurations aimed at balancing the two main objectives: meeting patient needs and optimal utilization of beds and operating rooms. Using primary data collected at a Department of Surgery of a public hospital located in Genoa, Italy, the simulation-optimization modelling approach has been applied to evaluate the thyroid surgical treatment together with the other surgery-based CPs. The low rate of bed utilization and the long elective waiting lists of the specialty under study indicate that the wards were oversized while the operating room capacity was the bottleneck of the system. The model enables hospital managers to determine which objective has to be given priority, as well as the corresponding opportunity costs.
Application of pattern recognition techniques to crime analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, C.F.; Cox, L.A. Jr.; Chappell, G.A.
1976-08-15
The initial goal was to evaluate the capabilities of current pattern recognition techniques when applied to existing computerized crime data. Performance was to be evaluated both in terms of the system's capability to predict crimes and its capability to optimize police manpower allocation. A relation was sought to predict a crime's susceptibility to solution, based on knowledge of the crime type, location, time, etc. The preliminary results of this work are discussed. They indicate that automatic crime analysis involving pattern recognition techniques is feasible, and that efforts to determine optimum variables and techniques are warranted.
Optical trapping performance of dielectric-metallic patchy particles
Lawson, Joseph L.; Jenness, Nathan J.; Clark, Robert L.
2015-01-01
We demonstrate a series of simulation experiments examining the optical trapping behavior of composite micro-particles consisting of a small metallic patch on a spherical dielectric bead. A full parameter space of patch shapes, based on current state-of-the-art manufacturing techniques, and of the optical properties of the metallic film stack is examined. Stable trapping locations and the optical trap stiffness of these particles are determined based on the particle design, and potential design optimizations are discussed. A final test is performed examining the ability to incorporate these composite particles with standard optical trap metrology technologies. PMID:26832054