Generalized local emission tomography
Katsevich, Alexander J.
1998-01-01
Emission tomography enables locations and values of internal isotope density distributions to be determined from radiation emitted from the whole object. In the method for determining the values of discontinuities, the intensities of radiation emitted from either the whole object or a region of the object containing the discontinuities are input to a local tomography function f_Λ^(Φ) to define the location S of the isotope density discontinuity. The asymptotic behavior of f_Λ^(Φ) is determined in a neighborhood of S, and the value of the discontinuity is estimated from this asymptotic behavior, given pointwise values of the attenuation coefficient within the object. In the method for determining the location of the discontinuity, the intensities of radiation emitted from the object are input to a local tomography function f_Λ^(Φ) to define the location S of the density discontinuity and the location Γ of the attenuation coefficient discontinuity. Pointwise values of the attenuation coefficient within the object need not be known in this case.
Server-Side JavaScript Debugging: Viewing the Contents of an Object
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, J.; Simons, R.
1999-04-21
JavaScript allows the definition and use of large, complex objects. Unlike some other object-oriented languages, it also allows run-time modification not only of the values of object components but also of the very structure of the object itself. This feature is powerful and sometimes very convenient, but it can make it difficult to keep track of the object's structure and values throughout program execution. What is needed is a simple way to view the current state of an object at any point during execution. The Netscape server-side JavaScript environment includes a debug function for this purpose: it outputs the value(s) of the expression given as its argument in the JavaScript Application Manager's debug window [SSJS].
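The Netscape environment described above is long obsolete, but the idea of a recursive object-state dump is easy to sketch. The following Python analogue is an illustration only; the `dump` helper and the sample `customer` object are assumptions, not the [SSJS] API:

```python
def dump(obj, indent=0):
    """Recursively print the structure and values of an object,
    in the spirit of the server-side JavaScript debug() helper.
    Handles dicts, lists, and plain objects with attributes."""
    pad = "  " * indent
    if isinstance(obj, dict):
        for key, value in obj.items():
            if isinstance(value, (dict, list)):
                print(f"{pad}{key}:")
                dump(value, indent + 1)
            else:
                print(f"{pad}{key}: {value!r}")
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            print(f"{pad}[{i}]: {value!r}")
    else:
        # Fall back to the object's attribute dictionary.
        for key in sorted(vars(obj)):
            print(f"{pad}{key}: {getattr(obj, key)!r}")

customer = {"name": "Ada", "orders": [101, 102], "address": {"city": "London"}}
dump(customer)
```

Calling `dump` at any point during execution shows the object's current structure, including components added or removed at run time.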
Graphical representation of robot grasping quality measures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varma, V.; Tasch, U.
1993-11-01
When an object is held by a multi-fingered hand, the values of the contact forces can be multivalued. An objective function, used in conjunction with the frictional and geometric constraints of the grasp, can however give a unique set of finger force values. The selection of the objective function for determining the finger forces depends on the type of grasp required, the material properties of the object, and the limitations of the robot fingers. In this paper several optimization functions are studied and their merits highlighted. A graphical representation of the finger force values and the objective function is introduced that enables one to select and compare various grasping configurations. The impending motion of the object at different torque and finger force values is determined by observing the normalized coefficient-of-friction plots.
Accelerated iterative beam angle selection in IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan
2016-03-15
Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one by one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time-consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement in the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest selecting candidate beams not on the basis of the FMO objective function value after convergence but (1) on the basis of the objective function value after five FMO iterations of a gradient-based algorithm and (2) on the basis of a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated in terms of the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest because noncoplanar setups may require additional attention from the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields results similar to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude.
With regard to the clinical delivery of noncoplanar IMRT treatments, we show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option for implementing automated BAS at clinically acceptable computation times.
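The greedy selection loop described above can be sketched independently of any treatment-planning system. In this illustrative Python sketch, `score` stands in for the FMO objective value or one of its cheap surrogates; the `spread_score` surrogate, which simply favors evenly spaced gantry angles, is an invented toy, not the paper's method:

```python
def greedy_beam_selection(candidates, n_beams, score):
    """Naive iterative BAS: grow the ensemble one beam at a time,
    keeping the candidate whose score is lowest (lower = better).
    `score(ensemble)` stands in for the FMO objective or a surrogate
    such as the value after a few gradient iterations."""
    ensemble = []
    for _ in range(n_beams):
        best = min((c for c in candidates if c not in ensemble),
                   key=lambda c: score(ensemble + [c]))
        ensemble.append(best)
    return ensemble

def spread_score(ensemble):
    """Toy surrogate: penalize uneven gaps between gantry angles
    (0 means perfectly even spacing around the circle)."""
    if len(ensemble) < 2:
        return 0.0
    angles = sorted(ensemble)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(360 - angles[-1] + angles[0])  # wrap-around gap
    return max(gaps) - min(gaps)

candidates = list(range(0, 360, 10))  # candidate gantry angles in degrees
print(greedy_beam_selection(candidates, 5, spread_score))
```

The point of the paper is that replacing a fully converged FMO solve inside `score` with a few-iteration or first-gradient surrogate barely changes which beams get picked, while cutting the cost of each candidate evaluation dramatically.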
Katsevich, Alexander J.; Ramm, Alexander G.
1996-01-01
Local tomography is enhanced to determine the location and value of a discontinuity between a first internal density of an object and a second density of a region within the object. A beam of radiation is directed in a predetermined pattern through the region of the object containing the discontinuity. Relative attenuation data of the beam is determined within the predetermined pattern, having a first data component that includes attenuation data through the region. In a first method for evaluating the value of the discontinuity, the relative attenuation data is input to a local tomography function f_Λ to define the location S of the density discontinuity. The asymptotic behavior of f_Λ is determined in a neighborhood of S, and the value of the discontinuity is estimated from this asymptotic behavior. In a second method for evaluating the value of the discontinuity, a gradient value for a mollified local tomography function ∇f_Λε(x_ij) is determined along the discontinuity, and the value of the jump of the density across the discontinuity curve (or surface) S is estimated from the gradient values.
NASA Astrophysics Data System (ADS)
Aittokoski, Timo; Miettinen, Kaisa
2008-07-01
Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.
2014-12-26
additive value function, which assumes mutual preferential independence (Gregory S. Parnell, 2013). In other words, this method can be used if the... additive value function method to calculate the aggregate value of multiple objectives. Step 9 : Sensitivity Analysis Once the global values are...gravity metric, the additive method will be applied using equal weights for each axis value function. Pilot Satisfaction (Usability) As expressed
NASA Astrophysics Data System (ADS)
Kostyukov, V. N.; Naumenko, A. P.; Kudryavtseva, I. S.
2018-01-01
Improving the criteria that distinguish defects of machinery and mechanisms from vibroacoustic signals is a current problem in technical diagnostics. The objective of this work is the assessment of instantaneous values by methods of statistical decision-making theory, and of the risk associated with regulatory limits on the characteristic function modulus. The modulus of the characteristic function is determined for a fixed value of the characteristic function parameter. It is possible to determine limits of the modulus that correspond to different machine conditions, and the modulus values are used as diagnostic features in vibration diagnostics and monitoring systems. Using statistical decision-making methods such as minimum number of wrong decisions, maximum likelihood, minimax, and Neyman-Pearson, limits of the characteristic function modulus are determined that separate the conditions of a diagnosed object.
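As a sketch of the central quantity, the modulus of the empirical characteristic function at a fixed parameter ω can be computed directly from signal samples. This minimal Python illustration is an assumption of the general formula, not the authors' implementation; the toy signals are invented:

```python
import cmath

def ecf_modulus(samples, omega):
    """Modulus of the empirical characteristic function
    |θ(ω)| = |(1/N) Σ_k exp(i ω x_k)| for a fixed parameter ω.
    A narrowly concentrated amplitude distribution keeps the
    modulus near 1; a broadened one (as defects often cause)
    drives it toward 0."""
    n = len(samples)
    acc = sum(cmath.exp(1j * omega * x) for x in samples)
    return abs(acc / n)

# A constant signal gives modulus ≈ 1; opposed phases give ≈ 0.
print(ecf_modulus([1.0] * 100, omega=2.0))
print(ecf_modulus([0.0, 3.14159265] * 50, omega=1.0))
```

Thresholds on this modulus, chosen by the decision rules named in the abstract, then separate "normal" from "defective" condition classes.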
The fundamentals of average local variance--Part I: Detecting regular patterns.
Bøcher, Peder Klith; McCloy, Keith R
2006-02-01
The method of average local variance (ALV) computes the mean of the standard deviation values derived for a 3 x 3 moving window on a successively coarsened image to produce a function of ALV versus spatial resolution. In developing ALV, the authors used approximately a doubling of the pixel size at each coarsening of the image. They hypothesized that ALV is low when the pixel size is smaller than the size of scene objects, because the pixels on an object will have similar response values. When the pixels and objects are of similar size, they will tend to vary in response and the ALV values will increase. As the pixel size increases further, more objects will be contained in a single pixel and ALV will decrease. The authors showed that various cover types produced single-peak ALV functions that, inexplicably, peaked when the pixel size was 1/2 to 3/4 of the object size. This paper reports on work done to explore the characteristics of the various forms of the ALV function and to understand the location of the peaks that occur in this function. The work was conducted using synthetically generated image data. The investigation showed that the hypothesis originally proposed is not adequate. A new hypothesis is proposed: the ALV function has peak locations that are related to the geometric size of pattern structures in the scene. These structures are not always the same as scene objects. Only in cases where the size of, and separation between, scene objects are equal does the ALV function detect the size of the objects. In situations where the distance between scene objects is larger than their size, the ALV function has a peak at the object separation, not at the object size. This work has also shown that multiple object structures of different sizes and separations in the image produce multiple peaks in the ALV function, and that some of these structures are not readily recognized as such from our perspective.
However, the magnitude of these peaks depends on the response mix in the structures, complicating their interpretation and analysis. The analysis of the ALV function is thus more complex than generally reported in the literature.
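The ALV computation itself is straightforward to sketch. The following Python illustration assumes a plain list-of-lists image, a 3 x 3 window with population standard deviation, and 2 x 2 block averaging for the coarsening step; the checkerboard test pattern is an invented example, not the paper's data:

```python
def local_std(img, r, c):
    """Population standard deviation of the 3 x 3 window at (r, c)."""
    vals = [img[i][j]
            for i in range(r - 1, r + 2)
            for j in range(c - 1, c + 2)]
    m = sum(vals) / 9.0
    return (sum((v - m) ** 2 for v in vals) / 9.0) ** 0.5

def alv(img):
    """Average local variance: mean of the moving-window standard
    deviations over all interior pixels."""
    h, w = len(img), len(img[0])
    stds = [local_std(img, r, c)
            for r in range(1, h - 1) for c in range(1, w - 1)]
    return sum(stds) / len(stds)

def coarsen(img):
    """Halve the resolution by averaging non-overlapping 2 x 2 blocks."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, w, 2)] for r in range(0, h, 2)]

# ALV as a function of pixel size: evaluate at each coarsening level.
# Test pattern: a checkerboard of 4-pixel-wide blocks.
img = [[(r // 4 + c // 4) % 2 for c in range(16)] for r in range(16)]
level, cur = 1, img
while len(cur) >= 4:
    print("pixel size", level, "-> ALV", round(alv(cur), 3))
    level, cur = level * 2, coarsen(cur)
```

Plotting ALV against `level` produces the peaked curve the paper analyzes; where that peak falls relative to object size and object separation is exactly the question the authors revisit.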
McGowan, Conor P.; Lyons, James E.; Smith, David
2015-01-01
Structured decision making (SDM) is an increasingly utilized approach and set of tools for addressing complex decisions in environmental management. SDM is a value-focused thinking approach that places paramount importance on first establishing clear management objectives that reflect core values of stakeholders. To be useful for management, objectives must be transparently stated in unambiguous and measurable terms. We used these concepts to develop consensus objectives for the multiple stakeholders of horseshoe crab harvest in Delaware Bay. Participating stakeholders first agreed on a qualitative statement of fundamental objectives, and then worked to convert those objectives to specific and measurable quantities, so that management decisions could be assessed. We used a constraint-based approach where the conservation objectives for Red Knots, a species of migratory shorebird that relies on horseshoe crab eggs as a food resource during migration, constrained the utility of crab harvest. Developing utility functions to effectively reflect the management objectives allowed us to incorporate stakeholder risk aversion even though different stakeholder groups were averse to different or competing risks. While measurable objectives and quantitative utility functions seem scientific, developing these objectives was fundamentally driven by the values of the participating stakeholders.
Implementing and Bounding a Cascade Heuristic for Large-Scale Optimization
2017-06-01
solving the monolith, we develop a method for producing lower bounds to the optimal objective function value. To do this, we solve a new integer...as developing and analyzing methods for producing lower bounds to the optimal objective function value of the seminal problem monolith, which this...length of the window decreases, the end effects of the model typically increase (Zerr, 2016). There are four primary methods for correcting end
Predicting objective function weights from patient anatomy in prostate IMRT treatment planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.
2013-12-15
Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the l2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test.
For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.
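The weight-prediction step can be illustrated with a one-variable least-squares fit validated by leave-one-out cross-validation, as described above. In this Python sketch the overlap ratios and rectum weights are fabricated for illustration, not patient data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y ≈ a + b·x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loo_predictions(xs, ys):
    """Leave-one-out cross-validation: predict each point from a
    model fitted to all the remaining points."""
    preds = []
    for i in range(len(xs)):
        xr, yr = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_line(xr, yr)
        preds.append(a + b * xs[i])
    return preds

# Hypothetical data: overlap ratio -> rectum weight. Femoral head
# weights fixed at 1% each, so bladder = 1 - rectum - 0.02.
ratios = [0.20, 0.35, 0.50, 0.65, 0.80]
rectum = [0.55, 0.48, 0.40, 0.33, 0.25]
for x, w in zip(ratios, loo_predictions(ratios, rectum)):
    print(f"overlap {x:.2f}: rectum {w:.3f}, bladder {1.0 - w - 0.02:.3f}")
```

The closure constraint (weights summing to one with the femoral heads pinned at 1%) is what lets a single predicted rectum weight determine the whole weight vector.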
2010-01-01
The authors studied the functional state of workers at chemical weapons destruction facilities before and after the working shift, analyzing changes in central and peripheral hemodynamic parameters, vegetative regulation of heart rhythm, and stabilographic and psychophysiological values.
NASA Astrophysics Data System (ADS)
Khalilpourazari, Soheyl; Khalilpourazary, Saman
2017-05-01
In this article, a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between conflicting objective functions, which helps the decision maker select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of changes in grain size, grinding ratio, feed rate, labour cost per hour, workpiece length, wheel diameter, and downfeed on the value of each objective function.
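The weighted Tchebycheff scalarization at the core of the approach is compact enough to sketch. This Python illustration evaluates it over a hypothetical discrete design set; the lexicographic tie-breaking stage of the full method is omitted, and the design values and weights are invented:

```python
def weighted_tchebycheff(f_vals, ideal, weights):
    """Weighted Tchebycheff scalarization: max_i w_i * |f_i - z*_i|,
    where z* is the ideal (utopia) point. Minimizing this over the
    decision set yields (weakly) Pareto-optimal solutions for
    positive weights."""
    return max(w * abs(f - z) for f, z, w in zip(f_vals, ideal, weights))

# Toy bi-objective problem: f1 = time, f2 = cost for three designs.
designs = {"A": (2.0, 9.0), "B": (4.0, 4.0), "C": (8.0, 1.0)}
ideal = (min(f1 for f1, _ in designs.values()),
         min(f2 for _, f2 in designs.values()))
weights = (0.5, 0.5)
best = min(designs,
           key=lambda d: weighted_tchebycheff(designs[d], ideal, weights))
print(best)
```

Sweeping the weight vector and re-solving traces out the Pareto front, which is how the trade-off between conflicting objectives is presented to the decision maker.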
2013-03-01
comparison between two objectives at a time. The decision maker develops a micro - version of the value equation using only the two objectives that...variety of different functional areas. Table 10. New Alternatives Identified Alternative Source Base Recycling Services AFCEC Airfield Pavement Repair
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that ‘absolute error’ (SAR and SARD) are superior to ‘square error’ (SSR and SSRD) in calculating objective function for model calibration, and SAR behaved the best (with the least error and highest efficiency). This study suggests that SSR might be overly used in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
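The four candidate objective functions are one-liners, and the sensitivity of the squared forms to a single large value is easy to demonstrate. In this Python sketch the observation and simulation series are invented:

```python
def ssr(obs, sim):   # sum of square errors
    return sum((o - s) ** 2 for o, s in zip(obs, sim))

def sar(obs, sim):   # sum of absolute errors
    return sum(abs(o - s) for o, s in zip(obs, sim))

def ssrd(obs, sim):  # sum of squared relative deviations
    return sum(((o - s) / o) ** 2 for o, s in zip(obs, sim))

def sard(obs, sim):  # sum of absolute relative deviations
    return sum(abs((o - s) / o) for o, s in zip(obs, sim))

# One large error dominates SSR far more than SAR: squaring makes
# the objective chase the single extreme value.
obs = [10.0, 12.0, 11.0, 100.0]
good_everywhere  = [11.0, 13.0, 12.0, 90.0]   # errors 1, 1, 1, 10
good_on_the_peak = [14.0, 16.0, 15.0, 100.0]  # errors 4, 4, 4, 0
print("SSR:", ssr(obs, good_everywhere), "vs", ssr(obs, good_on_the_peak))
print("SAR:", sar(obs, good_everywhere), "vs", sar(obs, good_on_the_peak))
```

SSR rates the peak-fitting simulation roughly twice as good (48 vs 103), while SAR is nearly indifferent (12 vs 13): a calibration driven by SSR will sacrifice fit everywhere else to match the extreme value, which is the sensitivity the study warns about.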
Modeling Limited Foresight in Water Management Systems
NASA Astrophysics Data System (ADS)
Howitt, R.
2005-12-01
The inability to forecast future water supplies means that their management inevitably occurs under situations of limited foresight. Three modeling problems arise: first, what type of objective function is a manager with limited foresight optimizing? Second, how can we measure these objectives? Third, can objective functions that incorporate uncertainty be integrated within the structure of optimizing water management models? The paper reviews the concepts of relative risk aversion and intertemporal substitution that underlie stochastic dynamic preference functions. Some initial results from the estimation of such functions for four different dam operations in northern California are presented and discussed. It appears that the path of previous water decisions and states influences the decision-makers' willingness to trade off water supplies between periods. A compromise modeling approach that incorporates carry-over value functions under limited foresight within a broader network optimal water management model is developed. The approach uses annual carry-over value functions derived from small-dimension stochastic dynamic programs embedded within a larger-dimension water allocation network. The disaggregation of the carry-over value functions to the broader network is extended using the space rule concept. Initial results suggest that the solution of such annual nonlinear network optimizations is comparable to, or faster than, the solution of linear network problems over long time series.
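A carry-over value function from a small stochastic dynamic program can be sketched with backward induction over a discretized storage grid. Everything below (storage levels, inflow distribution, the square-root benefit of release, discount factor) is an invented toy, not the paper's calibrated model:

```python
def carryover_values(levels, inflows, benefit, years=30, discount=0.95):
    """Backward-induction sketch of an annual carry-over value function:
    V(s) = E_q [ max_r  benefit(r) + d * V(min(s + q - r, smax)) ],
    where s is start-of-year storage, q a random inflow (value, prob),
    and r a feasible release. All quantities share one storage unit."""
    smax = max(levels)
    value = {s: 0.0 for s in levels}
    for _ in range(years):
        new = {}
        for s in levels:
            expected = 0.0
            for q, prob in inflows:
                avail = min(s + q, smax)          # spill above capacity
                best = max(benefit(r) + discount * value[min(avail - r, smax)]
                           for r in levels if r <= avail)
                expected += prob * best
            new[s] = expected
        value = new
    return value

# Hypothetical reservoir: storage 0..4 units, two equally likely
# inflows, concave (diminishing-returns) benefit of release.
levels = [0, 1, 2, 3, 4]
inflows = [(1, 0.5), (3, 0.5)]
V = carryover_values(levels, inflows, benefit=lambda r: r ** 0.5)
print({s: round(v, 2) for s, v in V.items()})
```

A table like `V` is exactly what gets embedded as the annual carry-over value function inside the larger network optimization: the network model sees only the marginal value of water left in storage, not the full dynamic program.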
NASA Astrophysics Data System (ADS)
Utama, D. N.; Ani, N.; Iqbal, M. M.
2018-03-01
Optimization is a process of finding the parameter (or parameters) that delivers an optimal value of an objective function. Seeking a generic model for optimization is a computer science problem that numerous researchers have pursued in practice. A generic model is one that can be operated to solve any variety of optimization problems. Using an object-oriented method, the generic model for optimization was constructed. Two types of optimization method, simulated annealing and hill climbing, were used in constructing the model and then compared. The results showed that both methods gave the same objective function value, and that the hill-climbing-based model consumed the shorter running time.
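The two methods compared can be sketched side by side. This Python illustration uses a generic one-dimensional multimodal objective; the step sizes, cooling schedule, and the objective itself are arbitrary choices for demonstration, not the article's model:

```python
import math, random

def hill_climb(f, x, step=0.1, iters=2000):
    """Greedy descent: accept a random neighbour only if it improves f.
    Fast, but can get trapped in the local minimum nearest the start."""
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if f(cand) < f(x):
            x = cand
    return x

def simulated_annealing(f, x, step=1.0, iters=2000, t0=2.0):
    """Accept worse moves with probability exp(-Δ/T); the temperature T
    cools toward 0, so early iterations can escape local minima and
    late iterations behave greedily."""
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-9
        cand = x + random.uniform(-step, step)
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
    return x

# A multimodal objective with many local minima; global minimum at x = 0.
f = lambda x: 0.1 * x * x + math.sin(5 * x) ** 2

random.seed(1)
print("hill climbing from x=3:", round(f(hill_climb(f, 3.0)), 4))
print("annealing     from x=3:", round(f(simulated_annealing(f, 3.0)), 4))
```

On a landscape like this, annealing's tolerance for uphill moves is what it buys with its extra running time; on problems where both land in the same basin, the greedy climber wins on speed, which matches the article's finding.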
Ocean feature recognition using genetic algorithms with fuzzy fitness functions (GA/F3)
NASA Technical Reports Server (NTRS)
Ankenbrandt, C. A.; Buckles, B. P.; Petry, F. E.; Lybanon, M.
1990-01-01
A model for genetic algorithms with semantic nets is derived, in which the relationships between concepts are depicted as a semantic net. An organism represents the manner in which objects in a scene are attached to concepts in the net. Predicates between object pairs are continuous-valued truth functions in the form of an inverse exponential function (e^(−β|x|)). 1:n relationships are combined via the fuzzy OR (max(...)). Finally, predicates between pairs of concepts are resolved by taking the average of the combined predicate values of the objects attached to the concept at the tail of the arc representing the predicate in the semantic net. The method is illustrated by applying it to the identification of oceanic features in the North Atlantic.
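The fuzzy-fitness machinery described above reduces to a few lines. In this Python sketch, e^(−β|x|) is taken as the inverse exponential truth function named in the abstract; the distances and the β parameter are invented for illustration:

```python
import math

def predicate(x, beta=1.0):
    """Continuous truth value of a predicate between two objects:
    exp(-beta * |x|) of their attribute distance, so identical
    objects score 1 and distant ones approach 0."""
    return math.exp(-beta * abs(x))

def fuzzy_or(truths):
    """1:n relationships combine via the fuzzy OR (maximum)."""
    return max(truths)

def concept_truth(object_distance_lists, beta=1.0):
    """Predicate between two concepts: average, over the objects
    attached to the tail concept, of each object's fuzzy-OR-combined
    predicate values against the head concept's objects."""
    combined = [fuzzy_or([predicate(d, beta) for d in dists])
                for dists in object_distance_lists]
    return sum(combined) / len(combined)

# Hypothetical: two objects attached to a concept, each compared by
# distance against several candidate matches of a related concept.
pairs = [[0.5, 2.0, 3.0],   # object 1's distances
         [0.1, 4.0]]        # object 2's distances
print(round(concept_truth(pairs), 3))
```

In the GA, values like this feed the fitness of an organism, i.e. of one candidate assignment of scene objects to semantic-net concepts.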
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Dyer, Charles R.; Paul, Brian E.
1994-01-01
The VIS-AD data model integrates metadata about the precision of values, including missing data indicators and the way that arrays sample continuous functions, with the data objects of a scientific programming language. The data objects of this data model form a lattice, ordered by the precision with which they approximate mathematical objects. We define a similar lattice of displays and study visualization processes as functions from data lattices to display lattices. Such functions can be applied to visualize data objects of all data types and are thus polymorphic.
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of a probability-based aircraft design over the traditional point design approach. It also proposes a new methodology called Robust Design Simulation (RDS), which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach on how to achieve such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
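The final criterion, the probability that the objective function value falls below a target, is naturally estimated by Monte Carlo sampling of the noise variables. The noise distributions and the toy cost objective below are fabrications for illustration, not HSCT data:

```python
import random

def p_meets_target(objective, noise_samplers, target, n=20000):
    """Monte Carlo estimate of P(objective(noise) <= target): draw
    each noise variable from its distribution, evaluate the objective,
    and count how often the result beats the target."""
    hits = 0
    for _ in range(n):
        noise = {name: draw() for name, draw in noise_samplers.items()}
        if objective(noise) <= target:
            hits += 1
    return hits / n

# Hypothetical noise variables feeding a toy cost objective; the
# design target for the objective is 10.0.
random.seed(0)
samplers = {"fuel": lambda: random.gauss(3.0, 0.5),
            "fare": lambda: random.uniform(0.8, 1.2)}
cost = lambda z: 6.0 + z["fuel"] / z["fare"]
print(round(p_meets_target(cost, samplers, target=10.0), 3))
```

Comparing this probability across candidate designs, rather than comparing single point values of the objective, is the essence of the robust-design criterion described above.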
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
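The criterion being maximized here is Otsu's between-class variance. As a rough illustration of the objective only (not the flower pollination search itself, and with a made-up toy histogram rather than the medical images), a minimal sketch:

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance of a histogram split at the given thresholds."""
    p = hist / hist.sum()                        # normalize counts to probabilities
    levels = np.arange(len(hist))
    edges = [0, *sorted(thresholds), len(hist)]  # class boundaries
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                       # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w   # class mean
            var += w * (mu - mu_total) ** 2
    return var

# Toy bimodal "image" histogram: the best single threshold separates the two modes.
hist = np.array([0, 8, 10, 2, 0, 0, 3, 9, 7, 1], dtype=float)
best_t = max(range(1, 10), key=lambda t: otsu_between_class_variance(hist, [t]))
```

Multilevel thresholding passes several thresholds at once, which is what makes the exhaustive search expensive and motivates metaheuristics such as the one proposed.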
NASA Astrophysics Data System (ADS)
Wang, D. G.; Sun, L.; Tan, Y. H.; Shi, A. Q.; Cheng, J.
2017-08-01
Taking the mangrove ecosystem of the Ximen Island National Marine Specially Protected Areas as the research object, the ecological service value of the mangrove forest was evaluated and analyzed using a market value method, an ecological value method and a carbon tax method. The results showed that the ecosystem service value of the mangrove forest on Ximen Island totals 16,104,000 CNY/a. Among the values of the individual ecosystem services, the direct value of the material production function and leisure function reached 1,385,000 CNY/a, accounting for 8.6%. The indirect value of disturbance regulation, gas regulation, water purification, habitat function and culture research reached 14,719,000 CNY/a, accounting for 91.4%. Among these sub-items, the proportion of the disturbance regulation value, habitat function value and cultural research function value reached 78.8%, which reflects the important scientific value and ecological value of the Ximen Island mangrove ecosystem, especially its vital importance in providing a habitat for birds and playing a role in disaster prevention and mitigation.
Diagnosis and sensor validation through knowledge of structure and function
NASA Technical Reports Server (NTRS)
Scarl, Ethan A.; Jamieson, John R.; Delaune, Carl I.
1987-01-01
The liquid oxygen expert system 'LES' is proposed as the first capable of diagnostic reasoning from sensor data, using model-based knowledge of structure and function to find the expected state of all system objects, including sensors. The approach is generally algorithmic rather than heuristic, and represents uncertainties as sets of possibilities. Functional relationships are inverted to determine hypothetical values for potentially faulty objects, and may include conditional functions not normally considered to have inverses.
Neuronal Reward and Decision Signals: From Theories to Data
Schultz, Wolfram
2015-01-01
Rewards are crucial objects that induce learning, approach behavior, choices, and emotions. Whereas emotions are difficult to investigate in animals, the learning function is mediated by neuronal reward prediction error signals which implement basic constructs of reinforcement learning theory. These signals are found in dopamine neurons, which emit a global reward signal to striatum and frontal cortex, and in specific neurons in striatum, amygdala, and frontal cortex projecting to select neuronal populations. The approach and choice functions involve subjective value, which is objectively assessed by behavioral choices eliciting internal, subjective reward preferences. Utility is the formal mathematical characterization of subjective value and a prime decision variable in economic choice theory. It is coded as utility prediction error by phasic dopamine responses. Utility can incorporate various influences, including risk, delay, effort, and social interaction. Appropriate for formal decision mechanisms, rewards are coded as object value, action value, difference value, and chosen value by specific neurons. Although all reward, reinforcement, and decision variables are theoretical constructs, their neuronal signals constitute measurable physical implementations and as such confirm the validity of these concepts. The neuronal reward signals provide guidance for behavior while constraining the free will to act. PMID:26109341
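The reward prediction error signals described here follow the temporal-difference form of reinforcement learning theory, delta = r + gamma * V(s') - V(s). A generic TD(0) sketch (state names, discount factor, and learning rate are illustrative assumptions, not taken from the article):

```python
# Temporal-difference reward prediction error: delta = r + gamma * V(s') - V(s).
gamma, alpha = 0.9, 0.1          # discount factor and learning rate (assumed)
V = {"cue": 0.0, "reward_state": 0.0}

def td_update(V, s, r, s_next):
    delta = r + gamma * V[s_next] - V[s]   # prediction error (dopamine-like signal)
    V[s] += alpha * delta                  # move the value estimate toward the target
    return delta

# A fully predicted reward yields a shrinking prediction error over trials.
errors = [td_update(V, "cue", 1.0, "reward_state") for _ in range(100)]
```

The first trial produces a full-sized error; as the cue comes to predict the reward, the error decays toward zero, matching the textbook dopamine-response profile.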
Gjersoe, Nathalia L.; Newman, George E.; Chituc, Vladimir; Hood, Bruce
2014-01-01
The current studies examine how valuation of authentic items varies as a function of culture. We find that U.S. respondents value authentic items associated with individual persons (a sweater or an artwork) more than Indian respondents, but that both cultures value authentic objects not associated with persons (a dinosaur bone or a moon rock) equally. These differences cannot be attributed to more general cultural differences in the value assigned to authenticity. Rather, the results support the hypothesis that individualistic cultures place a greater value on objects associated with unique persons and in so doing, offer the first evidence for how valuation of certain authentic items may vary cross-culturally. PMID:24658437
[Functional assessment of patients with vertigo and dizziness in occupational medicine].
Zamysłowska-Szmytke, Ewa; Szostek-Rogula, Sylwia; Śliwińska-Kowalska, Mariola
2018-03-09
Balance assessment relies on symptoms, clinical examination and functional assessment, and on their verification in objective tests. Our study was aimed at calculating the compatibility of assessments between questionnaires, functional scales and objective vestibular and balance examinations. A group of 131 patients (including 101 women; mean age: 59±14 years) of the audiology outpatient clinic was examined. Benign paroxysmal positional vertigo, phobic vertigo and central dizziness were the most common diseases observed in the study group. Patients' symptoms were tested using the questionnaire on Cawthorne-Cooksey exercises (CC), the Dizziness Handicap Inventory (DHI) and the Duke Anxiety-Depression Scale. The Berg Balance Scale (BBS), Dynamic Gait Index (DGI), the Tinetti test, the Timed Up and Go test (TUG), and Dynamic Visual Acuity (DVA) were used for the functional balance assessment. Objective evaluation included the videonystagmography caloric test and static posturography. The study results revealed statistically significant but moderate compatibility between the functional tests (BBS, DGI, TUG, DVA) and caloric results (Kendall's W = 0.29), and higher compatibility for posturography (W = 0.33). The agreement between questionnaires and objective tests was very low (W = 0.08-0.11). The positive predictive values of the BBS were 42% for the caloric and 62% for the posturography tests, and of the DGI 46% and 57%, respectively. The results of the functional tests (BBS, DGI, TUG, DVA) revealed statistically significant correlations with objective balance tests, but the low predictive values did not allow these tests to be used in vestibular damage screening. Only half of the patients with functional disturbances revealed abnormal caloric or posturography tests. Qualification for work based on objective tests alone ignores the functional state of the worker, which may influence the ability to work. Med Pr 2018;69(2):179-189. This work is available in the Open Access model and licensed under a CC BY-NC 3.0 PL license.
2009-11-17
set of chains, the step adds scheduled methods that have an a priori likelihood of a failure outcome (Lines 3-5). It identifies the max eul value of the...activity meeting its objective, as well as its expected contribution to the schedule. By explicitly calculating these values, PADS is able to summarize the...variables. One of the main difficulties of this model is convolving the probability density functions and value functions while solving the model; this
Object of desire self-consciousness theory.
Bogaert, Anthony F; Brotto, Lori A
2014-01-01
In this article, the authors discuss the construct of object of desire self-consciousness, the perception that one is romantically and sexually desirable in another's eyes. The authors discuss the nature of the construct, variations in its expression, and how it may function as part of a self-schema or script related to romance and sexuality. The authors suggest that object of desire self-consciousness may be an adaptive, evolved psychological mechanism allowing sexual and romantic tactics suitable to one's mate value. The authors also suggest that it can act as a signal that one has high mate value in the sexual marketplace. The authors then review literature (e.g., on fantasies, on sexual activity preferences, on sexual dysfunctions, on language) suggesting that object of desire self-consciousness plays a particularly important role in heterosexual women's sexual/romantic functioning and desires.
Beyer, Hans-Georg
2014-01-01
The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy with respect to optimization dynamics are investigated by considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy of optimizing the expected value of the objective functions leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state-of-the-art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals, in the asymptotic limit and up to a scalar factor, the inverse of the Hessian of the objective function considered.
Diagnostic Testing for Fecal Incontinence
Olson, Craig H.
2014-01-01
Many tests are available to assist in the diagnosis and management of fecal incontinence. Imaging studies such as endoanal ultrasonography and defecography provide an anatomic and functional picture of the anal canal which can be useful, especially in the setting of planned sphincter repair. Physiologic tests, including anal manometry and anal acoustic reflexometry, provide objective data regarding functional values of the anal canal. The value of this information is a matter of some debate; however, as we learn more about these methods, they may prove useful in the future. Finally, nerve studies, such as pudendal motor nerve terminal latency, evaluate the function of the innervation of the anal canal. This has been shown to have significant prognostic value and can help guide clinical decision making. Significant advances have also occurred in the field, with the relatively recent advent of magnetic resonance defecography and high-resolution anal manometry, which provide even greater objective anatomic and physiologic information about the anal canal and its function. PMID:25320566
Variability in perceived satisfaction of reservoir management objectives
Owen, W.J.; Gates, T.K.; Flug, M.
1997-01-01
Fuzzy set theory provides a useful model to address imprecision in interpreting linguistically described objectives for reservoir management. Fuzzy membership functions can be used to represent degrees of objective satisfaction for different values of management variables. However, lack of background information, differing experiences and qualifications, and complex interactions of influencing factors can contribute to significant variability among membership functions derived from surveys of multiple experts. In the present study, probabilistic membership functions are used to model variability in experts' perceptions of satisfaction of objectives for hydropower generation, fish habitat, kayaking, rafting, and scenery preservation on the Green River through operations of Flaming Gorge Dam. Degree of variability in experts' perceptions differed among objectives but resulted in substantial uncertainty in estimation of optimal reservoir releases.
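A membership function of the kind described can be sketched with a standard trapezoidal shape mapping a management variable to a degree of objective satisfaction; the breakpoints and the kayaking example below are illustrative assumptions, not values elicited in the study:

```python
def trapezoidal_membership(x, a, b, c, d):
    """Degree of satisfaction in [0, 1]: rises from a to b, full from b to c,
    falls from c to d, zero outside (a, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Illustrative kayaking objective: releases of 1500-2500 cfs are fully satisfying.
satisfaction = trapezoidal_membership(1200, 1000, 1500, 2500, 3000)
```

In the probabilistic setting of the study, each surveyed expert effectively contributes a different set of breakpoints, and the spread across experts is what produces the variability in the membership functions.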
Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem
NASA Astrophysics Data System (ADS)
Omagari, Hiroki; Higashino, Shin-Ichiro
2018-04-01
In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs the optimal value of each objective function to be calculated in advance. Moreover, it does not consider constraint conditions other than the objective functions, so it cannot be applied to a DDP, which has many constraint conditions. To address these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution, named the "provisional ideal point," to search for the solution preferred by a decision maker. In this way, we eliminate both the preliminary calculations and the limited scope of application. The results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to a DDP. As a result, the delivery path combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.
NASA Astrophysics Data System (ADS)
Bonissone, Stefano R.; Subbu, Raj
2002-12-01
In multi-objective optimization (MOO) problems we need to optimize many possibly conflicting objectives. For instance, in manufacturing planning we might want to minimize the cost and production time while maximizing the product's quality. We propose the use of evolutionary algorithms (EAs) to solve these problems. Solutions are represented as individuals in a population and are assigned scores according to a fitness function that determines their relative quality. Strong solutions are selected for reproduction, and pass their genetic material to the next generation. Weak solutions are removed from the population. The fitness function evaluates each solution and returns a related score. In MOO problems, this fitness function is vector-valued, i.e., it returns a value for each objective. Therefore, instead of a global optimum, we try to find the Pareto-optimal or non-dominated frontier. We use multi-sexual EAs with as many genders as optimization criteria. We have created new crossover and gender assignment functions, and experimented with various parameters to determine the best setting (yielding the highest number of non-dominated solutions). These experiments are conducted using a variety of fitness functions, and the algorithms are later evaluated on a flexible manufacturing problem with total cost and time minimization objectives.
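The non-dominated frontier mentioned above can be characterized with a simple dominance filter. A minimal sketch, assuming all objectives are minimized (the candidate (cost, time) plans are invented for illustration):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better
    in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the solutions that no other solution dominates."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# (cost, time) pairs for candidate manufacturing plans.
plans = [(10, 5), (8, 7), (12, 4), (9, 9)]
front = pareto_front(plans)
```

Here (9, 9) is dominated by (8, 7) (cheaper and faster) and drops out, while the remaining three plans trade cost against time and survive as the non-dominated frontier.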
Does human perception of wetland aesthetics and healthiness relate to ecological functioning?
Cottet, Marylise; Piégay, Hervé; Bornette, Gudrun
2013-10-15
Wetland management usually aims at preserving or restoring desirable ecological characteristics or functions. It is now well-recognized that some social criteria should also be included. Involving lay-people in wetland preservation or restoration projects may mean broadening project objectives to fit various and potentially competing requirements that relate to ecology, aesthetics, recreation, etc. In addition, perceived value depends both upon expertise and objectives, both of which vary from one stakeholder population to another. Perceived value and ecological functioning have to be reconciled in order to make a project successful. Understanding the perceptions of lay-people as well as their opinions about ecological value is a critical part of the development of sustainable management plans. Characterizing the environment in a way that adequately describes ecological function while also being consistent with lay perception may help reach such objectives. This goal has been addressed in a case study relating to wetlands of the Ain River (France). A photo-questionnaire presenting a sample of photographs of riverine wetlands distributed along the Ain River was submitted to 403 lay-people and self-identified experts. Two objectives were defined: (1) to identify the different parameters, whether visual or ecological, influencing the perception regarding the value of these ecosystems; (2) to compare the perceptions of self-identified experts and lay-people. Four criteria appear to strongly influence peoples' perceptions of ecological and aesthetical values: water transparency and colour, the presence and appearance of aquatic vegetation, the presence of sediments, and finally, trophic status. In our study, we observed only a few differences in perception. The differences primarily related to the value assigned to oligotrophic wetlands but even here, the differences between lay and expert populations were minimal. 
These results support the idea that it is possible to implement an integrated and participative management program for ecosystems. Our approach can provide a shared view of environmental value, facilitating the work of managers in defining comprehensive goals for wetland preservation or restoration projects.
Distributed Control by Lagrangian Steepest Descent
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bieniawski, Stefan
2004-01-01
Often adaptive, distributed control can be viewed as an iterated game between independent players. The coupling between the players' mixed strategies, arising as the system evolves from one instant to the next, is determined by the system designer. Information theory tells us that the most likely joint strategy of the players, given a value of the expectation of the overall control objective function, is the minimizer of a Lagrangian function of the joint strategy. So the goal of the system designer is to speed evolution of the joint strategy to that Lagrangian minimizing point, lower the expected value of the control objective function, and repeat. Here we elaborate the theory of algorithms that do this using local descent procedures, and that thereby achieve efficient, adaptive, distributed control.
Numerical Optimization Using Computer Experiments
NASA Technical Reports Server (NTRS)
Trosset, Michael W.; Torczon, Virginia
1997-01-01
Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
Four Common Simplifications of Multi-Criteria Decision Analysis do not hold for River Rehabilitation
2016-01-01
River rehabilitation aims at alleviating negative effects of human impacts such as loss of biodiversity and reduction of ecosystem services. Such interventions entail difficult trade-offs between different ecological and often socio-economic objectives. Multi-Criteria Decision Analysis (MCDA) is a very suitable approach that helps to assess the current ecological state and to prioritize river rehabilitation measures in a standardized way, based on stakeholder or expert preferences. Applications of MCDA in river rehabilitation projects are often simplified, i.e. using a limited number of objectives and indicators, assuming linear value functions, aggregating individual indicator assessments additively, and/or assuming risk neutrality of experts. Here, we demonstrate an implementation of MCDA expert preference assessments to river rehabilitation and provide ample material for other applications. To test whether the above simplifications reflect common expert opinion, we carried out very detailed interviews with five river ecologists and a hydraulic engineer. We defined essential objectives and measurable quality indicators (attributes), elicited the experts' preferences for objectives on a standardized scale (value functions) and their risk attitude, and identified suitable aggregation methods. The experts recommended an extensive objectives hierarchy including between 54 and 93 essential objectives and between 37 and 61 essential attributes. For 81% of these, they defined non-linear value functions, and in 76% they recommended multiplicative aggregation. The experts were risk averse or risk prone (but never risk neutral), depending on the current ecological state of the river and on the personal importance they assigned to the objectives. We conclude that the four commonly applied simplifications clearly do not reflect the opinion of river rehabilitation experts.
The optimal level of model complexity, however, remains highly case-study specific depending on data and resource availability, the context, and the complexity of the decision problem. PMID:26954353
Correction to Kreuzbauer, King, and Basu (2015).
2015-08-01
Reports an error in "The Mind in the Object-Psychological Valuation of Materialized Human Expression" by Robert Kreuzbauer, Dan King and Shankha Basu (Journal of Experimental Psychology: General, Advanced Online Publication, Jun 15, 2015, np). In the article the labels on the X-axis of Figure 1 "Remove Variance" and "Preserve Variance" should be switched. (The following abstract of the original article appeared in record 2015-26264-001.) Symbolic material objects such as art or certain artifacts (e.g., fine pottery, jewelry) share one common element: The combination of generating an expression, and the materialization of this expression in the object. This explains why people place a much greater value on handmade over machine-made objects, and originals over duplicates. We show that this mechanism occurs when a material object's symbolic property is salient and when the creator (artist or craftsman) is perceived to have agency control over the 1-to-1 materialized expression in the object. Coactivation of these 2 factors causes the object to be perceived as having high value because it is seen as the embodied representation of the creator's unique personal expression. In 6 experiments, subjects rated objects in various object categories, which varied on the type of object property (symbolic, functional, aesthetic), the production procedure (handmade, machine-made, analog, digital) and the origin of the symbolic information (person or software). The studies showed that the proposed mechanism applies to symbolic, but not to functional or aesthetic material objects. Furthermore, they show that this specific form of symbolic object valuation could not be explained by various other related psychological theories (e.g., uniqueness, scarcity, physical touching, creative performance). Our research provides a universal framework that identifies a core mechanism for explaining judgments of value for one of our most uniquely human symbolic object categories. 
The mind in the object-Psychological valuation of materialized human expression.
Kreuzbauer, Robert; King, Dan; Basu, Shankha
2015-08-01
[Correction Notice: An Erratum for this article was reported in Vol 144(4) of Journal of Experimental Psychology: General (see record 2015-33206-002). In the article the labels on the X-axis of Figure 1 "Remove Variance" and "Preserve Variance" should be switched.] Symbolic material objects such as art or certain artifacts (e.g., fine pottery, jewelry) share one common element: The combination of generating an expression, and the materialization of this expression in the object. This explains why people place a much greater value on handmade over machine-made objects, and originals over duplicates. We show that this mechanism occurs when a material object's symbolic property is salient and when the creator (artist or craftsman) is perceived to have agency control over the 1-to-1 materialized expression in the object. Coactivation of these 2 factors causes the object to be perceived as having high value because it is seen as the embodied representation of the creator's unique personal expression. In 6 experiments, subjects rated objects in various object categories, which varied on the type of object property (symbolic, functional, aesthetic), the production procedure (handmade, machine-made, analog, digital) and the origin of the symbolic information (person or software). The studies showed that the proposed mechanism applies to symbolic, but not to functional or aesthetic material objects. Furthermore, they show that this specific form of symbolic object valuation could not be explained by various other related psychological theories (e.g., uniqueness, scarcity, physical touching, creative performance). Our research provides a universal framework that identifies a core mechanism for explaining judgments of value for one of our most uniquely human symbolic object categories.
Angle restriction enhances synchronization of self-propelled objects.
Gao, Jianxi; Havlin, Shlomo; Xu, Xiaoming; Stanley, H Eugene
2011-10-01
Understanding the synchronization process of self-propelled objects is of great interest in science and technology. We propose a synchronization model for a self-propelled objects system in which we restrict the maximal angle change of each object to θ(R). At each time step, each object moves and changes its direction according to the average direction of all of its neighbors (including itself). If the angle change is greater than a cutoff angle θ(R), the change is replaced by θ(R). We find that (i) counterintuitively, the synchronization improves significantly when θ(R) decreases, (ii) there exists a critical restricted angle θ(Rc) at which the synchronization order parameter changes from a large value to a small value, and (iii) for each noise amplitude η, the synchronization as a function of θ(R) shows a maximum value, indicating the existence of an optimal θ(R) that yields the best synchronization for every η.
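The angle-restricted update rule can be sketched directly: each object turns toward the mean heading of its neighbors, but by no more than θ(R). The sketch below uses a fully connected neighborhood, zero noise, and arbitrary parameter values for brevity, so it illustrates only the update mechanics, not the paper's quantitative results:

```python
import numpy as np

def update_headings(theta, neighbors, theta_R, eta, rng):
    """One step: each object turns toward the circular mean heading of its
    neighbors (including itself), by at most theta_R; eta is the noise amplitude."""
    new = np.empty_like(theta)
    for i, nbrs in enumerate(neighbors):
        # mean direction computed on the circle, not as a plain average
        mean = np.arctan2(np.sin(theta[nbrs]).mean(), np.cos(theta[nbrs]).mean())
        change = (mean - theta[i] + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
        change = np.clip(change, -theta_R, theta_R)               # restrict the turn
        new[i] = theta[i] + change + eta * rng.uniform(-0.5, 0.5)
    return new

def order_parameter(theta):
    """1 when all headings align, near 0 when they are incoherent."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, 50)
nbrs = [np.arange(50)] * 50          # fully connected neighborhood for the sketch
for _ in range(200):
    theta = update_headings(theta, nbrs, theta_R=0.1, eta=0.0, rng=rng)
```

With zero noise the population aligns, and the order parameter used in the assertions below is the standard synchronization measure from this family of models.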
Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend one's modeling with script programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and post-processing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment via the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, fmincon returns an exit flag, terminates the optimization, and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink enhances a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL for chosen case studies in the fields of technical cybernetics and bioengineering.
Statistical procedures for evaluating daily and monthly hydrologic model predictions
Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.
2004-01-01
The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
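The Nash-Sutcliffe efficiency preferred above compares the model error to the variance of the observations, NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))², with 1 as the ideal value. A minimal sketch with invented flow values:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit,
    values below 0 mean the model is worse than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

# Illustrative observed vs. simulated streamflows (not data from the study).
observed  = [2.0, 3.5, 5.0, 4.0, 2.5]
predicted = [2.2, 3.0, 4.6, 4.2, 2.9]
nse = nash_sutcliffe(observed, predicted)
```

Like the CD* and EF* objective functions the authors prefer, this statistic is convenient precisely because it is compared against a fixed ideal value of one.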
Statistical Mechanics of Node-perturbation Learning with Noisy Baseline
NASA Astrophysics Data System (ADS)
Hara, Kazuyuki; Katahira, Kentaro; Okada, Masato
2017-02-01
Node-perturbation learning is a type of statistical gradient descent algorithm that can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. It estimates the gradient of an objective function by using the change in the objective function in response to a perturbation. The value of the objective function for an unperturbed output is called a baseline. Cho et al. proposed node-perturbation learning with a noisy baseline. In this paper, we report on building the statistical mechanics of Cho's model and on deriving coupled differential equations of order parameters that depict the learning dynamics. We also show how to derive the generalization error by solving the differential equations of order parameters. On the basis of the results, we show that Cho's results also apply in general cases and demonstrate some general performance properties of Cho's model.
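The gradient estimate underlying node-perturbation learning can be sketched generically: perturb the output, compare the objective with the (here noiseless) baseline, and correlate the change with the perturbation. This is an illustrative sketch of the general technique, not of Cho's specific model; the linear output, step sizes, and seed are assumptions:

```python
import numpy as np

def node_perturbation_step(w, x, target, sigma=0.05, lr=0.02, rng=None):
    """Estimate the gradient of a squared error without differentiating it:
    perturb the output, compare with the unperturbed baseline, and move the
    weights against the perturbation scaled by the change in the objective."""
    rng = rng or np.random.default_rng()
    y = w @ x
    baseline = (y - target) ** 2                 # objective at the unperturbed output
    xi = sigma * rng.standard_normal()           # random output perturbation
    perturbed = (y + xi - target) ** 2
    grad_y = (perturbed - baseline) * xi / sigma ** 2   # stochastic estimate of dE/dy
    return w - lr * grad_y * x                   # chain rule: dE/dw = dE/dy * x

rng = np.random.default_rng(1)
w = np.zeros(3)
x = np.array([1.0, 0.5, -0.5])
for _ in range(2000):
    w = node_perturbation_step(w, x, target=1.0, rng=rng)
```

In expectation the estimate equals the true gradient 2(y - target), so the output drifts toward the target; a noisy baseline, as in Cho's model, adds a further stochastic term to this estimate.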
SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazareth, D; Spaans, J
Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing (SA) and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
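The simulated annealing comparator used in the study can be sketched on a toy version of the problem; the 3×3 dose matrix and prescription below are invented, and real IMRT objectives include dose-volume constraints omitted here:

```python
import math
import random

random.seed(1)

# Toy "beamlet" problem: binary on/off intensities, quadratic penalty for
# deviation of delivered dose from a prescription. All numbers are made up.
A = [[1.0, 0.2, 0.0], [0.1, 0.9, 0.3], [0.0, 0.4, 1.1]]
prescription = [1.0, 1.2, 1.1]

def objective(x):
    total = 0.0
    for row, p in zip(A, prescription):
        dose = sum(a * xi for a, xi in zip(row, x))
        total += (dose - p) ** 2
    return total

def simulated_annealing(n, steps=2000, t0=1.0, t1=1e-3):
    x = [random.randint(0, 1) for _ in range(n)]
    f = objective(x)
    best, best_f = x[:], f
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)  # geometric cooling schedule
        j = random.randrange(n)
        x[j] ^= 1                          # flip one beamlet on/off
        f_new = objective(x)
        if f_new <= f or random.random() < math.exp((f - f_new) / t):
            f = f_new
            if f < best_f:
                best, best_f = x[:], f
        else:
            x[j] ^= 1                      # reject the move: undo the flip
    return best, best_f

best, best_f = simulated_annealing(3)
print(best, round(best_f, 3))
```

Like QA, this heuristic needs only objective function values at candidate binary configurations, which is what makes the discretized formulation portable across the three solvers compared in the abstract.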
Parallel basal ganglia circuits for voluntary and automatic behaviour to reach rewards
Hikosaka, Okihide
2015-01-01
The basal ganglia control body movements, value processing and decision-making. Many studies have shown that the inputs and outputs of each basal ganglia structure are topographically organized, which suggests that the basal ganglia consist of separate circuits that serve distinct functions. A notable example is the circuits that originate from the rostral (head) and caudal (tail) regions of the caudate nucleus, both of which target the superior colliculus. These two caudate regions encode the reward values of visual objects differently: flexible (short-term) values by the caudate head and stable (long-term) values by the caudate tail. These value signals in the caudate guide the orienting of gaze differently: voluntary saccades by the caudate head circuit and automatic saccades by the caudate tail circuit. Moreover, separate groups of dopamine neurons innervate the caudate head and tail and may selectively guide the flexible and stable learning/memory in the caudate regions. Studies focusing on manual handling of objects also suggest that rostrocaudally separated circuits in the basal ganglia control the action differently. These results suggest that the basal ganglia contain parallel circuits for two steps of goal-directed behaviour: finding valuable objects and manipulating the valuable objects. These parallel circuits may underlie voluntary behaviour and automatic skills, enabling animals (including humans) to adapt to both volatile and stable environments. This understanding of the functions and mechanisms of the basal ganglia parallel circuits may inform the differential diagnosis and treatment of basal ganglia disorders. PMID:25981958
ERIC Educational Resources Information Center
Florida Univ., Gainesville. Coll. of Education.
The values, beliefs, and objectives that form the core of the program at the Experimental School P.K. Yonge in the University of Florida are presented in this paper which is written in Spanish. This experimental school serves approximately 900 students from grades one through twelve. The function of the school is to conduct research to solve…
Wang, Zhen; Li, Ru; Yu, Guolin
2017-01-01
In this work, several extended approximately invex vector-valued functions of higher order involving a generalized Jacobian are introduced, and some examples are presented to illustrate their existence. The notions of higher-order (weak) quasi-efficiency with respect to a function are proposed for multi-objective programming. Under the introduced higher-order approximate invexity assumptions, we prove that the solutions of generalized vector variational-like inequalities in terms of the generalized Jacobian are the generalized quasi-efficient solutions of nonsmooth multi-objective programming problems. Moreover, an equivalent condition is presented: a vector critical point is a weakly quasi-efficient solution of higher order with respect to a function.
Creating Multi Objective Value Functions from Non-Independent Values
2009-03-01
1998) or oil companies trying to capitalize on the increasing flood of available data and statistics ( Coopersmith , Dean, McVean, & Storaune, 2001...Clemen, R. T., & Reilly, T. (2001). Making Hard Decisions. Pacific Grove: Duxbury. Coopersmith , E., Dean, G., McVean, J., & Storaune, E. (2001
Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Sahraei, S.
2016-12-01
Multi-objective optimization (MO) aids in supporting the decision making process in water resources engineering and design problems. One of the main goals of solving an MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with a pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution in each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions, where the archiving overhead can dominate total run time. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach, each objective function value is rounded to the desired precision level before comparing any new solution to the set of archived solutions, which already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
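Both archiving strategies can be sketched in a few lines (minimization assumed; the sample points, epsilon values, and tie-break rule are invented for illustration):

```python
def grid_key(objs, eps):
    # Identify the epsilon-grid cell of a solution (minimization assumed).
    return tuple(int(o // e) for o, e in zip(objs, eps))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_archive(solutions, eps):
    """Epsilon archiving: keep at most one solution per grid cell
    (here, the one with the smaller objective sum, a simple tie-break),
    then drop cells whose representative is dominated."""
    cells = {}
    for s in solutions:
        k = grid_key(s, eps)
        if k not in cells or sum(s) < sum(cells[k]):
            cells[k] = s
    reps = list(cells.values())
    return [s for s in reps if not any(dominates(o, s) for o in reps if o is not s)]

def rounded_archive(solutions, eps):
    """Alternative: round each objective to its precision level first,
    then archive with plain Pareto dominance on the rounded values."""
    rounded = [tuple(round(o / e) * e for o, e in zip(s, eps)) for s in solutions]
    keep, seen = [], set()
    for r, s in zip(rounded, solutions):
        if r in seen or any(dominates(q, r) for q in rounded if q != r):
            continue
        seen.add(r)
        keep.append(s)
    return keep

pts = [(0.12, 0.93), (0.14, 0.91), (0.55, 0.40), (0.90, 0.11)]
eps = (0.1, 0.1)
print(len(eps_archive(pts, eps)), len(rounded_archive(pts, eps)))
```

On this tiny example both approaches archive the same number of solutions, discarding one of the two points that fall within the same precision cell; the rounding variant avoids maintaining the grid data structure.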
Code of Federal Regulations, 2013 CFR
2013-10-01
... determine the actuarial value of the assets of a pension plan. Actuarial gain and loss means the effect on... cost accounting period. Cost objective means (except for subpart 31.6) a function, organizational... operations. It usually performs management, supervisory, or administrative functions, and may also perform...
Wang, Yong; Wang, Bing-Chuan; Li, Han-Xiong; Yen, Gary G
2016-12-01
When solving constrained optimization problems by evolutionary algorithms, an important issue is how to balance constraints and objective function. This paper presents a new method to address the above issue. In our method, after generating an offspring for each parent in the population by making use of differential evolution (DE), the well-known feasibility rule is used to compare the offspring and its parent. Since the feasibility rule prefers constraints to objective function, the objective function information has been exploited as follows: if the offspring cannot survive into the next generation and if the objective function value of the offspring is better than that of the parent, then the offspring is stored into a predefined archive. Subsequently, the individuals in the archive are used to replace some individuals in the population according to a replacement mechanism. Moreover, a mutation strategy is proposed to help the population jump out of a local optimum in the infeasible region. Note that, in the replacement mechanism and the mutation strategy, the comparison of individuals is based on objective function. In addition, the information of objective function has also been utilized to generate offspring in DE. By the above processes, this paper achieves an effective balance between constraints and objective function in constrained evolutionary optimization. The performance of our method has been tested on two sets of benchmark test functions, namely, 24 test functions at IEEE CEC2006 and 18 test functions with 10-D and 30-D at IEEE CEC2010. The experimental results have demonstrated that our method shows better or at least competitive performance against other state-of-the-art methods. Furthermore, the advantage of our method increases with the increase of the number of decision variables.
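The feasibility rule referenced above is simple to state in code; the toy problem below (minimize x² subject to x ≥ 1) is an invented illustration, not one of the CEC benchmark functions:

```python
def violation(x, constraints):
    """Total constraint violation: sum of positive parts of g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def feasibility_rule_winner(parent, child, f, constraints):
    """Feasibility rule: feasible beats infeasible; among feasible solutions,
    the lower objective wins; among infeasible, the lower violation wins."""
    vp, vc = violation(parent, constraints), violation(child, constraints)
    if vp == 0 and vc == 0:
        return child if f(child) <= f(parent) else parent
    if vp == 0 or vc == 0:
        return child if vc == 0 else parent
    return child if vc <= vp else parent

# Toy problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
f = lambda x: x * x
constraints = [lambda x: 1.0 - x]
print(feasibility_rule_winner(2.0, 1.5, f, constraints))
```

The paper's archive mechanism kicks in exactly where this rule discards a child with a better objective value than its parent, so that objective function information is not lost.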
Direct and indirect pathways for choosing objects and actions.
Hikosaka, Okihide; Kim, Hyoung F; Amita, Hidetoshi; Yasuda, Masaharu; Isoda, Masaki; Tachibana, Yoshihisa; Yoshida, Atsushi
2018-02-23
A prominent target of the basal ganglia is the superior colliculus (SC) which controls gaze orientation (saccadic eye movement in primates) to an important object. This 'object choice' is crucial for choosing an action on the object. SC is innervated by the substantia nigra pars reticulata (SNr) which is controlled mainly by the caudate nucleus (CD). This CD-SNr-SC circuit is sensitive to the values of individual objects and facilitates saccades to good objects. The object values are processed differently in two parallel circuits: flexibly by the caudate head (CDh) and stably by the caudate tail (CDt). To choose good objects, we need to reject bad objects. In fact, these contrasting functions are accomplished by the circuit originating from CDt: The direct pathway focuses on good objects and facilitates saccades to them; the indirect pathway focuses on bad objects and suppresses saccades to them. Inactivation of CDt deteriorated the object choice, because saccades to bad objects were no longer suppressed. This suggests that the indirect pathway is important for object choice. However, the direct and indirect pathways for 'object choice', which aim at the same action (i.e., saccade), may not work for 'action choice'. One possibility is that circuits controlling different actions are connected through the indirect pathway. Additional connections of the indirect pathway with brain areas outside the basal ganglia may also provide a wider range of behavioral choice. In conclusion, basal ganglia circuits are composed of the basic direct/indirect pathways and additional connections and thus have acquired multiple functions. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Mehri, Mehran
2014-07-01
The optimization algorithm of a model may have significant effects on the final optimal values of nutrient requirements in poultry enterprises. In poultry nutrition, the optimal values of dietary essential nutrients are very important for feed formulation to optimize profit through minimizing feed cost and maximizing bird performance. This study was conducted to introduce a novel multi-objective algorithm, desirability function, for optimization the bird response models based on response surface methodology (RSM) and artificial neural network (ANN). The growth databases on the central composite design (CCD) were used to construct the RSM and ANN models and optimal values for 3 essential amino acids including lysine, methionine, and threonine in broiler chicks have been reevaluated using the desirable function in both analytical approaches from 3 to 16 d of age. Multi-objective optimization results showed that the most desirable function was obtained for ANN-based model (D = 0.99) where the optimal levels of digestible lysine (dLys), digestible methionine (dMet), and digestible threonine (dThr) for maximum desirability were 13.2, 5.0, and 8.3 g/kg of diet, respectively. However, the optimal levels of dLys, dMet, and dThr in the RSM-based model were estimated at 11.2, 5.4, and 7.6 g/kg of diet, respectively. This research documented that the application of ANN in the broiler chicken model along with a multi-objective optimization algorithm such as desirability function could be a useful tool for optimization of dietary amino acids in fractional factorial experiments, in which the use of the global desirability function may be able to overcome the underestimations of dietary amino acids resulting from the RSM model. © 2014 Poultry Science Association Inc.
Hikosaka, Okihide
2014-01-01
Gaze is strongly attracted to visual objects that have been associated with rewards. Key to this function is a basal ganglia circuit originating from the caudate nucleus (CD), mediated by the substantia nigra pars reticulata (SNr), and aiming at the superior colliculus (SC). Notably, subregions of CD encode values of visual objects differently: stably by CD tail [CD(T)] vs. flexibly by CD head [CD(H)]. Are the stable and flexible value signals processed separately throughout the CD-SNr-SC circuit? To answer this question, we identified SNr neurons by their inputs from CD and outputs to SC and examined their sensitivity to object values. The direct input from CD was identified by SNr neuron's inhibitory response to electrical stimulation of CD. We found that SNr neurons were separated into two groups: 1) neurons inhibited by CD(T) stimulation, located in the caudal-dorsal-lateral SNr (cdlSNr), and 2) neurons inhibited by CD(H) stimulation, located in the rostral-ventral-medial SNr (rvmSNr). Most of CD(T)-recipient SNr neurons encoded stable values, whereas CD(H)-recipient SNr neurons tended to encode flexible values. The output to SC was identified by SNr neuron's antidromic response to SC stimulation. Among the antidromically activated neurons, many encoded only stable values, while some encoded only flexible values. These results suggest that CD(T)-cdlSNr-SC circuit and CD(H)-rvmSNr-SC circuit transmit stable and flexible value signals, largely separately, to SC. The speed of signal transmission was faster through CD(T)-cdlSNr-SC circuit than through CD(H)-rvmSNr-SC circuit, which may reflect automatic and controlled gaze orienting guided by these circuits. PMID:25540224
NASA Astrophysics Data System (ADS)
Lachhwani, Kailash; Poonia, Mahaveer Prasad
2012-08-01
In this paper, we show a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using fuzzy goal programming approach. In the proposed method, the tolerance membership functions for the fuzzily described numerator and denominator part of the objective functions of all levels as well as the control vectors of the higher level decision makers are respectively defined by determining individual optimal solutions of each of the level decision makers. A possible relaxation of the higher level decision is considered for avoiding decision deadlock due to the conflicting nature of objective functions. Then, fuzzy goal programming approach is used for achieving the highest degree of each of the membership goal by minimizing negative deviational variables. We also provide sensitivity analysis with variation of tolerance values on decision vectors to show how the solution is sensitive to the change of tolerance values with the help of a numerical example.
Katsevich, Alexander J.; Ramm, Alexander G.
1996-01-01
Local tomographic data is used to determine the location and value of a discontinuity between a first internal density of an object and a second density of a region within the object. A beam of radiation is directed in a predetermined pattern through the region of the object containing the discontinuity. Relative attenuation data of the beam is determined within the predetermined pattern having a first data component that includes attenuation data through the region. The relative attenuation data is input to a pseudo-local tomography function, where the difference between the internal density and the pseudo-local tomography function is computed across the discontinuity. The pseudo-local tomography function outputs the location of the discontinuity and the difference in density between the first density and the second density.
Multi-objective optimization for model predictive control.
Wojsznis, Willy; Mehta, Ashish; Wojsznis, Peter; Thiele, Dirk; Blevins, Terry
2007-06-01
This paper presents a technique of multi-objective optimization for Model Predictive Control (MPC) where the optimization has three levels of the objective function, in order of priority: handling constraints, maximizing economics, and maintaining control. The greatest weights are assigned dynamically to control or constraint variables that are predicted to be out of their limits. The weights assigned for economics must outweigh those assigned for control objectives. Control variables (CV) can be controlled at fixed targets or within one- or two-sided ranges around the targets. Manipulated variables (MV) can have assigned targets too, which may be predefined values or current actual values. This MV functionality is extremely useful when economic objectives are not defined for some or all of the MVs. To achieve this complex operation, handle process outputs predicted to go out of limits, and guarantee a solution for any condition, the technique makes use of the priority structure, penalties on slack variables, and redefinition of the constraint and control model. An engineering implementation of this approach is shown in the MPC embedded in an industrial control system. The optimization and control of a distillation column, the standard Shell heavy oil fractionator (HOF) problem, is adequately achieved with this MPC.
Exploration of Objective Functions for Optimal Placement of Weather Stations
NASA Astrophysics Data System (ADS)
Snyder, A.; Dietterich, T.; Selker, J. S.
2016-12-01
Many regions of Earth lack ground-based sensing of weather variables. For example, most countries in Sub-Saharan Africa do not have reliable weather station networks. This absence of sensor data has many consequences ranging from public safety (poor prediction and detection of severe weather events), to agriculture (lack of crop insurance), to science (reduced quality of world-wide weather forecasts, climate change measurement, etc.). The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to locate each weather station. We can formulate this as the following optimization problem: Determine a set of N sites that jointly optimize the value of an objective function. The purpose of this poster is to propose and assess several objective functions. In addition to standard objectives (e.g., minimizing the summed squared error of interpolated values over the entire region), we consider objectives that minimize the maximum error over the region and objectives that optimize the detection of extreme events. An additional issue is that each station measures more than 10 variables—how should we balance the accuracy of our interpolated maps for each variable? Weather sensors inevitably drift out of calibration or fail altogether. How can we incorporate robustness to failed sensors into our network design? Another important requirement is that the network should make it possible to detect failed sensors by comparing their readings with those of other stations. How can this requirement be met? Finally, we provide an initial assessment of the computational cost of optimizing these various objective functions. 
We invite everyone to join the discussion at our poster by proposing additional objectives, identifying additional issues to consider, and expanding our bibliography of relevant papers. A prize (derived from grapes grown in Oregon) will be awarded for the most insightful contribution to the discussion!
A conceptual DFT study of the molecular properties of glycating carbonyl compounds.
Frau, Juan; Glossman-Mitnik, Daniel
2017-01-01
Several glycating carbonyl compounds have been studied by resorting to the latest Minnesota family of density functionals with the objective of determining their molecular properties. In particular, the chemical reactivity descriptors that arise from conceptual density functional theory and chemical reactivity theory have been calculated through a [Formula: see text]SCF protocol. The validity of the KID (Koopmans' in DFT) procedure has been checked by comparing the reactivity descriptors obtained from the values of the HOMO and LUMO with those calculated through vertical energy values. The reactivity sites have been determined by means of the calculation of the Fukui function indices, the condensed dual descriptor [Formula: see text], and the electrophilic and nucleophilic Parr functions. The glycating power of the studied compounds has been compared with the same property for simple carbohydrates. Graphical abstract: Several glycating carbonyl compounds have been studied by resorting to the latest Minnesota family of density functionals with the objective of determining their molecular properties, the chemical reactivity descriptors, and the validity of the KID (Koopmans' in DFT) procedure.
Functional Requirements: 2014 No Child Left Behind--Annual Measurable Achievement Objectives
ERIC Educational Resources Information Center
Minnesota Department of Education, 2014
2014-01-01
This document describes the Minnesota No Child Left Behind (NCLB) calculation as it relates to measuring Title III districts for Annual Measurable Achievement Objectives (AMAO). In 2012, a new assessment was used to measure language proficiency skills for English Learners. New AMAO targets were created, and new values for determining individual…
Optimized Reduction of Unsteady Radial Forces in a Singlechannel Pump for Wastewater Treatment
NASA Astrophysics Data System (ADS)
Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang
2016-11-01
A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying the weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.
Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G
2013-10-01
Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for the use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. 
Function fit parameters and their standard error estimated by using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time integrated activity coefficient and its standard error for most time activity data in molecular radiotherapy.
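The core fit-then-integrate step can be sketched for the mono-exponential case; the activity samples below are synthetic, and NUKFIT itself selects among richer sums of exponentials with explicit error models:

```python
import numpy as np

# Mono-exponential fit A(t) = A0 * exp(-lam * t) via log-linear least squares,
# then analytic integration to get the time-integrated activity coefficient
# A0 / lam. Times and activities are synthetic, noise-free values.
t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])   # hours after administration
a = 0.8 * np.exp(-0.03 * t)                  # fraction of administered activity

slope, intercept = np.polyfit(t, np.log(a), 1)
lam, a0 = -slope, np.exp(intercept)
tia = a0 / lam                               # integral of A0*exp(-lam*t) over [0, inf)
print(round(lam, 4), round(a0, 4), round(tia, 3))
```

Analytic integration of each fitted exponential term, as done here, is what lets the software report the time-integrated activity coefficient without numerical quadrature; standard errors then follow by Gaussian error propagation through `lam` and `a0`.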
Multi-objective possibilistic model for portfolio selection with transaction cost
NASA Astrophysics Data System (ADS)
Jana, P.; Roy, T. K.; Mazumder, S. K.
2009-06-01
In this paper, we introduce the possibilistic mean value and variance of continuous distribution, rather than probability distributions. We propose a multi-objective Portfolio based model and added another entropy objective function to generate a well diversified asset portfolio within optimal asset allocation. For quantifying any potential return and risk, portfolio liquidity is taken into account and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.
Least-Squares, Continuous Sensitivity Analysis for Nonlinear Fluid-Structure Interaction
2009-08-20
Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad
2018-02-01
The inherent volatility and unpredictable nature of renewable generations and load demand pose considerable challenges for energy exchange optimization of microgrids (MG). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. In so doing, three various risk-based strategies are distinguished by using the conditional value at risk (CVaR) approach. The proposed model is formulated with two distinct objective functions. The first function minimizes the operation and maintenance costs, the cost of power transaction between the upstream network and MGs, as well as the power loss cost, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, the stochastic scenario-based approach is incorporated into the model in order to handle the uncertainty. Also, the Kantorovich distance scenario reduction method has been implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGAII) is applied to minimize the objective functions simultaneously, and the best solution is extracted by the fuzzy satisfying method with respect to the risk-based strategies. To indicate the performance of the proposed model, it is performed on the modified IEEE 33-bus distribution system, and the obtained results show that the presented approach can be considered as an efficient tool for optimal energy exchange optimization of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Using multi-species occupancy models in structured decision making on managed lands
Sauer, John R.; Blank, Peter J.; Zipkin, Elise F.; Fallon, Jane E.; Fallon, Frederick W.
2013-01-01
Land managers must balance the needs of a variety of species when manipulating habitats. Structured decision making provides a systematic means of defining choices and choosing among alternative management options; implementation of a structured decision requires quantitative approaches to predicting consequences of management on the relevant species. Multi-species occupancy models provide a convenient framework for making structured decisions when the management objective is focused on a collection of species. These models use replicate survey data that are often collected on managed lands. Occupancy can be modeled for each species as a function of habitat and other environmental features, and Bayesian methods allow for estimation and prediction of collective responses of groups of species to alternative scenarios of habitat management. We provide an example of this approach using data from breeding bird surveys conducted in 2008 at the Patuxent Research Refuge in Laurel, Maryland, evaluating the effects of eliminating meadow and wetland habitats on scrub-successional and woodland-breeding bird species using summed total occupancy of species as an objective function. Removal of meadows and wetlands decreased value of an objective function based on scrub-successional species by 23.3% (95% CI: 20.3–26.5), but caused only a 2% (0.5, 3.5) increase in value of an objective function based on woodland species, documenting differential effects of elimination of meadows and wetlands on these groups of breeding birds. This approach provides a useful quantitative tool for managers interested in structured decision making.
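A minimal sketch of a summed-occupancy objective of the kind described, assuming invented logistic coefficients for three hypothetical species rather than the refuge's fitted Bayesian model:

```python
import math

# Each species' occupancy is modeled as logistic in a habitat covariate
# (e.g. fraction of meadow/wetland retained). Coefficients are invented.
species_coefs = {
    "scrub_A": (0.5, 2.0),    # (intercept, habitat slope)
    "scrub_B": (-0.2, 1.5),
    "wood_A": (1.0, -0.8),
}

def occupancy(intercept, slope, habitat):
    return 1.0 / (1.0 + math.exp(-(intercept + slope * habitat)))

def summed_occupancy(habitat):
    """Objective function: summed expected occupancy across species."""
    return sum(occupancy(b0, b1, habitat) for b0, b1 in species_coefs.values())

# Compare management scenarios: retain habitat (1.0) vs remove it (0.0).
print(round(summed_occupancy(1.0), 3), round(summed_occupancy(0.0), 3))
```

In the full approach the same comparison is made on posterior predictions for each species group separately, which is how the study quantifies the differential 23.3% and 2% changes for scrub-successional versus woodland birds.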
Meta-heuristic algorithm to solve two-sided assembly line balancing problems
NASA Astrophysics Data System (ADS)
Wirawan, A. D.; Maruf, A.
2016-02-01
A two-sided assembly line is a set of sequential workstations where task operations can be performed at two sides of the line. This type of line is commonly used for the assembly of large-sized products such as cars, buses, and trucks. This paper proposes a decoding algorithm with Teaching-Learning-Based Optimization (TLBO), a recently developed nature-inspired search method, to solve the two-sided assembly line balancing problem (TALBP). The algorithm aims to minimize the number of mated workstations for the given cycle time without violating the synchronization constraints. The correlation between the input parameters and the emergence point of the objective function value is tested using scenarios generated by design of experiments. A two-sided assembly line operated by a multinational manufacturing company in Indonesia is considered as the object of this paper. The results of the proposed algorithm show a reduction in the number of workstations and indicate a negative correlation between the emergence point of the objective function value and the size of the population used.
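TLBO alternates a teacher phase (pull the class toward the current best, away from the mean) and a learner phase (pairwise peer learning). A continuous-variable sketch on a toy objective follows; the TALBP-specific decoding step from candidate vectors to station assignments is omitted, and the function and bounds are illustrative:

```python
import numpy as np

def tlbo(f, lo, hi, pop=20, iters=100, seed=0):
    """Teaching-Learning-Based Optimization (minimization), continuous sketch."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(pop, lo.size))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase: shift every learner toward the best, away from the class mean
        teacher = X[np.argmin(F)]
        Tf = rng.integers(1, 3)                       # teaching factor in {1, 2}
        Xn = np.clip(X + rng.random(X.shape) * (teacher - Tf * X.mean(axis=0)), lo, hi)
        Fn = np.array([f(x) for x in Xn])
        better = Fn < F                               # greedy acceptance
        X[better], F[better] = Xn[better], Fn[better]
        # Learner phase: each learner steps toward (or away from) a random peer
        for i in range(pop):
            j = int(rng.integers(pop))
            if i == j:
                continue
            step = X[i] - X[j] if F[i] < F[j] else X[j] - X[i]
            cand = np.clip(X[i] + rng.random(lo.size) * step, lo, hi)
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    return X[np.argmin(F)], float(F.min())

lo, hi = np.full(2, -5.0), np.full(2, 5.0)
best_x, best_f = tlbo(lambda x: float(x @ x), lo, hi)   # toy sphere objective
```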
Multidisciplinary design optimization using genetic algorithms
NASA Technical Reports Server (NTRS)
Unal, Resit
1994-01-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GAs) uses a search procedure which is fundamentally different from gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GAs are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GAs are attractive since they use only objective function values in the search process, so gradient calculations are avoided. Hence, GAs are able to deal with discrete variables. Studies report success in the use of GAs for aircraft design optimization studies, trajectory analysis, space structure design and control systems design.
In these studies reliable convergence was achieved, but the number of function evaluations was large compared with efficient gradient methods. Application of GAs is underway for a cost optimization study of a launch-vehicle fuel tank and the structural design of a wing. The strengths and limitations of GAs for launch vehicle design optimization are studied.
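The selection-crossover-mutation loop described above can be sketched for purely discrete design variables. Everything below is a toy illustration: the fitness function, the "engine count" and "material" genes, and truncation selection standing in for fitness-proportional reproduction are all hypothetical simplifications:

```python
import random

def ga(fitness, choices, pop_size=30, gens=60, p_mut=0.1, seed=42):
    """Minimal GA over discrete design variables; choices[k] lists the
    admissible values of gene k (e.g. engine count, material id)."""
    rnd = random.Random(seed)
    pop = [[rnd.choice(c) for c in choices] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # truncation selection (simplification)
        pop = elite[:]
        while len(pop) < pop_size:
            a, b = rnd.sample(elite, 2)
            cut = rnd.randrange(1, len(choices))  # one-point crossover
            child = a[:cut] + b[cut:]
            for k, c in enumerate(choices):       # mutation: re-draw a gene
                if rnd.random() < p_mut:
                    child[k] = rnd.choice(c)
            pop.append(child)
    return max(pop, key=fitness)

# Toy discrete design problem: 1-8 engines, 3 candidate materials
fit = lambda d: -(d[0] - 5) ** 2 - (d[1] - 1) ** 2   # peak at 5 engines, material 1
best = ga(fit, [list(range(1, 9)), [0, 1, 2]])
```

Note that no gradient of `fit` is ever taken, which is exactly why the abstract argues GAs handle discrete variables that defeat gradient-based optimizers.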
Rudebeck, Peter H; Murray, Elisabeth A
2011-12-01
The primate orbitofrontal cortex (OFC) is often treated as a single entity, but architectonic and connectional neuroanatomy indicate that it has distinguishable parts. Nevertheless, few studies have attempted to dissociate the functions of its subregions. Here we review findings from recent neuropsychological and neurophysiological studies that do so. The lateral OFC seems to be important for learning, representing, and updating specific object-reward associations. The medial OFC seems to be important for value comparisons and choosing among objects on that basis. Rather than viewing this dissociation of function in terms of learning versus choosing, however, we suggest that it reflects the distinction between contrasts and comparisons: differences versus similarities. Making use of high-dimensional representations that arise from the convergence of several sensory modalities, the lateral OFC encodes contrasts among outcomes. The medial OFC reduces these contrasting representations of value to a single dimension, a common currency, in order to compare alternative choices. © 2011 New York Academy of Sciences.
Quantity, Revisited: An Object-Oriented Reusable Class
NASA Technical Reports Server (NTRS)
Funston, Monica Gayle; Gerstle, Walter; Panthaki, Malcolm
1998-01-01
"Quantity", a prototype implementation of an object-oriented class, was developed for two reasons: to help engineers and scientists manipulate the many types of quantities encountered during routine analysis, and to create a reusable software component to for large domain-specific applications. From being used as a stand-alone application to being incorporated into an existing computational mechanics toolkit, "Quantity" appears to be a useful and powerful object. "Quantity" has been designed to maintain the full engineering meaning of values with respect to units and coordinate systems. A value is a scalar, vector, tensor, or matrix, each of which is composed of Value Components, each of which may be an integer, floating point number, fuzzy number, etc., and its associated physical unit. Operations such as coordinate transformation and arithmetic operations are handled by member functions of "Quantity". The prototype has successfully tested such characteristics as maintaining a numeric value, an associated unit, and an annotation. In this paper we further explore the design of "Quantity", with particular attention to coordinate systems.
2010-01-01
Background Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, impacting its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistics have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach involves the use of a multi-objective algorithm to maximize two objectives: the spatial scan statistic and the geometric penalty function. Results & Discussion We present a novel scan statistic algorithm employing a function based on the graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection nodes cohesion function. A disconnection node is defined as a region within a cluster whose removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted through the immense set of possible irregularly shaped candidate cluster solutions. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of attainment function is used. In this paper we compare different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection nodes cohesion function. We also build multi-objective scans using those three functions and compare them with the previous penalized likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. Conclusions We show that, compared to the other single-objective algorithms, multi-objective scans present better performance regarding power, sensitivity and positive predictive value.
The multi-objective non-connectivity scan is faster and better suited for the detection of moderately irregularly shaped clusters. The multi-objective cohesion scan is most effective for the detection of highly irregularly shaped clusters. PMID:21034451
Johnson, Fred A.; Jensen, Gitte H.; Madsen, Jesper; Williams, Byron K.
2014-01-01
We explored the application of dynamic-optimization methods to the problem of pink-footed goose (Anser brachyrhynchus) management in western Europe. We were especially concerned with the extent to which uncertainty in population dynamics influenced an optimal management strategy, the gain in management performance that could be expected if uncertainty could be eliminated or reduced, and whether an adaptive or robust management strategy might be most appropriate in the face of uncertainty. We combined three alternative survival models with three alternative reproductive models to form a set of nine annual-cycle models for pink-footed geese. These models represent a wide range of possibilities concerning the extent to which demographic rates are density dependent or independent, and the extent to which they are influenced by spring temperatures. We calculated state-dependent harvest strategies for these models using stochastic dynamic programming and an objective function that maximized sustainable harvest, subject to a constraint on desired population size. As expected, attaining the largest mean objective value (i.e., the relative measure of management performance) depended on the ability to match a model-dependent optimal strategy with its generating model of population dynamics. The nine models suggested widely varying objective values regardless of the harvest strategy, with the density-independent models generally producing higher objective values than models with density-dependent survival. In the face of uncertainty as to which of the nine models is most appropriate, the optimal strategy assuming that both survival and reproduction were a function of goose abundance and spring temperatures maximized the expected minimum objective value (i.e., maxi–min). In contrast, the optimal strategy assuming equal model weights minimized the expected maximum loss in objective value. 
The expected value of eliminating model uncertainty was an increase in objective value of only 3.0%. This value represents the difference between the best that could be expected if the most appropriate model were known and the best that could be expected in the face of model uncertainty. The value of eliminating uncertainty about the survival process was substantially higher than that associated with the reproductive process, which is consistent with evidence that variation in survival is more important than variation in reproduction in relatively long-lived avian species. Comparing the expected objective value if the most appropriate model were known with that of the maxi–min robust strategy, we found the value of eliminating uncertainty to be an expected increase of 6.2% in objective value. This result underscores the conservatism of the maxi–min rule and suggests that risk-neutral managers would prefer the optimal strategy that maximizes expected value, which is also the strategy that is expected to minimize the maximum loss (i.e., a strategy based on equal model weights). The low value of information calculated for pink-footed geese suggests that a robust strategy (i.e., one in which no learning is anticipated) could be nearly as effective as an adaptive one (i.e., a strategy in which the relative credibility of models is assessed through time). Of course, an alternative explanation for the low value of information is that the set of population models we considered was too narrow to represent key uncertainties in population dynamics. Yet we know that questions about the presence of density dependence must be central to the development of a sustainable harvest strategy. And while there are potentially many environmental covariates that could help explain variation in survival or reproduction, our admission of models in which vital rates are drawn randomly from reasonable distributions represents a worst-case scenario for management.
We suspect that much of the value of the various harvest strategies we calculated is derived from the fact that they are state dependent, such that appropriate harvest rates depend on population abundance and weather conditions, as well as our focus on an infinite time horizon for sustainability.
Your Money and the Federal Reserve System.
ERIC Educational Resources Information Center
Federal Reserve Bank of Minneapolis, Minn.
The booklet explores various roles which money has played throughout history and examines the relationship between money and the Federal Reserve System. The major objective is to increase understanding of the performance of various functions such as making money work as a medium of exchange and as a measure of value and of storing value for future…
Probing the Donor and Acceptor Substrate Specificity of the Gamma-Glutamyl Transpeptidase
2012-01-17
glutathione can function as a source of cysteine. Mutant strains of F. tularensis that lack functional GGT have been shown to have impaired intracellular...conservation of structure and function between human and bacterial GGT homologues, significant differences in acceptor substrate and inhibitor preferences are...with the lowest value of the MODELER objective function. The three-dimensional (3D) fold of the generated models was verified with PROSA II,40 and
NASA Astrophysics Data System (ADS)
Croke, B. F.
2008-12-01
The role of performance indicators is to give an accurate indication of the fit between a model and the system being modelled. As all measurements have an associated uncertainty (determining the significance that should be given to the measurement), performance indicators should take into account uncertainties in the observed quantities being modelled as well as in the model predictions (due to uncertainties in inputs, model parameters and model structure). In the presence of significant uncertainty in observed and modelled output of a system, failure to adequately account for variations in the uncertainties means that the objective function only gives a measure of how well the model fits the observations, not how well the model fits the system being modelled. Since in most cases the interest lies in fitting the system response, it is vital that the objective function(s) be designed to account for these uncertainties. Most objective functions (e.g. those based on the sum of squared residuals) assume homoscedastic uncertainties. If the model contribution to the variations in residuals can be ignored, then transformations (e.g. Box-Cox) can be used to remove (or at least significantly reduce) heteroscedasticity. An alternative which is more generally applicable is to explicitly represent the uncertainties in the observed and modelled values in the objective function. Previous work on this topic addressed the modifications to standard objective functions (Nash-Sutcliffe efficiency, RMSE, chi-squared, coefficient of determination) using the optimal weighted averaging approach. This paper extends this previous work, addressing the issue of serial correlation. A form for an objective function that includes serial correlation will be presented, and the impact on model fit discussed.
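The core idea above — carry per-observation uncertainty explicitly in the objective function — can be sketched as a variance-weighted sum of squares (variable names and numbers are illustrative; this is the uncorrelated case, before the paper's serial-correlation extension):

```python
import numpy as np

def weighted_sse(obs, sim, var_obs, var_mod):
    """Sum of squared residuals, each down-weighted by its total variance
    (observation + model), so uncertain points carry less weight; with
    Gaussian errors this is the chi-squared statistic of the fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    w = 1.0 / (np.asarray(var_obs, float) + np.asarray(var_mod, float))
    return float(np.sum(w * (obs - sim) ** 2))

# A precise observation (total variance 1) and an uncertain one (total variance 4):
# the second residual is twice as large but contributes the same amount
print(weighted_sse([1.0, 2.0], [0.0, 0.0], [1.0, 1.0], [0.0, 3.0]))  # 2.0
```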
Retinal vessel enhancement based on the Gaussian function and image fusion
NASA Astrophysics Data System (ADS)
Moraru, Luminita; Obreja, Cristian Dragoş
2017-01-01
The Gaussian function is essential in the construction of the Frangi and COSFIRE (combination of shifted filter responses) filters. The connection of broken vessels and an accurate extraction of the vascular structure are the main goals of this study. Thus, the outcomes of the Frangi and COSFIRE edge detection algorithms are fused using the Dempster-Shafer algorithm with the aim of improving detection and enhancing the retinal vascular structure. For objective results, the average diameters of the retinal vessels provided by the Frangi, COSFIRE and Dempster-Shafer fusion algorithms are measured. These experimental values are compared to the ground truth values provided by manually segmented retinal images. We prove the superiority of the fusion algorithm in terms of image quality by using the figure of merit objective metric that correlates the effects of all post-processing techniques.
1978-09-01
interaction with other persons and which relate him to various subsets within the domain with varying degrees of positive or negative affect[31:300...and downs of the homeostatic functioning of an organism or with small changes in stimulus conditions. 3. Attitudes imply a relationship between a person ...and objects. These objects may be other persons , groups, institutions, inert physical objects, values, social issues, or ideologies. 4. The
Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang
2015-01-01
It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence—with at most a linear convergence rate—because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
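The PRP update described above can be sketched with an Armijo backtracking line search and a restart safeguard. This is the standard PRP+ variant (β clipped at zero, restart to steepest descent when the direction stops descending), not necessarily the exact restart strategy of the paper:

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-8, max_iter=500):
    """Polak-Ribiere-Polyak conjugate gradient with Armijo backtracking,
    restarting to steepest descent when the direction is not a descent one."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                       # restart: d must point downhill
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:   # Armijo sufficient decrease
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Convex quadratic test problem: the minimizer solves A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = prp_cg(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(2))
```

On this quadratic the minimizer is A⁻¹b = (0.2, 0.4), so convergence of the iterate is easy to check.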
[Multifocal visual electrophysiology in visual function evaluation].
Peng, Shu-Ya; Chen, Jie-Min; Liu, Rui-Jue; Zhou, Shu; Liu, Dong-Mei; Xia, Wen-Tao
2013-08-01
Multifocal visual electrophysiology, consisting of multifocal electroretinography (mfERG) and multifocal visual evoked potentials (mfVEP), can objectively evaluate retinal function and the status of the retino-cortical conduction pathway by stimulating many local retinal regions and obtaining each local response simultaneously. Having many advantages such as short testing time and high sensitivity, it has been widely used in clinical ophthalmology, especially in the diagnosis of retinal disease and glaucoma. It is a new objective technique in clinical forensic medicine, involving visual function evaluation of ocular trauma in particular. This article summarizes the stimulation modes, electrode positions, analysis methods, and visual function evaluation of mfERG and mfVEP, and discusses the value of multifocal visual electrophysiology in forensic medicine.
A Model of Object-Identities and Values
1990-02-23
integrity constraints in its construct, which provides the natural integration of the logical database model and the object-oriented database model. ...portions are integrated by a simple commutative diagram of modeling functions. The formalism includes the expression of integrity constraints in its ... 5.2.2 The Concept Model and Its Semantics ... 5.2.3 Two Kinds of Predicates
Interval-Valued Rank in Finite Ordered Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff; Pogel, Alex; Purvine, Emilie
We consider the concept of rank as a measure of the vertical levels and positions of elements of partially ordered sets (posets). We are motivated by the need for algorithmic measures on large, real-world hierarchically-structured data objects like the semantic hierarchies of ontological databases. These rarely satisfy the strong property of gradedness, which is required for traditional rank functions to exist. Representing such semantic hierarchies as finite, bounded posets, we recognize the duality of ordered structures to motivate rank functions which respect verticality both from the bottom and from the top. Our rank functions are thus interval-valued, and always exist, even for non-graded posets, providing order homomorphisms to an interval order on the interval-valued ranks. The concept of rank width arises naturally, allowing us to identify the poset region with point-valued width as its longest graded portion (which we call the “spindle”). A standard interval rank function is naturally motivated both in terms of its extremality and on pragmatic grounds. Its properties are examined, including the relationship to traditional grading and rank functions, and methods to assess comparisons of standard interval-valued ranks.
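One plausible reading of an interval-valued rank — the longest chain up from the bottom paired with the poset height minus the longest chain down from the top — can be sketched as follows. This illustrates the general idea on a small non-graded poset and is not necessarily the authors' exact standard interval rank:

```python
def interval_ranks(elements, covers):
    """Interval rank [up(x), H - down(x)] for each element of a finite bounded
    poset given by its cover relations (lo, hi); the interval collapses to a
    point exactly on graded portions of the poset."""
    preds = {e: [] for e in elements}
    succs = {e: [] for e in elements}
    for lo, hi in covers:
        succs[lo].append(hi)
        preds[hi].append(lo)

    def longest_chains(nbrs):
        memo = {}
        def h(x):
            if x not in memo:
                memo[x] = 1 + max((h(y) for y in nbrs[x]), default=-1)
            return memo[x]
        for e in elements:
            h(e)
        return memo

    up = longest_chains(preds)        # longest chain strictly below each element
    down = longest_chains(succs)      # longest chain strictly above each element
    H = max(up.values())              # height of the poset
    return {e: (up[e], H - down[e]) for e in elements}

# Non-graded bounded poset: 0 < a < 1 and 0 < b < c < 1
r = interval_ranks("0abc1", [("0", "a"), ("a", "1"), ("0", "b"), ("b", "c"), ("c", "1")])
```

Here `a` sits on a short chain and gets the genuine interval (1, 2), while `b` and `c` lie on the graded portion and get point-valued ranks.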
Economic selection index development for Beefmaster cattle I: Terminal breeding objective.
Ochsner, K P; MacNeil, M D; Lewis, R M; Spangler, M L
2017-03-01
The objective of this study was to develop an economic selection index for Beefmaster cattle in a terminal production system where bulls are mated to mature cows with all resulting progeny harvested. National average prices from 2010 to 2014 were used to establish income and expenses for the system. Phenotypic and genetic parameter values among the selection criteria and goal traits were obtained from the literature. Economic values were estimated by simulating 100,000 animals and approximating the partial derivatives of the profit function by perturbing traits one at a time, by 1 unit, while holding the other traits constant at their respective means. Relative economic values (REV) for the terminal objective traits HCW, marbling score (MS), ribeye area (REA), 12th-rib fat (FAT), and feed intake (FI) were 91.29, 17.01, 8.38, -7.07, and -29.66, respectively. Consequently, improving the efficiency of beef production is expected to impact profitability more than improving carcass merit alone. The accuracy of the index lies between 0.338 (phenotypic selection) and 0.503 (breeding values known without error). The application of this index would aid Beefmaster breeders in their sire selection decisions, facilitating genetic improvement for a terminal breeding objective.
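The perturbation scheme described above — simulate a population, bump one trait by a unit, and read off the change in mean profit — can be sketched with a toy linear profit function. The traits, prices, and distributions below are illustrative, not the paper's:

```python
import numpy as np

def economic_values(profit, means, cov, n=100_000, seed=1):
    """Approximate economic values as partial derivatives of mean profit,
    perturbing each trait by one unit over a simulated population."""
    rng = np.random.default_rng(seed)
    animals = rng.multivariate_normal(means, cov, size=n)
    base = profit(animals).mean()
    evs = []
    for k in range(len(means)):
        bumped = animals.copy()
        bumped[:, k] += 1.0            # one-unit perturbation of trait k only
        evs.append(profit(bumped).mean() - base)
    return np.array(evs)

# Toy profit: carcass weight earns 2.0 per unit, feed intake costs 0.5 per unit
profit = lambda t: 2.0 * t[:, 0] - 0.5 * t[:, 1]
ev = economic_values(profit, means=[300.0, 10.0], cov=np.diag([100.0, 4.0]))
```

For a linear profit function the recovered economic values equal the prices exactly; the simulation approach earns its keep when profit is nonlinear in the traits.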
Pant, Anup D; Dorairaj, Syril K; Amini, Rouzbeh
2018-07-01
Quantifying the mechanical properties of the iris is important, as it provides insight into the pathophysiology of glaucoma. Recent ex vivo studies have shown that the mechanical properties of the iris are different in glaucomatous eyes as compared to normal ones. Notwithstanding the importance of the ex vivo studies, such measurements are severely limited for diagnosis and preclude development of treatment strategies. With the advent of detailed imaging modalities, it is possible to determine the in vivo mechanical properties using inverse finite element (FE) modeling. An inverse modeling approach requires an appropriate objective function for reliable estimation of parameters. In the case of the iris, numerous measurements such as iris chord length (CL) and iris concavity (CV) are made routinely in clinical practice. In this study, we have evaluated five different objective functions chosen based on the iris biometrics (in the presence and absence of clinical measurement errors) to determine the appropriate criterion for inverse modeling. Our results showed that in the absence of experimental measurement error, a combination of iris CL and CV can be used as the objective function. However, with the addition of measurement errors, the objective functions that employ a large number of local displacement values provide more reliable outcomes.
A risk-based multi-objective model for optimal placement of sensors in water distribution system
NASA Astrophysics Data System (ADS)
Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein
2018-02-01
In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). The model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. The CVaR considers uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events, whose losses occur at the tail of the loss distribution. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of affected population and detection time) together with the two other main criteria of optimal sensor placement: the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. A sensitivity analysis is also performed to investigate the importance of each criterion on the PROMETHEE results under three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests 6 sensors with a suitable distribution that covers approximately all regions of the WDS. Optimal values of the CVaR of affected population and detection time, as well as the probability of undetected events, for the best solution are 17,055 persons, 31 min and 0.045%, respectively.
The obtained results of the proposed methodology in Lamerd WDS show applicability of CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme value of losses in WDS.
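The PROMETHEE ranking step used above can be sketched with the "usual" (0/1) preference function and net outranking flows. The alternatives, criteria, and weights below are illustrative, not the Lamerd results:

```python
import numpy as np

def promethee_ii(scores, weights, minimize):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function (1 if strictly better on a criterion, else 0)."""
    n = scores.shape[0]
    phi = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            better = np.where(minimize, scores[i] < scores[j], scores[i] > scores[j])
            pi = weights[better].sum() / (n - 1)   # weighted degree to which i beats j
            phi[i] += pi                           # contributes to i's positive flow
            phi[j] -= pi                           # and to j's negative flow
    return phi

# Three candidate sensor layouts scored on two criteria, both to be minimized
scores = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 0.5]])
phi = promethee_ii(scores, np.array([0.5, 0.5]), minimize=np.array([True, True]))
ranking = list(np.argsort(-phi))   # best alternative first
```

Layout 0 dominates layout 1 outright and beats layout 2 on one criterion, so it tops the ranking; richer preference functions (linear, Gaussian) slot into the same structure.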
NASA Astrophysics Data System (ADS)
Castiglioni, S.; Toth, E.
2009-04-01
In the calibration procedure of continuously-simulating models, the hydrologist has to choose which part of the observed hydrograph is most important to fit, either implicitly, through the visual agreement in manual calibration, or explicitly, through the choice of the objective function(s). By changing the objective function it is possible to emphasise different kinds of errors, giving them more weight in the calibration phase. The objective functions used for calibrating hydrological models are generally of the quadratic type (mean squared error, correlation coefficient, coefficient of determination, etc.) and are therefore oversensitive to high and extreme error values, which typically correspond to high and extreme streamflow values. This is appropriate when, as in the majority of streamflow forecasting applications, the focus is on the ability to reproduce potentially dangerous flood events; on the contrary, if the aim of the modelling is the reproduction of low and average flows, as is the case in water resource management problems, this may result in a deterioration of the forecasting performance. This contribution presents the results of a series of automatic calibration experiments of a continuously-simulating rainfall-runoff model applied over several real-world case-studies, where the objective function is chosen so as to highlight the fit of average and low flows. In this work a simple conceptual model will be used, of the lumped type, with a relatively low number of parameters to be calibrated. The experiments will be carried out for a set of case-study watersheds in Central Italy, covering an extremely wide range of geo-morphologic conditions and for which at least five years of contemporary daily series of streamflow, precipitation and evapotranspiration estimates are available.
Different objective functions will be tested in calibration and the results will be compared, over validation data, against those obtained with traditional squared functions. A companion work presents the results, over the same case-study watersheds and observation periods, of a system-theoretic model, again calibrated for reproducing average and low streamflows.
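Emphasising low and average flows is commonly done by applying the usual quadratic criterion to transformed flows. A sketch comparing plain and log-transformed Nash-Sutcliffe efficiency (the flow series are made up; the paper's actual objective functions may differ):

```python
import numpy as np

def nse(obs, sim, transform=lambda q: q):
    """Nash-Sutcliffe efficiency under an optional transform; a log or
    Box-Cox transform shifts the weight from flood peaks to low flows."""
    o = transform(np.asarray(obs, float))
    s = transform(np.asarray(sim, float))
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

obs = np.array([1.0, 2.0, 100.0])      # two low-flow days, one flood peak
sim = np.array([2.0, 3.0, 100.0])      # peak simulated perfectly, low flows poorly
plain = nse(obs, sim)                  # dominated by the peak: near 1
low = nse(obs, sim, transform=np.log)  # penalises the low-flow errors
```

The same simulation scores almost perfectly on plain NSE yet noticeably worse on log-NSE, which is exactly the re-weighting the calibration experiments exploit.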
The value of identity: olfactory notes on orbitofrontal cortex function.
Gottfried, Jay A; Zelano, Christina
2011-12-01
Neuroscientific research has emphatically promoted the idea that the key function of the orbitofrontal cortex (OFC) is to encode value. Associative learning studies indicate that OFC representations of stimulus cues reflect the predictive value of expected outcomes. Neuroeconomic studies suggest that the OFC distills abstract representations of value from discrete commodities to optimize choice. Although value-based models provide good explanatory power for many different findings, these models are typically disconnected from the very stimuli and commodities giving rise to those value representations. Little provision is made, either theoretically or empirically, for the necessary cooperative role of object identity, without which value becomes orphaned from its source. As a step toward remediating the value of identity, this review provides a focused olfactory survey of OFC research, including new work from our lab, to highlight the elemental involvement of this region in stimulus-specific predictive coding of both perceptual outcomes and expected values. © 2011 New York Academy of Sciences.
The Impact of Inattention and Emotional Problems on Cognitive Control in Primary School Children
ERIC Educational Resources Information Center
Sorensen, Lin; Plessen, Kerstin J.; Lundervold, Astri J.
2012-01-01
Objective: The present study investigated the predictive value of parent/teacher reports of inattention and emotional problems on cognitive control function in 241 children in primary school. Method: Cognitive control was measured by functions of set-shifting and working memory as assessed by the Behavior Rating Inventory of Executive Function…
ERIC Educational Resources Information Center
Rice, Frances; Lifford, Kate J.; Thomas, Hollie V.; Thapar, Anita
2007-01-01
Objective: To assess the value of maternal and self-ratings of adolescent depression by investigating the extent to which these reports predicted a range of mental health and functional outcomes 4 years later. The potential influence of mother's own depressed mood on her ratings of adolescent depression and suicidal ideation on adolescent outcome…
Utility of coupling nonlinear optimization methods with numerical modeling software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.
1996-08-05
Results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and nonlinear optimization software modules, GLOBAL & LOCAL. GLO is designed for controlling and easy coupling to any scientific software application. GLO runs the optimization module and scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application over and over until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model is presented (Taylor cylinder impact test).
Technical note: An approach to derive breeding goals from the preferences of decision makers.
Alfonso, L
2016-11-01
This paper deals with the use of the Choquet integral to identify breeding objectives and construct an aggregate genotype. The Choquet integral can be interpreted as an extension of the aggregate genotype based on profit equations, substituting the vector of economic weights by a monotone function, called capacity, which allows the aggregation of traits based, for instance, on the preferences of decision makers. It allows the aggregation of traits with or without economic value, taking into account not only the importance of the breeding value of each trait but also the interaction among them. Two examples have been worked out for pig and dairy cattle breeding scenarios to illustrate its application. It is shown that the expression of stakeholders' or decision makers' preferences, as a single ranking of animals or groups of animals, could be sufficient to extract information to derive breeding objectives. It is also shown that coalitions among traits can be identified to evaluate whether a linear additive function, equivalent of the Hazel aggregate genotype where economic values are replaced by Shapley values, could be adequate to define the net merit of breeding animals.
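The discrete Choquet integral described above can be sketched in a few lines. This is an illustrative Python sketch, not the paper's implementation: the trait names, scores, and capacity values are hypothetical, and the capacity `mu` is assumed to be supplied on every subset of traits the computation needs.

```python
# Illustrative sketch of a discrete Choquet integral aggregating trait scores.
# The capacity `mu` (dict: frozenset of traits -> weight) is assumed monotone,
# with mu(empty set) = 0 and mu(all traits) = 1. All names are hypothetical.

def choquet(values, mu):
    """Discrete Choquet integral of non-negative `values` (trait -> score)
    with respect to the capacity `mu` (frozenset of traits -> weight)."""
    traits = sorted(values, key=values.get)      # ascending by score
    total, prev = 0.0, 0.0
    for i, t in enumerate(traits):
        upper = frozenset(traits[i:])            # traits scoring >= values[t]
        total += (values[t] - prev) * mu[upper]
        prev = values[t]
    return total
```

With mu({growth}) = 0.5, mu({lean}) = 0.7 and mu({growth, lean}) = 1.0, scores of 0.6 and 0.9 aggregate to 0.6 x 1.0 + 0.3 x 0.7 = 0.81; a super- or sub-additive capacity is what lets the index capture coalitions among traits, which a linear Hazel-type aggregate cannot.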
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
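Of the three computational approaches, the delta method is the most compact to illustrate. The sketch below is a generic numerical version under assumed inputs (an arbitrary function g, point estimates, and their covariance matrix), not the Stata or LIMDEP code the authors provide; the gradient is taken by central differences rather than analytically.

```python
import math

# Hedged sketch of the delta method: the standard error of g(beta_hat) is
# approximately sqrt(grad' V grad), with the gradient of g approximated by
# central differences. The g, estimates, and covariance are user-supplied.

def delta_method_se(g, beta_hat, cov, eps=1e-6):
    """Return the delta-method standard error of g evaluated at beta_hat,
    where `cov` is the (k x k) covariance matrix of the estimates."""
    k = len(beta_hat)
    grad = []
    for j in range(k):
        up = list(beta_hat); up[j] += eps
        dn = list(beta_hat); dn[j] -= eps
        grad.append((g(up) - g(dn)) / (2 * eps))
    var = sum(grad[i] * cov[i][j] * grad[j] for i in range(k) for j in range(k))
    return math.sqrt(var)
```

For a linear g the approximation is exact: with estimates (1.0, 2.0), covariance diag(0.04, 0.09) and g(b) = b0 + b1, the standard error is sqrt(0.13).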
Optimization of cutting parameters for machining time in turning process
NASA Astrophysics Data System (ADS)
Mavliutov, A. R.; Zlotnikov, E. G.
2018-03-01
This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the Linearization Programming Method with the Dual-Simplex algorithm, the Interior Point method, and the Augmented Lagrangian Genetic Algorithm (ALGA). Each of them is tested on an actual example: the minimization of machining time in the turning process. The computation was conducted in the MATLAB environment. The comparative results obtained from the application of these methods show that the optimal values of the linearized objective and the original function are the same, and that ALGA gives sufficiently accurate values; however, when the algorithm uses the hybrid function with the Interior Point algorithm, the resulting values have the maximal accuracy.
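As a rough illustration of the underlying constrained problem (not of the MATLAB solvers the paper compares), machining time in turning is commonly modeled as inversely proportional to the product of spindle speed and feed, minimized subject to machine and quality constraints. The constant, bounds, and constraint below are hypothetical:

```python
# Rough illustration of the constrained problem behind cutting-parameter
# optimization. Machining time is modeled as t = C / (n * f) for spindle
# speed n and feed f, with hypothetical bounds and constraints g(n, f) <= 0.

def minimize_time(C, n_bounds, f_bounds, g_constraints, steps=200):
    """Grid search for the feasible (n, f) minimizing t = C / (n * f)."""
    (n_lo, n_hi), (f_lo, f_hi) = n_bounds, f_bounds
    best_t, best_nf = float("inf"), None
    for i in range(steps + 1):
        n = n_lo + (n_hi - n_lo) * i / steps
        for j in range(steps + 1):
            f = f_lo + (f_hi - f_lo) * j / steps
            if all(g(n, f) <= 0 for g in g_constraints):
                t = C / (n * f)
                if t < best_t:
                    best_t, best_nf = t, (n, f)
    return best_t, best_nf
```

A real solver (interior point, ALGA, or a linearized program, as compared in the paper) would replace this brute-force search; the grid merely makes the feasible-region reasoning concrete.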
Proceedings of the International Symposium on Optimum Structural Design,
1981-01-01
of Linear Programs) methods. It should also be noted that posynomial approximations can be constructed. When both the objective function… same range establish the value of t. The maximum absolute value projected for the stress resultant from any of the external loadings from Eq. (13) is used to determine the side constraint… The former value ensures little risk of rejecting a primary constraint; the latter treats primary constraints with more risk of rejecting a significant one.
Appearance-based face recognition and light-fields.
Gross, Ralph; Matthews, Iain; Baker, Simon
2004-04-01
Arguably the most important decision to be made when developing an object recognition algorithm is selecting the scene measurements or features on which to base the algorithm. In appearance-based object recognition, the features are chosen to be the pixel intensity values in an image of the object. These pixel intensities correspond directly to the radiance of light emitted from the object along certain rays in space. The set of all such radiance values over all possible rays is known as the plenoptic function or light-field. In this paper, we develop a theory of appearance-based object recognition from light-fields. This theory leads directly to an algorithm for face recognition across pose that uses as many images of the face as are available, from one upwards. All of the pixels, whichever image they come from, are treated equally and used to estimate the (eigen) light-field of the object. The eigen light-field is then used as the set of features on which to base recognition, analogously to how the pixel intensities are used in appearance-based face and object recognition.
Identifying multiple influential spreaders based on generalized closeness centrality
NASA Astrophysics Data System (ADS)
Liu, Huan-Li; Ma, Chuang; Xiang, Bing-Bing; Tang, Ming; Zhang, Hai-Feng
2018-02-01
To maximize the spreading influence of multiple spreaders in complex networks, one important fact cannot be ignored: the multiple spreaders should be dispersively distributed in networks, which can effectively reduce the redundancy of information spreading. For this purpose, we define a generalized closeness centrality (GCC) index by generalizing the closeness centrality index to a set of nodes. The problem then converts to how to identify multiple spreaders such that an objective function attains its minimal value. By comparing with the K-means clustering algorithm, we find that this optimization problem is very similar to the problem of minimizing the objective function in the K-means method; therefore, finding the multiple nodes with the highest GCC value can be approximately solved by the K-means method. Two typical transmission dynamics, the epidemic spreading process and the rumor spreading process, are implemented on real networks to verify the good performance of our proposed method.
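The K-means analogy drawn above can be illustrated with a K-medoids-style heuristic: centers are restricted to network nodes, and the objective is the total shortest-path distance from every node to its nearest spreader. This is a sketch assuming a precomputed all-pairs distance matrix, not the authors' exact algorithm:

```python
import random

# Illustrative K-medoids-style heuristic: choose k spreader nodes minimizing
# the total shortest-path distance from every node to its nearest spreader.
# `dist` is an assumed precomputed all-pairs distance matrix (list of lists).

def choose_spreaders(dist, k, iters=50, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    centers = rng.sample(range(n), k)
    for _ in range(iters):
        # Assign every node to its nearest current center.
        clusters = {c: [] for c in centers}
        for v in range(n):
            clusters[min(centers, key=lambda c: dist[v][c])].append(v)
        # Re-pick each center as the member minimizing within-cluster distance.
        new_centers = [min(members or [c],
                           key=lambda m: sum(dist[v][m] for v in members))
                       for c, members in clusters.items()]
        if set(new_centers) == set(centers):
            break
        centers = new_centers
    return sorted(centers)
```

On a network with two well-separated communities and k = 2, the iteration settles on one spreader per community, which is exactly the dispersive placement the abstract argues for.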
Methods, systems and devices for detecting and locating ferromagnetic objects
Roybal, Lyle Gene [Idaho Falls, ID; Kotter, Dale Kent [Shelley, ID; Rohrbaugh, David Thomas [Idaho Falls, ID; Spencer, David Frazer [Idaho Falls, ID
2010-01-26
Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
On global optimization using an estimate of Lipschitz constant and simplicial partition
NASA Astrophysics Data System (ADS)
Gimbutas, Albertas; Žilinskas, Antanas
2016-10-01
A new algorithm is proposed for finding the global minimum of a multivariate black-box Lipschitz function with an unknown Lipschitz constant. The feasible region is initially partitioned into simplices; at each subsequent iteration, the most suitable simplices are selected and bisected through the middle point of the longest edge. The suitability of a simplex for bisection is evaluated by minimizing a surrogate function which mimics the lower bound of the considered objective function over that simplex. The surrogate function is defined using an estimate of the Lipschitz constant and the objective function values at the vertices of the simplex. The novelty of the algorithm lies in the sophisticated method of estimating the Lipschitz constant and in the appropriate method for minimizing the surrogate function. The proposed algorithm was tested on 600 random test problems of different complexity, showing results competitive with two popular advanced algorithms based on similar assumptions.
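The surrogate idea is easiest to see in one dimension, where the simplex is an interval and the Lipschitz lower bound is the upper envelope of two cones anchored at the endpoint values (the classical Piyavskii/Shubert bound). The sketch below is this one-dimensional analogue, not the authors' simplicial algorithm:

```python
# One-dimensional analogue of the simplicial lower bound: on [a, b], a
# Lipschitz function with constant L satisfies
#     f(x) >= max(fa - L * (x - a), fb - L * (b - x)),
# and the surrogate's minimum sits where the two cones cross.

def piyavskii_bound(a, b, fa, fb, L):
    """Return (x_star, lower): the minimizer and minimum of the Lipschitz
    lower-bound surrogate on [a, b], given endpoint values fa, fb."""
    x_star = (fa - fb + L * (a + b)) / (2 * L)
    lower = (fa + fb) / 2 - L * (b - a) / 2
    return x_star, lower
```

Selecting the simplex (here, the interval) with the smallest surrogate minimum, with L replaced by a running estimate, is the per-iteration decision the abstract describes.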
Micro-Macro Duality and Space-Time Emergence
NASA Astrophysics Data System (ADS)
Ojima, Izumi
2011-03-01
The microscopic origin of space-time geometry is explained on the basis of an emergence process associated with the condensation of an infinite number of microscopic quanta responsible for symmetry breakdown, which implements the basic essence of "Quantum-Classical Correspondence" and of the forcing method in physical and mathematical contexts, respectively. From this viewpoint, the space-time dependence of physical quantities arises from the "logical extension" [8] that changes "constant objects" into "variable objects" by tagging the order parameters associated with the condensation onto "constant objects"; the logical direction here, from a value y to a domain variable x (which materializes the basic mechanism behind the Gel'fand isomorphism), is just opposite to that in the usual definition of a function ƒ : x ⟼ ƒ(x), from its domain variable x to a value y = ƒ(x).
ERIC Educational Resources Information Center
Verhoeven, Clara L.; Schepers, Vera P.; Post, Marcel W.; van Heugten, Caroline M.
2011-01-01
The objective of this study was to investigate the value of screening for cognitive functions at the start of an inpatient rehabilitation programme to predict the health status 1 and 3 years poststroke. In this longitudinal cohort study of stroke patients in inpatient rehabilitation data of 134 participants were analysed. Cognitive and clinical…
Kwak, Seung-Jun; Yoo, Seung-Hoon; Shin, Chol-Oh
2002-02-01
Evaluating environmental impacts has become an increasingly vital part of environmental management. In the present study, a methodological procedure based on multiattribute utility theory (MAUT) has been applied to obtain a decision-maker's value index on assessment of the environmental impacts. The paper begins with an overview of MAUT. Next, we elicited strategic objectives and several important attributes, and then structured them into a hierarchy, with the aim of structuring and quantifying the basic values for the assessment. An environmental multiattribute index is constructed as a multiattribute utility function, based on value judgements provided by a decision-maker at the Korean Ministry of Environment (MOE). The implications of the results are useful for many aspects of MOE's environmental policies; identifying the strategic objectives and basic values; facilitating communication about the organization's priorities; and recognizing decision opportunities that face decision-makers of Korea.
Functional ankle control of rock climbers
Schweizer, A; Bircher, H; Kaelin, X; Ochsner, P
2005-01-01
Objective: To evaluate whether rock climbing type exercise would be of value in rehabilitating ankle injuries to improve ankle stability and coordination. Results: The rock climbers showed significantly better results in the stabilometry and greater absolute and relative maximum strength of flexion in the ankle. The soccer players showed greater absolute but not relative strength in extension. Conclusion: Rock climbing, because of its slow and controlled near static movements, may be of value in the treatment of functional ankle instability. However, it has still to be confirmed whether it is superior to the usual rehabilitation exercises such as use of the wobble board. PMID:15976164
Calus, Mario PL; Bijma, Piter; Veerkamp, Roel F
2004-01-01
Covariance functions have been proposed to predict breeding values and genetic (co)variances as a function of phenotypic within-herd-year averages (environmental parameters), to include genotype by environment interaction. The objective of this paper was to investigate the influence of the definition of environmental parameters and of non-random use of sires on expected breeding values and estimated genetic variances across environments. Breeding values were simulated as a linear function of simulated herd effects. The definition of environmental parameters hardly influenced the results. In situations with random use of sires, estimated genetic correlations between the trait expressed in different environments were 0.93, 0.93 and 0.97, while the simulated value was 0.89, and estimated genetic variances deviated by up to 30% from the simulated values. Non-random use of sires, poor genetic connectedness and small herd size had a large impact on the estimated covariance functions, expected breeding values and calculated environmental parameters. Estimated genetic correlations between a trait expressed in different environments were biased upwards, and breeding values were more biased as genetic connectedness became poorer and herd composition more diverse. The best possible solution at this stage is to use environmental parameters combining large numbers of animals per herd, while losing some information on genotype by environment interaction in the data. PMID:15339629
Proposed method to construct Boolean functions with maximum possible annihilator immunity
NASA Astrophysics Data System (ADS)
Goyal, Rajni; Panigrahi, Anupama; Bansal, Rohit
2017-07-01
Nonlinearity and algebraic (annihilator) immunity are two core properties of a Boolean function, because optimum values of annihilator immunity and nonlinearity are required to resist fast algebraic attacks and differential cryptanalysis, respectively. For a secure cipher system, Boolean functions (S-boxes) should resist the maximum number of attacks, which is possible if a Boolean function has an optimal trade-off among its properties. Before constructing Boolean functions, we fixed the criteria of our constructions based on these properties; in the present work, our construction is based on annihilator immunity and nonlinearity. Keeping the above facts in mind, we developed a multi-objective evolutionary approach based on NSGA-II and obtained the optimum value of annihilator immunity with a good bound on nonlinearity. By the proposed method we constructed balanced Boolean functions having the best trade-off among balancedness, annihilator immunity and nonlinearity for 5, 6 and 7 variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC-based stochastic method with an iterative Gauss-Newton-based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton-based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values, and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC-based inversion method provides extensive global information on the unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC-based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using those means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
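A minimal random-walk Metropolis sampler conveys the MCMC machinery referred to above; the one-dimensional target density, step size, and sample count below are hypothetical stand-ins for the multi-parameter Cole-Cole posterior used in the paper:

```python
import math
import random

# Minimal random-walk Metropolis sketch: sample from the density proportional
# to exp(log_post), starting at x0. Target, step, and seed are hypothetical.

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Return a list of n_samples draws from a Gaussian random-walk
    Metropolis chain targeting exp(log_post)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples
```

Marginal histograms and credible intervals come directly from the returned chain, which is the kind of "extensive global information" the abstract contrasts with a single Gauss-Newton point estimate.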
Behavior learning in differential games and reorientation maneuvers
NASA Astrophysics Data System (ADS)
Satak, Neha
The purpose of this dissertation is to apply behavior learning concepts to incomplete-information continuous-time games. Realistic game scenarios are often incomplete-information games in which the players withhold information. A player may not know its opponent's objectives and strategies prior to the start of the game, and this lack of information can limit the player's ability to play optimally. If the player can observe the opponent's actions, it can better optimize its achievements by taking corrective actions. In this research, a framework to learn an opponent's behavior and take corrective actions is developed. The framework allows a player to observe the opponent's actions and formulate behavior models. The developed behavior model can then be utilized to find the best actions for the player, those that optimize the player's objective function. In addition, the framework proposes that the player play a safe strategy at the beginning of the game. A safe strategy is defined in this research as a strategy that guarantees a minimum pay-off to the player independent of the other player's actions. During the initial part of the game, the player plays the safe strategy until it learns the opponent's behavior. Two methods to develop behavior models, differing in the formulation of the behavior model, are proposed. The first is the Cost-Strategy Recognition (CSR) method, in which the player formulates an objective function and a strategy for the opponent. The opponent is presumed to be rational and therefore will play to optimize its objective function. The strategy of the opponent depends on the information available to the opponent about the other players in the game, and a strategy formulation presumes a certain level of information available to the opponent. Previous observations of the opponent's actions are used to estimate the parameters of the formulated behavior model, and the estimated behavior model predicts the opponent's future actions.
The second method is the Direct Approximation of Value Function (DAVF) method. In this method, unlike the CSR method, the player formulates an objective function for the opponent but does not formulate a strategy directly; rather, the player assumes that the opponent is playing optimally. Thus, a value function satisfying the HJB equation corresponding to the opponent's cost function exists. The DAVF method finds an approximate solution for the value function based on previous observations of the opponent's control, and this approximate solution is then used to predict the opponent's future behavior. Game examples in which only a single player is learning its opponent's behavior are simulated, followed by examples in which both players in a two-player game are learning each other's behavior. In the second part of this research, a reorientation control maneuver for a spinning spacecraft is developed. This aids the application of behavior learning and differential game concepts to the specific scenario involving multiple spinning spacecraft. An impulsive reorientation maneuver with coasting is analytically designed to reorient the spin axis of the spacecraft using a single body-fixed thruster. Cooperative maneuvers of multiple spacecraft optimizing fuel and relative orientation are designed, and Pareto optimality concepts are used to arrive at mutually agreeable reorientation maneuvers for the cooperating spinning spacecraft.
NASA Technical Reports Server (NTRS)
Piccinotti, G.; Mushotzky, R. F.; Boldt, E. A.; Holt, S. S.; Marshall, F. E.; Serlemitsos, P. J.; Shafer, R. A.
1981-01-01
An experiment was performed in which a complete X-ray survey was made of the 8.2 steradians of the sky at galactic latitudes |b| > 20 deg, down to a limiting sensitivity of 3.1 × 10^-11 ergs/sq cm sec in the 2-10 keV band. Of the 85 detected sources, 17 were identified with galactic objects, 61 were identified with extragalactic objects, and 7 remain unidentified. The log N - log S relation for the non-galactic objects is well fit by the Euclidean relationship. The X-ray spectra of these objects were used to construct log N - log S in physical units. The complete sample of identified sources was used to construct X-ray luminosity functions, using the absolute maximum likelihood method, for clusters of galaxies and active galactic nuclei.
Scheduling on the basis of the research of dependences among the construction process parameters
NASA Astrophysics Data System (ADS)
Romanovich, Marina; Ermakov, Alexander; Mukhamedzhanova, Olga
2017-10-01
The dependences among the construction process parameters are investigated in the article: the average integrated qualification of the shift, the number of workers per shift, and the average daily amount of completed work, analysed on the basis of correlation coefficients. Basic data for the research of dependences among the above-stated parameters were collected during the construction of two standard objects, A and B (monolithic houses), over four months of construction (October, November, December, January). The Cobb-Douglas production function proved suitable, with correlation coefficients close to 1; the function is simple to use and well suited to describing the considered dependences. A development function describing the relationship among the considered parameters of the construction process is derived. The development function makes it possible to select the optimal quantitative and qualitative (qualification) structure of the brigade link for work during the next period of time, according to a preset amount of work. A function of the optimized amounts of work, reflecting the interrelation of the key parameters of the construction process, is also developed; its values should be used as the average standard for scheduling the storming periods of construction.
A method for measuring quality of life through subjective weighting of functional status.
Stineman, Margaret G; Wechsler, Barbara; Ross, Richard; Maislin, Greg
2003-04-01
To apply a new tool to understand the quality of life (QOL) implications of patients' functional status. Results from the Features-Resource Trade-Off Game were used to form utility weights by ranking functional activities by the relative value of achieving independence in each activity compared with all other component activities. The utility weights were combined with patients' actual levels of performance across the same activities to produce QOL-weighted functional status scores and to form "value rulers" that order activities by perceived importance. Participants were persons with severe disabilities living in the community and clinicians practicing in various rehabilitation disciplines: two panels of 5 consumers with disabilities and 2 panels of 5 rehabilitation clinicians. The 4 panels played the Features-Resource Trade-Off Game using the FIM(TM) instrument definitions. Outcome measures were utility weights for each of the 18 FIM items, QOL-weighted FIM scores, and value rulers. All 4 panels valued the achievement of independence in cognitive and communication activities more than independence in physical activities. Consequently, the unweighted FIM scores of patients who have severe physical disabilities but relatively intact cognitive skills will underestimate QOL, while inflating QOL in those with low levels of independence in cognition and communication but higher physical function. Independence in some activities is more valued than in others; thus, 2 people with the same numeric functional status score could experience very different QOL. QOL-weighted functional status scores translate objectively measured functional status into its subjective meaning. This new technology for measuring subjective function-related QOL has a variety of applications to clinical, educational, and research practices.
NASA Astrophysics Data System (ADS)
Metternicht, Graciela; Blanco, Paula; del Valle, Hector; Laterra, Pedro; Hardtke, Leonardo; Bouza, Pablo
2015-04-01
Wildlife is part of the Patagonian rangelands sheep farming environment, with the potential of providing extra revenue to livestock owners. As sheep farming became less profitable, farmers and ranchers could focus on sustainable wildlife harvesting. It has been argued that sustainable wildlife harvesting is ecologically one of the most rational forms of land use because of its potential to provide multiple products of high value while reducing pressure on ecosystems. The guanaco (Lama guanicoe) is the most conspicuous wild ungulate of Patagonia. Guanaco fibre, meat, pelts and hides are economically valuable and have the potential to be used within the present Patagonian context of production systems. Guanaco populations in South America, including Patagonia, have experienced a sustained decline. Causes for this decline are related to habitat alteration, competition for forage with sheep, and the lack of reasonable management plans to develop livelihoods for ranchers. In this study we propose an approach to explicitly determine optimal stocking rates based on trade-offs between guanaco density and livestock grazing intensity on rangelands. The focus of our research is on finding optimal sheep stocking rates at the paddock level, to ensure the highest production outputs while: a) meeting requirements of sustainable conservation of guanacos over their minimum viable population; b) maximizing soil carbon sequestration; and c) minimizing soil erosion. In this way, determination of the optimal stocking rate in rangelands becomes a multi-objective optimization problem that can be addressed using a Fuzzy Multi-Objective Linear Programming (MOLP) approach. Basically, this approach converts multi-objective problems into single-objective optimizations by introducing a set of objective weights. Objectives are represented using fuzzy set theory and fuzzy memberships, enabling each objective function to adopt a value between 0 and 1.
Each objective function indicates the satisfaction of the decision maker with the respective objective. Fuzzy logic is closer to the intuitive thinking used by decision makers, making it a user-friendly approach for selecting alternatives. The proposed approach was applied in a study area of approximately 40,000 hectares in semiarid Patagonian rangelands where extensive, continuous sheep grazing for wool production is the main land use. Multi- and hyper-spectral data were combined with ancillary data within a GIS environment and used to derive maps of forage production, guanaco density, soil organic carbon and soil erosion. Different scenarios, with different objective weights, were evaluated. Results showed that under scenario 1, where livestock production is predicted to have the highest values, guanaco numbers decrease substantially, as does soil carbon sequestration, while soil erosion exhibits the highest values. On the other hand, when the guanaco population is prioritized, livestock production has the lowest value. A compromise alternative resulted from a scenario in which the variables are assigned the same weight; under this condition, high livestock production is predicted, while conservation of the guanaco population is sustainable, carbon sequestration is maximized and soil erosion minimized.
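The scalarization step described above (linear fuzzy memberships combined with objective weights) can be sketched as follows. The objective names, ranges, and weights are hypothetical, and a real MOLP would optimize this score over decision variables such as the stocking rate, rather than evaluate it at a single point:

```python
# Hedged sketch of fuzzy multi-objective scalarization: each objective is
# mapped to a membership in [0, 1] (0 at its worst value, 1 at its best),
# then the memberships are combined with decision-maker weights.

def fuzzy_membership(value, worst, best):
    """Linear membership: 0 at `worst`, 1 at `best`. Works whether the
    objective is maximized (best > worst) or minimized (best < worst)."""
    if best == worst:
        return 1.0
    m = (value - worst) / (best - worst)
    return max(0.0, min(1.0, m))

def weighted_satisfaction(values, worsts, bests, weights):
    """Scalarized objective in [0, 1]: weighted mean of fuzzy memberships."""
    total_w = sum(weights)
    return sum(w * fuzzy_membership(v, lo, hi)
               for v, lo, hi, w in zip(values, worsts, bests, weights)) / total_w
```

For example, wool output 80 on a 0-100 scale (maximized) and erosion 2 on a 10-0 scale (minimized) both map to membership 0.8, so equal weights give an overall satisfaction of 0.8; shifting weight between objectives reproduces the scenario trade-offs the abstract reports.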
The value-added laboratory: an opportunity to merge research and service objectives.
McDonald, J M
1997-01-01
The changing health-care environment is creating new opportunities for laboratory medicine professionals that correspond with new health services research agendas. Proving cost-effectiveness and conducting outcomes assessment are becoming vital functions of laboratories in this era of managed care. Laboratorians must take advantage of the resulting opportunities to show how they add value and medical relevance to the health-care delivery system.
Robust Sensitivity Analysis of Courses of Action Using an Additive Value Model
2008-03-01
According to Clemen, sensitivity analysis answers, "What makes a difference in this decision?" (2001:175). Sensitivity analysis can also indicate… alternative to change. These models look for the new weighting that causes a specific alternative to rank above all others. Barron and Schmidt first… (Schmidt, 1988:123). A smaller objective function value indicates greater sensitivity. Wolters and Mareschal propose a similar approach using goal
Ahmadian, Mehdi; Dabidi Roshan, Valiollah; Ashourpore, Eadeh
2017-07-04
Taurine is an amino acid found in very high concentrations in the heart. It is assumed that taurine contributes to several physiological functions of mammalian cells, such as osmoregulation, anti-inflammation, membrane stabilization, ion transport modulation, and regulation of oxidative stress and mitochondrial protein synthesis. The objective of the current study was to evaluate the effectiveness of taurine supplementation on functional capacity, myocardial oxygen consumption, and electrical activity in patients with heart failure. In a double-blind, randomized study, 16 patients with heart failure were assigned to two groups: taurine (TG, n = 8) and placebo (PG, n = 8). TG received 500-mg taurine supplementation three times per day for two weeks. A significant decrease in the values of Q-T segments (p < 0.01) and a significant increase in the values of P-R segments (p < 0.01) were detected following exercise post-supplementation in TG but not in PG. Significantly higher values of taurine concentration, T wave, Q-T segment and physical capacities, and lower values of cardiovascular capacities, were detected post-supplementation in TG as compared with PG (all p values < 0.01). Taurine significantly enhanced physical function and significantly reduced the cardiovascular function parameters following exercise. Our results also suggest that short-term taurine supplementation is an effective strategy for improving selected hemodynamic parameters in heart failure patients. Together, these findings support the view that taurine improves cardiac function and functional capacity in patients with heart failure. This idea warrants further study.
Kessels, Roy P C; Rijken, Stefan; Joosten-Weyn Banningh, Liesbeth W A; Van Schuylenborgh-VAN Es, Nelleke; Olde Rikkert, Marcel G M
2010-01-01
Memory for object locations, as part of spatial memory function, has rarely been studied in patients with Alzheimer dementia (AD), while studies in patients with Mild Cognitive Impairment (MCI) are lacking altogether. The present study examined categorical spatial memory function using the Location Learning Test (LLT) in MCI patients (n = 30), AD patients (n = 30), and healthy controls (n = 40). Two scoring methods were compared, aimed at disentangling positional recall (location irrespective of object identity) and object-location binding. The results showed that AD patients performed worse than the MCI patients on the LLT, both on recall of positional information and on recall of the locations of different objects. In addition, both measures could validly discriminate between AD and MCI patients. These findings are in agreement with the notion that visual cued-recall tests may have better diagnostic value than traditional (verbal) free-recall tests in the assessment of patients with suspected MCI or AD.
Knopman, Debra S.; Voss, Clifford I.
1989-01-01
Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. 
The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.
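The noninferior (Pareto-optimal) set described above can be illustrated with a minimal sketch: candidate designs are scored on two of the stated objectives (information content, to be maximized, and cost, to be minimized), and dominated designs are filtered out. The designs and objective values below are invented for illustration, not taken from the study.

```python
def noninferior(designs):
    """Return the noninferior (Pareto-optimal) subset of candidate designs.

    Each design is a tuple (information, cost); information is maximized,
    cost is minimized.  A design is dominated if some other design offers
    at least as much information at no greater cost and differs in at
    least one objective.
    """
    front = []
    for a in designs:
        dominated = any(
            b[0] >= a[0] and b[1] <= a[1] and b != a
            for b in designs
        )
        if not dominated:
            front.append(a)
    return front

# Invented (information, cost) scores for five candidate sampling designs.
candidates = [(5.0, 10.0), (3.0, 4.0), (5.0, 8.0), (2.0, 9.0), (4.0, 4.0)]
print(noninferior(candidates))   # only the non-dominated designs survive
```

A full enumeration over all designs, as in the hypothetical one-dimensional example of the abstract, amounts to running this filter over every feasible combination of observation points.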
Phasic dopamine signals: from subjective reward value to formal economic utility
Schultz, Wolfram; Carelli, Regina M; Wightman, R Mark
2015-01-01
Although rewards are physical stimuli and objects, their value for survival and reproduction is subjective. The phasic, neurophysiological and voltammetric dopamine reward prediction error response signals subjective reward value. The signal incorporates crucial reward aspects such as amount, probability, type, risk, delay and effort. Differences of dopamine release dynamics with temporal delay and effort in rodents may derive from methodological issues and require further study. Recent designs using concepts and behavioral tools from experimental economics allow the subjective value signal to be formally characterized as economic utility and thus a neuronal value function to be established. With these properties, the dopamine response constitutes a utility prediction error signal. PMID:26719853
NASA Astrophysics Data System (ADS)
Aurora, Tarlok
2013-04-01
In introductory physics, students verify Archimedes' principle by immersing an object in water in a container with a side-spout to collect the displaced water, resulting in a large uncertainty due to surface tension. A modified procedure was introduced, in which a plastic bucket is suspended from a force sensor, and an object hangs underneath the bucket. The object is immersed in water in a glass beaker (without any side spout), and the weight loss is measured with a computer-controlled force sensor. Instead of collecting the water displaced by the object, tap water was added to the bucket to compensate for the weight loss, and Archimedes' principle was verified to within less than a percent. With this apparatus, buoyant force was easily studied as a function of the volume of displaced water, as well as a function of the density of a saline solution. By graphing buoyant force as a function of volume (or density of the liquid), the value of g was obtained from the slope. Apparatus and sources of error will be discussed.
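The slope-based extraction of g described above rests on the buoyancy relation F_b = ρgV: plotting buoyant force against displaced volume gives a line through the origin with slope ρg. A minimal sketch with idealized (invented) measurements:

```python
# Buoyant force on a fully submerged object: F_b = rho * g * V.
# A plot of F_b against displaced volume V is a line through the origin
# with slope rho * g, so g follows from the fitted slope.
# The data points below are idealized, noise-free illustrations.
rho = 1000.0        # density of water, kg/m^3
g_true = 9.81       # m/s^2

volumes = [1e-5 * k for k in range(1, 6)]        # displaced volumes, m^3
forces = [rho * g_true * v for v in volumes]     # buoyant forces, N

# Least-squares slope for a line through the origin:
# slope = sum(F*V) / sum(V*V)
slope = sum(f * v for f, v in zip(forces, volumes)) / sum(v * v for v in volumes)
g_measured = slope / rho
print(round(g_measured, 2))   # → 9.81
```

With real force-sensor data the fitted slope would carry the measurement noise, and the residuals give a direct estimate of the experimental uncertainty.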
It's all connected: Pathways in visual object recognition and early noun learning.
Smith, Linda B
2013-11-01
A developmental pathway may be defined as the route, or chain of events, through which a new structure or function forms. For many human behaviors, including object name learning and visual object recognition, these pathways are often complex and multicausal and include unexpected dependencies. This article presents three principles of development that suggest the value of a developmental psychology that explicitly seeks to trace these pathways and uses empirical evidence on developmental dependencies among motor development, action on objects, visual object recognition, and object name learning in 12- to 24-month-old infants to make the case. The article concludes with a consideration of the theoretical implications of this approach. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Fighting for life: Religion and science in the work of fish and wildlife biologists
NASA Astrophysics Data System (ADS)
Geffen, Joel Phillip
Philosophers, historians, and sociologists of science have argued that it is impossible to separate fact from value. Even so, Americans generally demand that scientists be "objective." No bias is permitted in their work. Religious motivations in particular are widely considered anathema within the halls of science. My dissertation addresses both theoretical and practical aspects concerning objectivity in science through an examination of fish and wildlife biologists. I hypothesized that they use the language of objective science as a tool to convince others to protect habitats and species. Further, I claimed that this "rhetoric of science" is employed either consciously or unconsciously on behalf of personal values, and that religious and/or spiritual values figure significantly among these. Regarding the issue's practical applications, I argued in support of Helen Longino's assertion that while subjective influences exist in science, they do not necessarily indicate that objectivity has been sacrificed. My primary methodology is ethnographic. Thirty-five biologists working in the Pacific Northwest were interviewed during the course of summer 2001. Participant ages ranged from 23 to 78. Both genders were represented, as were various ethnic and cultural backgrounds, including Native American. I used a questionnaire to guide respondents through a consistent set of open-ended queries. I organized their answers under four categories: the true, the good, the beautiful, and the holy. The first three were borrowed from the theoretical writings of philosopher Immanuel Kant. The last came from Rudolf Otto's theological work. These categories provided an excellent analytical framework. I found that the great majority of fish and wildlife biologists strive for objectivity. However, they are also informed by powerful contextual values.
These are derived from environmental ethics, aesthetic preferences pertaining to ecosystem appearance and function, and visceral experiences of connection with nature, and were blended into their practice of science to varying degrees. My hypothesis was affirmed. Science is not value-free, nor can it be. Yet contextual values do not necessarily undermine scientific objectivity.
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is thereby validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
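A weight-factor comprehensive objective of the kind described above can be sketched as a simple weighted sum of the two competing objectives. The function name, design plans, and numbers below are invented for illustration; only the weighted-sum form follows the abstract.

```python
def comprehensive_objective(hydraulic_loss, cavitation_coeff, w_loss, w_cav):
    """Weighted-sum scalarization of two competing runner-design
    objectives, both to be minimized.  Adjusting the weight factors
    shifts which design plan the optimizer prefers."""
    return w_loss * hydraulic_loss + w_cav * cavitation_coeff

# Two hypothetical design plans: A has lower hydraulic loss,
# B has the better cavitation coefficient.
plan_a = {"loss": 0.12, "cav": 0.60}
plan_b = {"loss": 0.15, "cav": 0.45}

# Weighting loss heavily favors plan A ...
print(comprehensive_objective(plan_a["loss"], plan_a["cav"], 0.9, 0.1) <
      comprehensive_objective(plan_b["loss"], plan_b["cav"], 0.9, 0.1))   # True
# ... while equal weights favor plan B.
print(comprehensive_objective(plan_b["loss"], plan_b["cav"], 0.5, 0.5) <
      comprehensive_objective(plan_a["loss"], plan_a["cav"], 0.5, 0.5))   # True
```

This is exactly the mechanism the abstract mentions: the designer controls the trade-off between hydraulic loss and cavitation performance by tuning the weight factors.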
7 CFR 1467.13 - Modifications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... must meet WRP regulations and program objectives, comply with the definition of wetland restoration as... AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS WETLANDS RESERVE PROGRAM § 1467.13 Modifications. (a... the program so long as the modification will not adversely affect the wetland functions and values for...
The conceptual foundation of environmental decision support.
Reichert, Peter; Langhans, Simone D; Lienert, Judit; Schuwirth, Nele
2015-05-01
Environmental decision support intends to use the best available scientific knowledge to help decision makers find and evaluate management alternatives. The goal of this process is to achieve the best fulfillment of societal objectives. This requires a careful analysis of (i) how scientific knowledge can be represented and quantified, (ii) how societal preferences can be described and elicited, and (iii) how these concepts can best be used to support communication with authorities, politicians, and the public in environmental management. The goal of this paper is to discuss key requirements for a conceptual framework to address these issues and to suggest how these can best be met. We argue that a combination of probability theory and scenario planning with multi-attribute utility theory fulfills these requirements, and discuss adaptations and extensions of these theories to improve their application for supporting environmental decision making. With respect to (i) we suggest the use of intersubjective probabilities, if required extended to imprecise probabilities, to describe the current state of scientific knowledge. To address (ii), we emphasize the importance of value functions, in addition to utilities, to support decisions under risk. We discuss the need for testing "non-standard" value aggregation techniques, the usefulness of flexibility of value functions regarding attribute data availability, the elicitation of value functions for sub-objectives from experts, and the consideration of uncertainty in value and utility elicitation. With respect to (iii), we outline a well-structured procedure for transparent environmental decision support that is based on a clear separation of scientific prediction and societal valuation. We illustrate aspects of the suggested methodology by its application to river management in general and with a small, didactical case study on spatial river rehabilitation prioritization. Copyright © 2015 The Authors. 
Published by Elsevier Ltd. All rights reserved.
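The value functions discussed above are typically aggregated with the standard additive form of multi-attribute value theory, v(a) = Σᵢ wᵢ·vᵢ(xᵢ), with weights summing to one and each marginal value function mapping its attribute onto [0, 1]. A minimal sketch (the attributes, marginal functions, and weights are invented, not taken from the paper):

```python
def additive_value(attribute_levels, marginal_value_fns, weights):
    """Additive aggregation from multi-attribute value theory:
    v(a) = sum_i w_i * v_i(x_i), with the weights summing to one and
    each marginal value function v_i mapping its attribute to [0, 1]."""
    return sum(
        w * fn(x)
        for x, fn, w in zip(attribute_levels, marginal_value_fns, weights)
    )

# Two invented sub-objectives for a river state:
# a 0-100 water-quality index and restored habitat area in hectares.
quality_value = lambda x: min(max(x / 100.0, 0.0), 1.0)
habitat_value = lambda x: min(max(x / 50.0, 0.0), 1.0)

v = additive_value([80.0, 25.0], [quality_value, habitat_value], [0.6, 0.4])
print(round(v, 2))   # 0.6*0.8 + 0.4*0.5 → 0.68
```

The "non-standard" aggregation techniques the paper calls for testing would replace the plain sum here, e.g. with minimum-type or multiplicative aggregation, while keeping the same marginal value functions.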
Yao, Rui; Templeton, Alistair K; Liao, Yixiang; Turian, Julius V; Kiel, Krystyna D; Chu, James C H
2014-01-01
To validate an in-house optimization program that uses adaptive simulated annealing (ASA) and gradient descent (GD) algorithms and investigate features of physical dose and generalized equivalent uniform dose (gEUD)-based objective functions in high-dose-rate (HDR) brachytherapy for cervical cancer. Eight Syed/Neblett template-based cervical cancer HDR interstitial brachytherapy cases were used for this study. Brachytherapy treatment plans were first generated using inverse planning simulated annealing (IPSA). Using the same dwell positions designated in IPSA, plans were then optimized with both physical dose and gEUD-based objective functions, using both ASA and GD algorithms. Comparisons were made between plans both qualitatively and based on dose-volume parameters, evaluating each optimization method and objective function. A hybrid objective function was also designed and implemented in the in-house program. The ASA plans are higher on bladder V75% and D2cc (p=0.034) and lower on rectum V75% and D2cc (p=0.034) than the IPSA plans. The ASA and GD plans are not significantly different. The gEUD-based plans have higher homogeneity index (p=0.034), lower overdose index (p=0.005), and lower rectum gEUD and normal tissue complication probability (p=0.005) than the physical dose-based plans. The hybrid function can produce a plan with dosimetric parameters between the physical dose-based and gEUD-based plans. The optimized plans with the same objective value and dose-volume histogram could have different dose distributions. Our optimization program based on ASA and GD algorithms is flexible on objective functions, optimization parameters, and can generate optimized plans comparable with IPSA. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
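The gEUD objective used above follows the standard generalized equivalent uniform dose definition, gEUD = ((1/N) Σᵢ dᵢᵃ)^(1/a), where the exponent a controls how the dose distribution is summarized. A short sketch (the voxel doses are invented):

```python
def geud(doses, a):
    """Generalized equivalent uniform dose over a list of voxel doses:
    gEUD = ( (1/N) * sum(d_i ** a) ) ** (1/a).
    a = 1 gives the mean dose; large positive a approaches the maximum
    dose (appropriate for serial organs at risk such as the rectum);
    large negative a approaches the minimum dose (targets)."""
    n = len(doses)
    return (sum(d ** a for d in doses) / n) ** (1.0 / a)

doses = [2.0, 4.0, 6.0]          # invented voxel doses, Gy
print(geud(doses, 1))            # mean dose → 4.0
print(geud(doses, 20))           # approaches the maximum dose of 6.0
```

This shape explains the reported behavior of the gEUD-based plans: penalizing a high-a gEUD for the rectum suppresses hot spots, improving the overdose index relative to purely physical dose objectives.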
Individual preferences modulate incentive values: Evidence from functional MRI
Koeneke, Susan; Pedroni, Andreas F; Dieckmann, Anja; Bosch, Volker; Jäncke, Lutz
2008-01-01
Background In most studies on human reward processing, reward intensity has been manipulated on an objective scale (e.g., varying monetary value). Everyday experience, however, teaches us that objectively equivalent rewards may differ substantially in their subjective incentive values. One factor influencing incentive value in humans is branding. The current study explores the hypothesis that individual brand preferences modulate activity in reward areas similarly to objectively measurable differences in reward intensity. Methods A wheel-of-fortune game comprising an anticipation phase and a subsequent outcome evaluation phase was implemented. Inside a 3 Tesla MRI scanner, 19 participants played for chocolate bars of three different brands that differed in subjective attractiveness. Results Parametrical analysis of the obtained fMRI data demonstrated that the level of activity in anatomically distinct neural networks was linearly associated with the subjective preference hierarchy of the brands played for. During the anticipation phases, preference-dependent neural activity was registered in premotor areas, insular cortex, orbitofrontal cortex, and in the midbrain. During the outcome phases, neural activity in the caudate nucleus, precuneus, lingual gyrus, cerebellum, and in the pallidum was influenced by individual preference. Conclusion Our results suggest a graded effect of differently preferred brands on the incentive value of objectively equivalent rewards. Regarding the anticipation phase, the results reflect an intensified state of wanting that facilitates action preparation when the participants play for their favorite brand. This mechanism may underlie approach behavior in real-life choice situations. PMID:19032746
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
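The conventional SA loop described above can be sketched in a few lines. This is a didactic one-dimensional sketch of plain simulated annealing, not the RBSA implementation (which additionally branches into multiple trajectories with per-parameter trust regions); the objective, step size, and schedule are invented.

```python
import math
import random

def simulated_annealing(objective, x0, step, t0=1.0, cooling=0.995,
                        iters=4000, seed=0):
    """Conventional simulated annealing: propose a random neighbor,
    always accept improvements, accept worse configurations with
    probability exp(-delta / T), and lower the temperature T each
    iteration (geometric cooling schedule)."""
    rng = random.Random(seed)
    x, fx, t = x0, objective(x0), t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random neighboring configuration
        fc = objective(cand)
        # Accept if better, or with Boltzmann probability if worse.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# A one-dimensional objective with a known minimum at x = 3.
x_best, f_best = simulated_annealing(lambda x: (x - 3.0) ** 2,
                                     x0=-10.0, step=0.5)
print(round(x_best, 2))
```

RBSA replaces the single trajectory here with many branched trajectories, which is what yields the reported order-of-magnitude speedup in converging on the global optimum.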
Van Rheenen, Tamsyn E; Rossell, Susan L
2014-06-01
People with bipolar disorder (BD) experience significant psychosocial impairment. Understanding of the nature and causes of such impairment is limited by the lack of research exploring the extent to which subjectively reported functioning should be valued as an indicator of objective dysfunction, or examining the relative influence of neurocognition, social cognition and emotion regulation on these important, but different aspects of psychosocial functioning in the context of mania and depression symptoms. This study aimed to address this paucity of research by conducting a comprehensive investigation of psychosocial functioning in a well characterised group of BD patients. Fifty-one BD patients were compared to 52 healthy controls on objectively and subjectively assessed psychosocial outcomes. Relationships between current mood symptoms, psychosocial function and neurocognitive, social cognitive and emotion regulation measures were also examined in the patient group. Patients had significantly worse scores on the global objective and subjective functioning measures relative to controls. In the patient group, although these scores were correlated, regression analyses showed that variance in each of the measures was explained by different predictors. Depressive symptomatology was the most important predictor of global subjective functioning, and neurocognition had a concurrent and important influence with depressive symptoms on objective psychosocial function. Emotion regulation also had an indirect effect on psychosocial functioning via its influence on depressive symptomatology. As this study was cross-sectional in nature, we are unable to draw precise conclusions regarding contributing pathways involved in psychosocial functioning in BD. These results suggest that patients' own evaluations of their subjective functioning represent important indicators of the extent to which their observable function is impaired.
They also highlight the importance of incorporating cognitive and emotion regulation assessments into clinical practice when working to reduce psychosocial dysfunction with patients diagnosed with BD. Copyright © 2014 Elsevier B.V. All rights reserved.
Use of thermal neutron reflection method for chemical analysis of bulk samples
NASA Astrophysics Data System (ADS)
Papp, A.; Csikai, J.
2014-09-01
Microscopic, σβ, and macroscopic, Σβ, reflection cross-sections of thermal neutrons averaged over bulk samples as a function of thickness (z) are given. The σβ values are additive even for bulk samples in the z=0.5-8 cm interval and so the σβmol(z) function could be given for hydrogenous substances, including some illicit drugs, explosives and hiding materials of ~1000 cm3 dimensions. The calculated excess counts agree with the measured R(z) values. For the identification of concealed objects and chemical analysis of bulky samples, different neutron methods need to be used simultaneously.
Piecewise convexity of artificial neural networks.
Rister, Blaine; Rubin, Daniel L
2017-10-01
Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chiang, Tzu-An; Che, Z H; Cui, Zhihua
2014-01-01
This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V(Max) method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulative cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did.
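The inertia weight method mentioned above uses the generic PSO velocity update v = w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x). The sketch below shows that update on a toy objective; it is a generic illustration, not the paper's PSOA_IWM implementation, and all parameter values are invented.

```python
import random

def pso_inertia(objective, dim, n_particles=20, iters=200,
                w=0.7, c1=1.5, c2=1.5, bound=10.0, seed=0):
    """Particle swarm optimization with the inertia weight method:
    each particle's velocity blends its previous velocity (scaled by
    the inertia weight w) with attraction toward its personal best
    and the swarm's global best."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-bound, bound) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            fx = objective(x)
            if fx < pbest_f[i]:            # update personal best
                pbest[i], pbest_f[i] = list(x), fx
                if fx < gbest_f:           # update global best
                    gbest, gbest_f = list(x), fx
    return gbest, gbest_f

# Minimize the sphere function; the optimum is the origin.
best, best_f = pso_inertia(lambda x: sum(v * v for v in x), dim=2)
print(best_f < 1e-3)
```

The V_Max and constriction factor variants compared in the study modify only this velocity update (clamping each velocity component, or multiplying the whole update by a constriction coefficient), which is why their convergence speeds and objective function values can be compared on identical problem instances.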
Religious Study of Katoba Tradition and Its Function in Character Building of Muna Society
NASA Astrophysics Data System (ADS)
Hardin, Hardin; Hermina, Sitti
2018-05-01
This research aims at studying the function of the katoba tradition in the character-building of the Muna ethnic community. The research was conducted in Kecamatan Lawa, West Muna District, as a descriptive-qualitative study. The data collection techniques used are observation, interview, recording, and documentation. The object of this study is the speech or advice in the katoba tradition related to character education. The results show that the ethnic Muna community has the katoba tradition for shaping the character of society. The character values in question are respect and honour, moral character, and ethical values formed in the person of the child who has undergone katoba. It is hoped that this article will inspire and guide people to have a good personality and to behave wisely toward their family and social environment.
Minimal-scan filtered backpropagation algorithms for diffraction tomography.
Pan, X; Anastasio, M A
1999-12-01
The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.
Availability of information on renal function in Dutch community pharmacies.
Koster, Ellen S; Philbert, Daphne; Noordam, Michelle; Winters, Nina A; Blom, Lyda; Bouvy, Marcel L
2016-08-01
Background Early detection and monitoring of impaired renal function may prevent drug related problems. Objective To assess the availability of information on patient's renal function in Dutch community pharmacies, for patients using medication that might need monitoring in case of renal impairment. Methods Per pharmacy, 25 patients aged ≥65 years using at least one drug that requires monitoring, were randomly selected from the pharmacy information system. For these patients, information on renal function [estimated glomerular filtration rate (eGFR)], was obtained from the pharmacy information system. When absent, this information was obtained from the general practitioner (GP). Results Data were collected for 1632 patients. For 1201 patients (74 %) eGFR values were not directly available in the pharmacy, for another 194 patients (12 %) the eGFR value was not up-to-date. For 1082 patients information could be obtained from the GP, resulting in 942 additional recent eGFR values. Finally, recent information on renal function was available for 72 % (n = 1179) of selected patients. Conclusion In patients using drugs that require renal monitoring, information on renal function is often unknown in the pharmacy. For the majority of patients this information can be retrieved from the GP.
Wang, S; Huang, G H
2013-03-15
Flood disasters have been extremely severe in recent decades, and they account for about one third of all natural catastrophes throughout the world. In this study, a two-stage mixed-integer fuzzy programming with interval-valued membership functions (TMFP-IMF) approach is developed for flood-diversion planning under uncertainty. TMFP-IMF integrates the fuzzy flexible programming, two-stage stochastic programming, and integer programming within a general framework. A concept of interval-valued fuzzy membership function is introduced to address complexities of system uncertainties. TMFP-IMF can not only deal with uncertainties expressed as fuzzy sets and probability distributions, but also incorporate pre-regulated water-diversion policies directly into its optimization process. TMFP-IMF is applied to a hypothetical case study of flood-diversion planning for demonstrating its applicability. Results indicate that reasonable solutions can be generated for binary and continuous variables. A variety of flood-diversion and capacity-expansion schemes can be obtained under four scenarios, which enable decision makers (DMs) to identify the most desired one based on their perceptions and attitudes towards the objective-function value and constraints. Copyright © 2013 Elsevier Ltd. All rights reserved.
1991-03-01
factor which made TTL-design so powerful was the implicit knowledge that for any object in the TTL Databook, that object’s implementation and...functions as values. Thus, its reasoning power matches the descriptive power of the higher order languages in the previous section. First, the definitions...developing parallel algorithms to better utilize the power of the explicitly parallel programming language constructs. Currently, the methodologies
Chao, Tian-Jy; Kim, Younghun
2015-02-03
Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.
Quantitative ptychographic reconstruction by applying a probe constraint
NASA Astrophysics Data System (ADS)
Reinhardt, J.; Schroer, C. G.
2018-04-01
The coherent scanning technique X-ray ptychography has become a routine tool for high-resolution imaging and nanoanalysis in various fields of research such as chemistry, biology, and materials science. Often the ptychographic reconstruction results are analysed in order to yield absolute quantitative values for the object transmission and the illuminating probe function. In this work, we address a common ambiguity encountered in scaling the object transmission and probe intensity by applying an additional constraint in the reconstruction algorithm. A ptychographic measurement of a model sample containing nanoparticles is used as a test data set against which to benchmark the reconstruction results depending on the type of constraint used. Achieving quantitative absolute values for the reconstructed object transmission is essential for advanced investigation of samples that change over time, e.g., during in-situ experiments, or in general when different data sets are compared.
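The scaling ambiguity mentioned above (multiplying the object by a constant c and dividing the probe by c leaves the exit wave unchanged) suggests a constraint of roughly this shape; the target-intensity constraint below is a hypothetical sketch, not the authors' exact algorithm.

```python
import math

def apply_probe_constraint(obj, probe, target_intensity):
    """Fix the object/probe scaling ambiguity: rescale the probe so its total
    intensity matches a measured value, and compensate the object by the
    inverse factor so the product (the exit wave) is unchanged."""
    current = sum(abs(p) ** 2 for p in probe)
    c = math.sqrt(target_intensity / current)
    return [o / c for o in obj], [p * c for p in probe]

# toy 1-pixel object and 2-pixel probe; the target intensity is hypothetical
obj2, probe2 = apply_probe_constraint([2.0], [1.0, 1.0], 8.0)
```

Pinning the probe intensity to an independently measured incident flux is what makes the reconstructed object transmission absolute rather than defined only up to a constant.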
Gender, mass media and social change: a case study of TV commercials.
Gupta, A K; Jain, N
1998-01-01
Informing, entertaining, and persuading, mass media, especially television, is a powerful factor in the functioning of and change in any society. Mass media can be studied in its various roles as an agent of social change, a reflector of dominant values, and as a reinforcer of dominant values. Results from a 1997 spot survey of 150 television commercials presented on Doordarshan over a 4-week period support the role of the mass media in India as a reflector and reinforcer of dominant cultural values. By indirectly projecting the social norms on how women are expected to behave, television commercials have reproduced patriarchal values in India which are reinforced through the glamorization and naturalization of women's domestic roles, by glorifying the role of mother, by portraying women in public life in soft roles and subordinate jobs, and popularizing the image of women as sex objects and objects of beauty. Changes should be made in the way television in India portrays women to reflect their changing roles and positions in society.
NASA Astrophysics Data System (ADS)
Ye, Jing; Dang, Yaoguo; Li, Bingjun
2018-01-01
The Grey-Markov forecasting model is a combination of the grey prediction model and a Markov chain, which shows clear optimization effects for data sequences with non-stationary and volatile characteristics. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecasting values. To address this, this paper introduces the central-point triangular whitenization weight function into state division to calculate the possibility of the research values falling in each state, reflecting the preference degrees of the different states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
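The central-point triangular whitenization weight function can be sketched as follows; the state centers are hypothetical numbers, not the Henan grain data.

```python
def whitenization_weight(x, left, center, right):
    """Central-point triangular whitenization weight for one grey state:
    1 at the state's center, falling linearly to 0 at the neighboring centers."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

# hypothetical state centers for an indicator; x = 125 is the observed value
states = [(80, 100, 120), (100, 120, 140), (120, 140, 160)]
weights = [whitenization_weight(125, *s) for s in states]
# the state with the largest weight is the objective state assignment
best_state = max(range(len(states)), key=lambda k: weights[k])
```

Replacing a hard threshold with these graded weights is what removes the subjectivity from the state-division step: an observation near a boundary contributes to both adjacent states in proportion to its weights.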
StrateGene: object-oriented programming in molecular biology.
Carhart, R E; Cash, H D; Moore, J F
1988-03-01
This paper describes some of the ways that object-oriented programming methodologies have been used to represent and manipulate biological information in a working application. When running on a Xerox 1100 series computer, StrateGene functions as a genetic engineering workstation for the management of information about cloning experiments. It represents biological molecules, enzymes, fragments, and methods as classes, subclasses, and members in a hierarchy of objects. These objects may have various attributes, which themselves can be defined and classified. The attributes and their values can be passed from the classes of objects down to the subclasses and members. The user can modify the objects and their attributes while using them. New knowledge and changes to the system can be incorporated relatively easily. The operations on the biological objects are associated with the objects themselves. This makes it easier to invoke them correctly and allows generic operations to be customized for the particular object.
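The class-hierarchy pattern described (attributes passed from classes down to subclasses and members, operations attached to the objects themselves) can be sketched in modern Python; this is an illustration of the idea only, not the original Xerox Lisp implementation.

```python
class BioObject:
    """Base class: attributes defined here are inherited by all subclasses."""
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = dict(attributes)

    def describe(self):
        # generic operation attached to the object; subclasses may customize it
        return f"{type(self).__name__} {self.name}: {self.attributes}"

class Enzyme(BioObject):
    def __init__(self, name, recognition_site, **attrs):
        super().__init__(name, recognition_site=recognition_site, **attrs)

class Fragment(BioObject):
    def __init__(self, name, length_bp, **attrs):
        super().__init__(name, length_bp=length_bp, **attrs)

eco = Enzyme("EcoRI", "GAATTC")
frag = Fragment("insert-1", 1200)
```

Because `describe` lives on the objects, new biological classes added later inherit a correct default behavior and can override it, which is the extensibility benefit the abstract points to.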
Psek, Wayne; Davis, F Daniel; Gerrity, Gloria; Stametz, Rebecca; Bailey-Davis, Lisa; Henninger, Debra; Sellers, Dorothy; Darer, Jonathan
2016-01-01
Healthcare leaders need operational strategies that support organizational learning for continued improvement and value generation. The learning health system (LHS) model may provide leaders with such strategies; however, little is known about leaders' perspectives on the value and application of system-wide operationalization of the LHS model. The objective of this project was to solicit and analyze senior health system leaders' perspectives on the LHS and learning activities in an integrated delivery system. A series of interviews were conducted with 41 system leaders from a broad range of clinical and administrative areas across an integrated delivery system. Leaders' responses were categorized into themes. Ten major themes emerged from our conversations with leaders. While leaders generally expressed support for the concept of the LHS and enhanced system-wide learning, their concerns and suggestions for operationalization were strongly aligned with their functional area and strategic goals. Our findings suggest that leaders tend to adopt a very pragmatic approach to learning. Leaders expressed a dichotomy between the operational imperative to execute operational objectives efficiently and the need for rigorous evaluation. Alignment of learning activities with system-wide strategic and operational priorities is important to gain leadership support and resources. Practical approaches to addressing opportunities and challenges identified in the themes are discussed. Continuous learning is an ongoing, multi-disciplinary function of a health care delivery system. Findings from this and other research may be used to inform and prioritize system-wide learning objectives and strategies which support reliable, high value care delivery.
Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes
NASA Astrophysics Data System (ADS)
Sheer, D. P.
2008-12-01
For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user defined objective, including but not limited to economic objectives. For example, estimated marginal values for water for crops and M&I use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, constraints and objectives, in any time step is conditional: it changes based on the value of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short term multi-objective economic optimization for each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short term objective function is a surrogate for achieving long term multi-objective results. The long term performance for any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints and objectives used to determine the formulation of the short term optimization in each time step.
Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research is under way on a wrapper that will allow a genetic algorithm to improve the form of the rule (conditions, constraints, and short term objectives) as well. In the models, operating rules represent different models of human behavior, and the objective of the modeling is to find rules for human behavior that perform well in terms of long term human objectives. The conceptual model used to represent human behavior incorporates economic multi-objective optimization for surrogate objectives, and rules that set those objectives based on current conditions and accounting for uncertainty, at least implicitly. The author asserts that real world operating rules follow this form and have evolved because they have been perceived as successful in the past. Thus, the modeling efforts focus on human behavior in much the same way that economic models focus on human behavior. This paper illustrates the above concepts with real world examples.
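The conditional per-time-step formulation can be illustrated with a toy allocation in which brute-force enumeration stands in for the MILP solve; all coefficients and the fish-arrival condition are hypothetical.

```python
def allocate(inflow, storage, migratory_fish_present):
    """One time step: choose a release maximizing a conditional objective.
    Brute-force enumeration over small integer releases stands in for the
    MILP solve; the marginal values and the fish condition are hypothetical."""
    best, best_val = None, float("-inf")
    for release in range(0, min(inflow + storage, 10) + 1):
        crops = min(release, 6)        # water valued first for irrigation
        env = release - crops          # remainder supports environmental flow
        value = 3.0 * crops + 1.0 * env
        if migratory_fish_present and release < 4:
            value -= 100.0             # conditional constraint as a big penalty
        if value > best_val:
            best, best_val = release, value
    return best
```

The point mirrored from the abstract is that the objective itself is state-dependent: the same step is re-formulated when a forcing event (here, fish arrival) is active.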
Multiple-3D-object secure information system based on phase shifting method and single interference.
Li, Wei-Na; Shi, Chen-Xiao; Piao, Mei-Lan; Kim, Nam
2016-05-20
We propose a multiple-3D-object secure information system for encrypting multiple three-dimensional (3D) objects based on the three-step phase shifting method. During the decryption procedure, five phase functions (PFs) are reduced to three PFs in comparison with our previous method, which implies that one cross beam splitter is utilized to implement the single decryption interference. Moreover, the advantages of the proposed scheme also include: each 3D object can be decrypted individually, without decrypting a series of other objects first; the quality of the decrypted slice image of each object is high, with correlation coefficient values none of which is lower than 0.95; and no iterative algorithm is involved. The feasibility of the proposed scheme is demonstrated by computer simulation results.
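Three-step phase shifting itself can be sketched for the common case of three equal shifts of 2π/3 (the paper's exact shift values are not stated here, so the shifts below are an assumption):

```python
import math

def three_step_phase(i1, i2, i3):
    """Recover the phase from three intensity frames with shifts 0, 2π/3, 4π/3:
    I_k = A + B·cos(φ + δ_k)  ⇒  φ = atan2(√3·(I3 − I2), 2·I1 − I2 − I3)."""
    return math.atan2(math.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# round-trip check for an arbitrary phase value
phi = 1.1
frames = [5.0 + 2.0 * math.cos(phi + d)
          for d in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]
recovered = three_step_phase(*frames)
```

The closed-form arctangent is why no iterative algorithm is needed: three frames determine the background A, modulation B, and phase φ exactly.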
Optimal hemodynamic response model for functional near-infrared spectroscopy
Kamran, Muhammad A.; Jeong, Myung Yung; Mannan, Malik M. N.
2015-01-01
Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique and measures brain activities by means of near-infrared light of 650–950 nm wavelengths. The cortical hemodynamic response (HR) differs in attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four of them to model the shape and other two to scale and baseline respectively). The HRF model is supposed to be a linear combination of HRF, baseline, and physiological noises (amplitudes and frequencies of physiological noises are supposed to be unknown). An objective function is developed as a square of the residuals with constraints on 12 free parameters. The formulated problem is solved by using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment and their HRF for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by employing statistical analysis (i.e., t-value > t-critical and p-value < 0.05). PMID:26136668
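The two-Gamma cHRF can be sketched with illustrative shape parameters (the widely used SPM-style defaults; the paper treats these as unknowns to be estimated, so the values below are an assumption):

```python
import math

def gamma_pdf(t, shape, scale):
    """Gamma probability density, zero for t <= 0."""
    if t <= 0:
        return 0.0
    return (t ** (shape - 1) * math.exp(-t / scale)
            / (scale ** shape * math.gamma(shape)))

def canonical_hrf(t, a1=6.0, a2=16.0, b1=1.0, b2=1.0, c=1.0 / 6.0):
    """Difference of two Gamma densities: an early peak minus a late
    undershoot. Default parameters are illustrative, not fitted values."""
    return gamma_pdf(t, a1, b1) - c * gamma_pdf(t, a2, b2)

# sampled response: peak near t ≈ 5 s, undershoot around t ≈ 15 s
samples = [canonical_hrf(t) for t in range(0, 31)]
```

In the paper's formulation, these shape parameters plus scale, baseline, and the physiological-noise amplitudes and frequencies form the 12 free parameters of the least-squares objective.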
In Dialogue with the Decorative Arts
ERIC Educational Resources Information Center
Powell, Olivia
2017-01-01
How can museum educators create dialogical experiences with European decorative arts? This question frames my essay and stems from the challenges I have faced introducing objects whose original functions seem to overshadow their aesthetic and interpretive value. Repeated efforts to spark rich dialogue and collective interpretation around pieces of…
Institutional Management through Organization Development.
ERIC Educational Resources Information Center
Ferguson, Charles O.
This paper provides information on the role of organizational development in the institutional planning process at Florida Junior College (FJC), using short statements on the functions and objectives of each of the major components within the planning process. First, an overview is provided of organizational development and its value in…
Teaching about Ethics and the Environment.
ERIC Educational Resources Information Center
Brevard County School Board, Cocoa, FL.
This unit consists of activities designed to develop value systems related to the interactions of humans and their environment. The overall objectives are to teach students to evaluate their actions within an environmental context, make rational decisions in resolving environmental problems, and function in a democratic society by reaching…
Design of vibration isolation systems using multiobjective optimization techniques
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment and game theory methods. It is found that the game theory approach is superior in finding a better optimum solution with proper balance of the various objective functions.
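The global criterion method, one of the scalarizations compared above, can be sketched on a toy one-parameter design; both surrogate objective models here are hypothetical.

```python
def global_criterion(f1, f2, f1_star, f2_star):
    """Global criterion method: total squared relative deviation of each
    objective from its individually attainable optimum f_i*."""
    return ((f1 - f1_star) / f1_star) ** 2 + ((f2 - f2_star) / f2_star) ** 2

# toy trade-off: one stiffness-like parameter k drives transmitted force up
# and relative displacement down (both surrogate models are hypothetical)
candidates = [k / 10.0 for k in range(1, 21)]
force = {k: k for k in candidates}        # grows with k
disp = {k: 1.0 / k for k in candidates}   # shrinks with k
f1s, f2s = min(force.values()), min(disp.values())
best = min(candidates,
           key=lambda k: global_criterion(force[k], disp[k], f1s, f2s))
```

The compromise design lands between the two single-objective optima, which is the balanced behavior the abstract attributes to the better-performing scalarizations.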
Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos
2015-02-18
Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in 2 dimensions with and without the linear bias indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. 
This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve the maximal succinic acid production CO₂ must be taken into the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.
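The poling idea, a penalty that pushes each new solution away from those already found, can be sketched with enumeration standing in for the linear-programming solve; the two-flux toy network is hypothetical.

```python
def poling_search(candidates, objective, previous, weight=1.0):
    """Pick the candidate maximizing the objective minus a poling penalty
    that grows as the candidate approaches any previously found solution."""
    def penalized(v):
        pen = sum(1.0 / (1e-6 + sum((a - b) ** 2 for a, b in zip(v, p)))
                  for p in previous)
        return objective(v) - weight * pen
    return max(candidates, key=penalized)

# toy network: two internal fluxes (v1, v2) whose sum is fixed at the optimum,
# so many distinct distributions share the same maximal objective value
cands = [(v1, 10 - v1) for v1 in range(0, 11)]
obj = lambda v: v[0] + v[1]
found = []
for _ in range(3):
    found.append(poling_search(cands, obj, found))
```

Repeating the penalized solve yields a characteristic set: distinct flux distributions, all consistent with the same optimal overall flux, exactly the degeneracy the sampling is meant to expose.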
NASA Astrophysics Data System (ADS)
Riegels, Niels; Jessen, Oluf; Madsen, Henrik
2016-04-01
A multi-objective robust decision making approach is demonstrated that supports seasonal water management in the Chao Phraya River basin in Thailand. The approach uses multi-objective optimization to identify a Pareto-optimal set of management alternatives. Ensemble simulation is used to evaluate how each member of the Pareto set performs under a range of uncertain future conditions, and a robustness criterion is used to select a preferred alternative. Data mining tools are then used to identify ranges of uncertain factor values that lead to unacceptable performance for the preferred alternative. The approach is compared to a multi-criteria scenario analysis approach to estimate whether the introduction of additional complexity has the potential to improve decision making. Dry season irrigation in Thailand is managed through non-binding recommendations about the maximum extent of rice cultivation along with incentives for less water-intensive crops. Management authorities lack authority to prevent river withdrawals for irrigation when rice cultivation exceeds recommendations. In practice, this means that water must be provided to irrigate the actual planted area because of downstream municipal water supply requirements and water quality constraints. This results in dry season reservoir withdrawals that exceed planned withdrawals, reducing carryover storage to hedge against insufficient wet season runoff. The dry season planning problem in Thailand can therefore be framed in terms of decisions, objectives, constraints, and uncertainties. Decisions include recommendations about the maximum extent of rice cultivation and incentives for growing less water-intensive crops. Objectives are to maximize benefits to farmers, minimize the risk of inadequate carryover storage, and minimize incentives. Constraints include downstream municipal demands and water quality requirements. 
Uncertainties include the actual extent of rice cultivation, dry season precipitation, and precipitation in the following wet season. The multi-objective robust decision making approach is implemented as follows. First, three baseline simulation models are developed: a crop water demand model, a river basin simulation model, and a model of the impact of incentives on cropping patterns. The crop water demand model estimates irrigation water demands; the river basin simulation model estimates the reservoir drawdown required to meet demands given forecasts of precipitation, evaporation, and runoff; the model of incentive impacts estimates the cost of incentives as a function of marginal changes in rice yields. Optimization is used to find a set of non-dominated alternatives as a function of rice area and incentive decisions. An ensemble of uncertain model inputs is generated to represent uncertain hydrological and crop area forecasts. An ensemble of indicator values is then generated for each of the decision objectives: farmer benefits, end-of-wet-season reservoir storage, and the cost of incentives. A single alternative is selected from the Pareto set using a robustness criterion. Threshold values are defined for each of the objectives to identify ensemble members for which objective values are unacceptable, and the PRIM data mining algorithm is then used to identify input values associated with unacceptable model outcomes.
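The robustness-criterion step can be sketched as a maximin selection over the ensemble; the benefit model, units, and numbers below are all hypothetical.

```python
def robust_choice(alternatives, ensemble, performance):
    """Maximin robustness criterion: pick the alternative whose worst-case
    performance across the uncertainty ensemble is best."""
    return max(alternatives,
               key=lambda alt: min(performance(alt, m) for m in ensemble))

# hypothetical surrogate: recommended rice area and runoff in the same volume
# units; planting beyond available water incurs a shortfall penalty
def farmer_benefit(area, runoff):
    return 10 * min(area, runoff) - 4 * max(0, area - runoff)

alts = [6, 8, 10]              # candidate maximum-rice-area recommendations
runoff_ensemble = [7, 10, 13]  # uncertain wet-season runoff scenarios
choice = robust_choice(alts, runoff_ensemble, farmer_benefit)
```

Note the robust pick is neither the most cautious nor the most ambitious alternative: maximin rewards alternatives whose downside under the driest scenario is tolerable.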
Sherrouse, Benson C.; Semmens, Darius J.; Clement, Jessica M.
2014-01-01
Despite widespread recognition that social-value information is needed to inform stakeholders and decision makers regarding trade-offs in environmental management, it too often remains absent from ecosystem service assessments. Although quantitative indicators of social values need to be explicitly accounted for in the decision-making process, they need not be monetary. Ongoing efforts to map such values demonstrate how they can also be made spatially explicit and relatable to underlying ecological information. We originally developed Social Values for Ecosystem Services (SolVES) as a tool to assess, map, and quantify nonmarket values perceived by various groups of ecosystem stakeholders. With SolVES 2.0 we have extended the functionality by integrating SolVES with Maxent maximum entropy modeling software to generate more complete social-value maps from available value and preference survey data and to produce more robust models describing the relationship between social values and ecosystems. The current study has two objectives: (1) evaluate how effectively the value index, a quantitative, nonmonetary social-value indicator calculated by SolVES, reproduces results from more common statistical methods of social-survey data analysis and (2) examine how the spatial results produced by SolVES provide additional information that could be used by managers and stakeholders to better understand more complex relationships among stakeholder values, attitudes, and preferences. To achieve these objectives, we applied SolVES to value and preference survey data collected for three national forests, the Pike and San Isabel in Colorado and the Bridger–Teton and the Shoshone in Wyoming. Value index results were generally consistent with results found through more common statistical analyses of the survey data such as frequency, discriminant function, and correlation analyses. 
In addition, spatial analysis of the social-value maps produced by SolVES provided information that was useful for explaining relationships between stakeholder values and forest uses. Our results suggest that SolVES can effectively reproduce information derived from traditional statistical analyses while adding spatially explicit, social-value information that can contribute to integrated resource assessment, planning, and management of forests and other ecosystems.
A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem.
Dang, Chuangyin; Xu, Lei
2002-02-01
A Lagrange multiplier and Hopfield-type barrier function method is proposed for approximating a solution of the traveling salesman problem. The method is derived from applications of Lagrange multipliers and a Hopfield-type barrier function and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the method searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that lower and upper bounds on variables are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the method converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the method seems more effective and efficient than the softassign algorithm.
Dang, C; Xu, L
2001-03-01
In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.
[A study on the diagnostic value of tear film objective scatter index in dry eye].
Su, Y D; Liang, Q F; Wang, N L; Antoine, Labbè
2017-09-11
Objective: To study the sensitivity and specificity of the tear film objective scatter index in the diagnosis of dry eye disease (DED). Methods: A prospective case-controlled study. Fifty-three patients with DED and 32 healthy age- and sex-matched control subjects were included from July to October 2016. All subjects underwent the following examinations sequentially: evaluation of ocular surface disease symptoms using the Ocular Surface Disease Index, optical quality detection, lipid layer thickness, tear film breakup time and Schirmer I test. With the Optical Quality Analysis System II, the values of the modulation transfer function cutoff, basic objective scatter index (OSI) and total OSI were measured. To eliminate the influence of other refractive media, the tear film OSI (TF-OSI) was calculated, and the difference in TF-OSI between the two groups was analyzed with the independent-samples t test. Spearman's correlation analysis was used to detect the correlation of each parameter in the DED group. With the receiver operating characteristic curve and the area under the curve (AUC), the specificity and sensitivity of TF-OSI and other parameters were described to differentiate DED from normal eyes. Results: In the dry eye group, the value of the modulation transfer function cutoff (32.07±11.95) was significantly lower than in the normal group (39.38±9.44, t=-3.096, P=0.003), and the mean value and dispersion of TF-OSI (0.50±0.43, 0.52±0.81) were higher than in the normal group (0.21±0.16, 0.12±0.01) (t=4.300, P=0.000; t=3.546, P=0.001). The mean value of TF-OSI had a positive correlation with lipid layer thickness (r=0.365, P=0.007) and with the dispersion of TF-OSI (r=0.581, P=0.000), and a negative correlation with the MTF cutoff (r=-0.368, P=0.007). For the diagnosis of DED, the mean value of TF-OSI had a sensitivity of 0.736 and a specificity of 0.762, and the AUC was 0.764. The dispersion of TF-OSI had a sensitivity of 0.811 and a specificity of 0.810, and the AUC was 0.900.
Conclusion: In the DED group, the mean value and dispersion of TF-OSI were higher than in the normal group. Given these advantages, TF-OSI may serve as a new method for the auxiliary diagnosis of dry eye. (Chin J Ophthalmol, 2017, 53: 668-674).
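The reported AUC values correspond to the Mann-Whitney form of the ROC area, which can be computed directly; the score values below are hypothetical, not the study's data.

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney form of the ROC area: the fraction of (diseased, healthy)
    pairs ranked correctly by the score, counting ties as half a win."""
    pairs = [(p, n) for p in scores_pos for n in scores_neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# hypothetical TF-OSI values: higher in dry eye patients than in controls
dry = [0.9, 0.6, 0.5, 0.3]
ctrl = [0.2, 0.25, 0.3, 0.1]
result = auc(dry, ctrl)
```

An AUC of 0.900 for the TF-OSI dispersion therefore means 90% of patient-control pairs would be ordered correctly by that index alone.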
A coarse-to-fine kernel matching approach for mean-shift based visual tracking
NASA Astrophysics Data System (ADS)
Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.
2009-03-01
Mean shift is an efficient pattern matching algorithm. It is widely used in visual tracking because it does not need to search the whole image space: it employs gradient optimization to reduce feature-matching time and achieve rapid object localization, and it uses the Bhattacharyya coefficient as the similarity measure between the object template and the candidate template. This paper presents a mean shift algorithm based on a coarse-to-fine search for the best kernel matching, addressing object tracking under large inter-frame motion. If the object's regions in two consecutive frames are far apart and do not overlap in image space, the traditional mean shift method can only reach a local optimum by iterating within the old object window, so the true position cannot be obtained and tracking fails. The proposed algorithm first uses a similarity measure function to roughly locate the moving object, and then applies mean shift iterations to obtain an accurate local optimum, successfully realizing tracking of objects with large motion. Experimental results show good performance in accuracy and speed compared with the background-weighted histogram algorithm in the literature.
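The Bhattacharyya coefficient used as the similarity measure can be computed directly; the histograms below are hypothetical.

```python
import math

def bhattacharyya(p, q):
    """Similarity between two normalized color histograms: 1.0 for identical
    distributions, smaller values for more dissimilar ones."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

template = [0.5, 0.3, 0.2]       # object model histogram
candidate = [0.5, 0.3, 0.2]      # candidate window at the true location
far_candidate = [0.1, 0.1, 0.8]  # candidate window over background
same = bhattacharyya(template, candidate)
diff = bhattacharyya(template, far_candidate)
```

In the coarse stage this coefficient is evaluated on a sparse grid of windows to roughly localize the object; the fine stage then runs mean shift iterations only from the best coarse window.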
Real-time color measurement using active illuminant
NASA Astrophysics Data System (ADS)
Tominaga, Shoji; Horiuchi, Takahiko; Yoshimura, Akihiko
2010-01-01
This paper proposes a method for real-time color measurement using an active illuminant. A synchronous measurement system is constructed by combining a high-speed active spectral light source and a high-speed monochrome camera. The light source is a programmable spectral source capable of emitting an arbitrary spectrum at high speed. The essential advantage of this system is that it captures spectral images at high frame rates without using filters. The new method of real-time colorimetry differs from traditional methods based on colorimeters or spectrometers: we project the color-matching functions onto an object surface as spectral illuminants, and can then obtain the CIE-XYZ tristimulus values directly from the camera outputs at every point on the surface. We describe the principle of our colorimetric technique based on projection of the color-matching functions and the procedure for realizing a real-time measurement system for a moving object. In an experiment, we examine the performance of real-time color measurement for a static object and a moving object.
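The projection idea is that the camera output equals the wavelength integral of illuminant times reflectance, so projecting each color-matching function as the illuminant yields one tristimulus value per frame. It can be sketched with toy three-band spectra (all numbers hypothetical):

```python
# toy 3-band spectra standing in for the continuous color-matching functions
xbar = [0.2, 0.6, 0.2]
ybar = [0.1, 0.8, 0.1]
zbar = [0.7, 0.2, 0.1]
reflectance = [0.5, 0.4, 0.3]   # object surface reflectance per band

def camera_output(illuminant, reflectance):
    """A monochrome sensor integrates illuminant x reflectance over wavelength,
    so with a color-matching function as the illuminant the reading is
    proportional to the corresponding tristimulus value."""
    return sum(e * r for e, r in zip(illuminant, reflectance))

X, Y, Z = (camera_output(cmf, reflectance) for cmf in (xbar, ybar, zbar))
```

Three synchronized frames, one per projected color-matching function, therefore give XYZ at every pixel without any spectral filter on the camera.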
NASA Astrophysics Data System (ADS)
Zhu, Jing; Nie, Fan
2005-07-01
Objective: To study the effects of intravascular low-level laser irradiation (ILLLI) on the immunologic function of cells in the treatment of psoriasis. Method: Forty-nine patients with psoriasis were treated by intravascular low-level laser irradiation (laser output power 4-5 mW, 1 hour per day, with one course of treatment lasting 10 days). T-lymphocyte subgroup and NK-cell function in peripheral blood were measured before and after treatment. Results: 1. The mean value of CD3+ after treatment was higher (P<0.05), a significant difference between pre- and post-treatment. 2. The mean value of CD4+ after treatment dropped slightly, while the mean values of CD4/CD8 and NK cells increased slightly, approaching the mean values of healthy subjects. 3. CD4+, CD8+ and NK-cell values that were under 30% before treatment increased markedly afterwards, while CD4+ and CD8+ values above 30% dropped markedly (P<0.05 and P<0.01). Statistical analysis showed significant to highly significant differences between pre- and post-treatment. Conclusions: Intravascular low-level laser irradiation (ILLLI) in the treatment of psoriasis has a bidirectional regulatory effect that can balance the immunologic function of cells.
2014-02-01
installation based on a Euclidean distance allocation and assigned that installation's threshold values. The second approach used a thin-plate spline ...installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the...the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND
NASA Astrophysics Data System (ADS)
Weber, V. L.
2018-03-01
We statistically analyze the images of the objects of the "light-line" and "half-plane" types which are observed through a randomly irregular air-water interface. The expressions for the correlation function of fluctuations of the image of an object given in the form of a luminous half-plane are found. The possibility of determining the spatial and temporal correlation functions of the slopes of a rough water surface from these relationships is shown. The problem of the probability of intersection of a small arbitrarily oriented line segment by the contour image of a luminous straight line is solved. Using the results of solving this problem, we show the possibility of determining the values of the curvature variances of a rough water surface. A practical method for obtaining an image of a rectilinear luminous object in the light rays reflected from the rough surface is proposed. It is theoretically shown that such an object can be synthesized by temporal accumulation of the image of a point source of light rapidly moving in the horizontal plane with respect to the water surface.
Generalized index for spatial data sets as a measure of complete spatial randomness
NASA Astrophysics Data System (ADS)
Hackett-Jones, Emily J.; Davies, Kale J.; Binder, Benjamin J.; Landman, Kerry A.
2012-06-01
Spatial data sets, generated from a wide range of physical systems, can be analyzed by counting the number of objects in a set of bins. Previous work has been limited to equal-sized bins, which are inappropriate for some domains (e.g., circular). We consider a non-equal-size bin configuration whereby overlapping or nonoverlapping bins cover the domain. A generalized index, defined in terms of a variance between bin counts, is developed to indicate whether or not a spatial data set, generated from exclusion or nonexclusion processes, is at the complete spatial randomness (CSR) state. Limiting values of the index are determined. Using examples, we investigate trends in the generalized index as a function of density and compare the results with those using equal-size bins. The smallest bin size must be much larger than the mean size of the objects. We can determine whether a spatial data set is at the CSR state or not by comparing the values of a generalized index for different bin configurations: the values will be approximately the same if the data is at the CSR state, while the values will differ if the data set is not at the CSR state. In general, the generalized index is lower than the limiting value of the index, since objects do not have access to the entire region due to blocking by other objects. These methods are applied to two applications: (i) spatial data sets generated from a cellular automata model of cell aggregation in the enteric nervous system and (ii) a known plant data distribution.
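The paper's generalized index is built from a variance between bin counts. As a rough stand-in for the idea (explicitly not the paper's index), the classical variance-to-mean dispersion index behaves analogously: near 1 under CSR, below 1 for regular patterns, above 1 for clustered ones:

```python
import numpy as np

def dispersion_index(counts):
    """Variance-to-mean ratio of bin counts.

    For complete spatial randomness (a homogeneous Poisson process),
    counts in equal bins are Poisson distributed, so the ratio is
    close to 1; regular (exclusion) patterns push it below 1 and
    clustered patterns above 1.
    NOTE: classical index of dispersion, not the paper's generalized index.
    """
    counts = np.asarray(counts, dtype=float)
    return float(counts.var(ddof=1) / counts.mean())

# A perfectly regular pattern (same count in every bin) gives 0,
# a strongly clustered one gives a value well above 1:
print(dispersion_index([4, 4, 4, 4]))
print(dispersion_index([0, 0, 0, 16]))
```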
NASA Astrophysics Data System (ADS)
Jawad, A.; Chattopadhyay, S.; Bhattacharya, S.; Pasqua, A.
2015-04-01
The objective of this paper is to discuss Chameleon Brans-Dicke gravity with non-minimal matter coupling of the scalar field. We take the modified holographic Ricci dark energy model in this gravity, with its energy density in interaction with the energy density of cold dark matter. We assume a power-law ansatz for the scale factor and the scalar field to discuss the potential as well as the coupling functions in the evolving universe. These reconstructed functions are plotted versus scalar field and time for different values of the power component n of the scale factor. We observe that the potential and coupling functions show increasing behavior, with consistent results in particular for a specific value of n. Finally, we have examined the validity of the generalized second law of thermodynamics and observed that it holds for all values of n. Financial support from the Department of Science and Technology, Govt. of India, under Project Grant No. SR/FTP/PS-167/2011 is thankfully acknowledged by SC.
Nystrom, Elizabeth A.; Burns, Douglas A.
2011-01-01
TOPMODEL uses a topographic wetness index computed from surface-elevation data to simulate streamflow and subsurface-saturation state, represented by the saturation deficit. Depth to water table was computed from simulated saturation-deficit values using computed soil properties. In the Fishing Brook Watershed, TOPMODEL was calibrated to the natural logarithm of streamflow at the study area outlet and depth to water table at Sixmile Wetland using a combined multiple-objective function. Runoff and depth to water table responded differently to some of the model parameters, and the combined multiple-objective function balanced the goodness-of-fit of the model realizations with respect to these parameters. Results show that TOPMODEL reasonably simulated runoff and depth to water table during the study period. The simulated runoff had a Nash-Sutcliffe efficiency of 0.738, but the model underpredicted total runoff by 14 percent. Depth to water table computed from simulated saturation-deficit values matched observed water-table depth moderately well; the root mean squared error of absolute depth to water table was 91 millimeters (mm), compared to the mean observed depth to water table of 205 mm. The correlation coefficient for temporal depth-to-water-table fluctuations was 0.624. The variability of the TOPMODEL simulations was assessed using prediction intervals grouped using the combined multiple-objective function. The calibrated TOPMODEL results for the entire study area were applied to several subwatersheds within the study area using computed hydrogeomorphic properties of the subwatersheds.
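The goodness-of-fit statistics quoted above (Nash-Sutcliffe efficiency, root mean squared error) are standard and can be computed as follows (a minimal sketch; function names are illustrative):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
    model is no better than predicting the observed mean."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

def rmse(observed, simulated):
    """Root mean squared error, in the units of the data."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# A model that always predicts the observed mean scores exactly 0:
obs = [1.0, 2.0, 3.0]
print(nash_sutcliffe(obs, [2.0, 2.0, 2.0]))  # → 0.0
```

In the study above, NSE was computed on the natural logarithm of streamflow and RMSE on depth to water table in millimeters.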
Effect of rice husk ash and fly ash on the compressive strength of high performance concrete
NASA Astrophysics Data System (ADS)
Van Lam, Tang; Bulgakov, Boris; Aleksandrova, Olga; Larsen, Oksana; Anh, Pham Ngoc
2018-03-01
The usage of industrial and agricultural wastes for building materials production plays an important role in improving the environment and the economy by preserving natural materials and land resources; reducing land, water and air pollution; and cutting waste handling and storage costs. This study mainly focuses on mathematically modeling the dependence of the compressive strength of high performance concrete (HPC) at the ages of 3, 7 and 28 days on the amounts of rice husk ash (RHA) and fly ash (FA) added to the concrete mixtures, using a central composite rotatable design. The result of this study provides the second-order regression equation of the objective function, the response-surface images and corresponding contours of the objective function, and the optimal points of HPC compressive strength. These objective functions, the compressive strength values of HPC at the ages of 3, 7 and 28 days, depend on two input variables: x1 (amount of RHA) and x2 (amount of FA). The Maple 13 program, solving the second-order regression equation, determines the optimum composition of the concrete mixture for obtaining high performance concrete and calculates the maximum value of the HPC compressive strength at the age of 28 days. The results give Max R28,HPC = 76.716 MPa when RHA = 0.1251 and FA = 0.3119 by mass of Portland cement.
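The second-order regression model underlying a two-factor central composite design has the form y = b0 + b1 x1 + b2 x2 + b12 x1 x2 + b11 x1^2 + b22 x2^2. A least-squares fit of that model can be sketched as follows (illustrative NumPy, not the paper's Maple workflow):

```python
import numpy as np

def fit_second_order(x1, x2, y):
    """Least-squares fit of the full quadratic response surface
    y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1**2 + b22*x2**2,
    the usual model for a two-factor central composite design."""
    x1 = np.asarray(x1, float)
    x2 = np.asarray(x2, float)
    X = np.column_stack([np.ones_like(x1), x1, x2,
                         x1 * x2, x1 ** 2, x2 ** 2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return coef
```

With the fitted coefficients in hand, the stationary point of the surface (the candidate strength optimum) follows from setting both partial derivatives to zero.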
Metro nature, environmental health, and economic value
Kathleen L. Wolf; Alicia S.T. Robbins
2015-01-01
Background: Nearly 40 years of research provides an extensive body of evidence about human health, well-being, and improved function benefits associated with experiences of nearby nature in cities.Objectives: We demonstrate the numerous opportunities for future research efforts that link metro nature, human health and well-being outcomes,...
[Information value of "additional tasks" method to evaluate pilot's work load].
Gorbunov, V V
2005-01-01
The "additional task" method was used to evaluate a pilot's workload in prolonged flight. The quantitative workload criterion, calculated from the durations of the latent periods of motor responses, is more informative for objective evaluation of the pilot's involvement in piloting functions than the other registered parameters.
Training Volunteers and Aides: An Inservice Teaching Packet.
ERIC Educational Resources Information Center
McBride, Deborah
This inservice packet is designed to guide teachers in training paraprofessionals to function in the school community. Complete lesson plans are included for six lessons of approximately 45 minutes to an hour. Specific measurable objectives are cited, and participant activities include: (1) discussing the value and use of paraprofessional…
ERIC Educational Resources Information Center
Simons, Jacob V., Jr.; Price, Barbara A.
2005-01-01
A recent classroom revelation caused us to reconsider the adequacy of the instructions offered in our textbooks for one of our most elementary quantitative methods. Specifically, we found that many students were mystified concerning how to pick an initial objective function value when plotting an isoprofit line in order to graphically solve a…
Factor Structure of the Chinese Virtues Questionnaire
ERIC Educational Resources Information Center
Duan, Wenjie; Ho, Samuel M. Y.; Yu, Bai; Tang, Xiaoqing; Zhang, Yonghong; Li, Tingting; Yuen, Tom
2012-01-01
Objectives: The present study examined the factorial invariance and functional equivalence of the Values in Action Inventory of Strengths (VIA-IS) among the Chinese. Methods: A total of 839 undergraduate students completed the 240-item Simplified Chinese version of the VIA-IS online. Another 40 students participated in qualitative interviews to…
The Press Conferences of Eleanor Roosevelt.
ERIC Educational Resources Information Center
Beasley, Maurine H.
Newly discovered transcriptions of 87 of First Lady Eleanor Roosevelt's women-only press conferences held from 1933 to 1945 make possible an examination of the objectives, topics, and value of these conferences. By holding the conferences, Mrs. Roosevelt attributed to women an important function in the political communication process, and at the…
Diabetes, Peripheral Neuropathy, and Lower Extremity Function
Chiles, Nancy S.; Phillips, Caroline L.; Volpato, Stefano; Bandinelli, Stefania; Ferrucci, Luigi; Guralnik, Jack M.; Patel, Kushang V.
2014-01-01
Objective Diabetes among older adults causes many complications, including decreased lower extremity function and physical disability. Diabetes can cause peripheral nerve dysfunction, which might be one pathway through which diabetes leads to decreased physical function. The study aims were to determine: (1) whether diabetes and impaired fasting glucose are associated with objective measures of physical function in older adults, (2) which peripheral nerve function (PNF) tests are associated with diabetes, and (3) whether PNF mediates the diabetes-physical function relationship. Research Design and Methods This study included 983 participants, age 65 and older from the InCHIANTI Study. Diabetes was diagnosed by clinical guidelines. Physical performance was assessed using the Short Physical Performance Battery (SPPB), scored from 0-12 (higher values, better physical function) and usual walking speed (m/s). PNF was assessed via standard surface electroneurographic study of right peroneal nerve conduction velocity, vibration and touch sensitivity. Clinical cut-points of PNF tests were used to create a neuropathy score from 0-5 (higher values, greater neuropathy). Multiple linear regression models were used to test associations. Results and Conclusion 12.8% (n=126) of participants had diabetes. Adjusting for age, sex, education, and other confounders, diabetic participants had decreased SPPB (β= −0.99; p< 0.01), decreased walking speed (β= −0.1m/s; p< 0.01), decreased nerve conduction velocity (β= −1.7m/s; p< 0.01), and increased neuropathy (β= 0.25; p< 0.01) compared to non-diabetic participants. Adjusting for nerve conduction velocity and neuropathy score decreased the effect of diabetes on SPPB by 20%, suggesting partial mediation through decreased PNF. PMID:24120281
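The 20% mediation figure quoted above is the relative attenuation of the diabetes coefficient once the nerve-function variables enter the model. A minimal sketch (the adjusted value of about -0.79 is back-calculated for illustration and is not taken from the paper; this is a rough mediation summary, not a formal mediation test):

```python
def effect_attenuation_pct(beta_total, beta_adjusted):
    """Percent of a regression effect 'explained away' when the model
    is additionally adjusted for a candidate mediator."""
    return 100.0 * (beta_total - beta_adjusted) / beta_total

# SPPB effect of diabetes shrinking from -0.99 to about -0.79 after
# adjusting for nerve conduction velocity and the neuropathy score:
print(effect_attenuation_pct(-0.99, -0.792))
```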
Kurnianingsih, Yoanna A; Mullette-Gillman, O'Dhaniel A
2016-01-01
When deciding, we aim to choose the "best" possible outcome. This is not just selection of the option that is the most numerous or physically largest, as options are translated from objective value (count) to subjective value (worth or utility). We localized the neural instantiation of the value-to-utility transformation to the dorsal anterior midcingulate cortex (daMCC), with independent replication. The daMCC encodes the context-specific information necessary to convert from count to worth. This encoding is not simply a representation of utility or preference, but the interaction of the two. Specifically, the relationship of brain activation to value is dependent on individual preference, with both positive and negative slopes across the population depending on whether each individual's preference results in enhancement or diminishment of the valuation. For a given value, across participants, enhanced daMCC activation corresponds to diminished subjective valuation, deactivation to enhanced subjective valuation, and non-modulated activation with non-modulated subjective valuation. Further, functional connectivity analyses identified brain regions (positive connectivity with the inferior frontal gyrus and negative connectivity with the nucleus accumbens) through which contextual information may be integrated into the daMCC and allow for outputs to modulate valuation signals. All analyses were replicated through an independent within-study replication, with initial testing in the gains domain and replication in the intermixed and mirrored losses trials. We also present and discuss an ancillary finding: we were unable to identify parametric value signals for losses through whole-brain analyses, and ROI analyses of the vmPFC presented non-modulation across loss value levels. 
These results identify the neural locus of the value-to-utility transformation, and provide a specific computational function for the daMCC in the production of subjective valuation through the integration of value, context, and preferences. PMID:27881949
By-products of Opuntia ficus-indica as a source of antioxidant dietary fiber.
Bensadón, Sara; Hervert-Hernández, Deisy; Sáyago-Ayerdi, Sonia G; Goñi, Isabel
2010-09-01
Dietary fiber and bioactive compounds are widely used as functional ingredients in processed foods. The market in this field is competitive and the development of new types of quality ingredients for the food industry is on the rise. Opuntia ficus-indica (cactus pear) produces edible tender stems (cladodes) and fruits with a high nutritional value in terms of minerals, protein, dietary fiber and phytochemicals; however, around 20% of fresh weight of cladodes and 45% of fresh weight of fruits are by-products. The objective of this study was therefore to determine the nutritional value of by-products obtained from cladodes and fruits from two varieties of Opuntia ficus-indica, examining their dietary fiber and natural antioxidant compound contents in order to obtain quality ingredients for functional foods and increase the added value of these by-products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Tian-Jy; Kim, Younghun
Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.
The improved business valuation model for RFID company based on the community mining method.
Li, Shugang; Yu, Zhaoxu
2017-01-01
Nowadays, the appetite for investment and mergers and acquisitions (M&A) activity in RFID companies is growing rapidly. Although a huge number of papers have addressed the topic of business valuation models based on statistical methods or neural network methods, only a few are dedicated to constructing a general framework for business valuation that improves performance with a network graph (NG) and the corresponding community mining (CM) method. In this study, an NG-based business valuation model is proposed, where a real options approach (ROA) integrating the CM method is designed to predict the company's net profit as well as estimate the company value. Three improvements are made in the proposed valuation model. Firstly, our model figures out the credibility of each node belonging to each community and clusters the network according to the evolutionary Bayesian method. Secondly, the improved bacterial foraging optimization algorithm (IBFOA) is adopted to calculate the optimized Bayesian posterior probability function. Finally, in IBFOA, a bi-objective method is used to assess the accuracy of prediction, and these two objectives are combined into one objective function using a new Pareto boundary method. The proposed method returns lower forecasting error than 10 well-known forecasting models on 3 different time-interval valuing tasks for the real-life simulation of RFID companies. PMID:28459815
Unal, Emre; Idilman, Ilkay Sedakat; Karçaaltıncaba, Muşturay
2017-02-01
New advances in liver magnetic resonance imaging (MRI) may enable diagnosis of pathologies unseen by conventional techniques. Normal T1 (550-620 ms at 1.5 T and 700-850 ms at 3 T), T2, T2* (>20 ms), T1rho (40-50 ms) mapping, proton density fat fraction (PDFF) (≤5%) and stiffness (2-3 kPa) values can enable differentiation of a normal liver from chronic and diffuse liver diseases. Gd-EOB-DTPA can enable assessment of liver function by using the postcontrast hepatobiliary phase or the T1 reduction rate (normally above 60%). T1 mapping can be important for the assessment of fibrosis, amyloidosis and copper overload. T1rho mapping is promising for the assessment of liver collagen deposition. PDFF can allow objective treatment assessment in NAFLD and NASH patients. T2 and T2* are used for iron overload determination. MR fingerprinting may enable single-slice acquisition and easy implementation of multiparametric MRI and follow-up of patients. Areas covered: T1, T2, T2*, PDFF and stiffness, diffusion weighted imaging, intravoxel incoherent motion imaging (ADC, D, D* and f values) and function analysis are reviewed. Expert commentary: Multiparametric MRI can enable biopsy-free diagnosis and more objective staging of diffuse liver disease, cirrhosis and predisposing diseases. A comprehensive approach is needed to understand and overcome the effects of iron, fat, fibrosis, edema, inflammation and copper on MR relaxometry values in diffuse liver disease.
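The T1 reduction rate mentioned above (normally above 60%) is simply the relative T1 shortening between the pre-contrast and hepatobiliary-phase maps. A sketch (the 800/300 ms example values are hypothetical, chosen only to illustrate the arithmetic):

```python
def t1_reduction_rate(t1_pre_ms, t1_post_ms):
    """Relative T1 shortening after Gd-EOB-DTPA, in percent.
    Values above roughly 60% are described as normal in the text."""
    return 100.0 * (t1_pre_ms - t1_post_ms) / t1_pre_ms

# e.g. a liver whose T1 drops from 800 ms to 300 ms:
print(t1_reduction_rate(800.0, 300.0))  # → 62.5
```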
Functional sensibility of the hand in leprosy patients.
van Brakel, W H; Kets, C M; van Leerdam, M E; Khawas, I B; Gurung, K S
1997-03-01
The aim of this cross-sectional comparative study was to compare the results of Semmes-Weinstein monofilament testing (SWM) and moving 2-point discrimination (M2PD) with four tests of functional sensibility: recognition of objects, discrimination of size and texture, and detection of dots. Ninety-eight leprosy in- and outpatients at Green Pastures Hospital in Pokhara, Nepal, were tested with each of the above tests and the results were compared to see how well they agreed. Using the tests of functional sensibility as reference points, we examined the validity of the SWM and M2PD as predictors of functional sensibility. There was a definite, but only moderate, correlation between thresholds of monofilaments and M2PD and functional sensibility of the hand. A normal result with the SWM and/or M2PD had a good predictive value for normal functional sensibility. Sensitivity was reasonable against recognition of objects and discrimination of textures as reference tests (80-90% and 88-93%), but poor against discrimination of size and detection of dots (50-75% and 43-65%). Specificity was high for most combinations of SWM or M2PD with any of the tests of functional sensibility (85-99%). Above a monofilament threshold of 2 g, the predictive value of an abnormal test was 100% for dot detection and 83-92% for textural discrimination. This indicates that impairment of touch sensibility at this level correlates well with loss of dot detection and textural discrimination in patients with leprous neuropathy. For M2PD the pattern was very similar. Above a threshold of 5 mm, 95-100% of affected hands had loss of dot detection and 73-80% had loss of textural discrimination. Monofilament testing and M2PD did not seem suitable as proxy measures of functional sensibility of the hand in leprosy patients. However, a normal threshold with monofilaments and/or M2PD had a good predictive value for normal functional sensibility.
Above a monofilament threshold of 2 g and/or a M2PD threshold of 5 mm, textural discrimination was abnormal in most hands.
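The sensitivity, specificity and predictive values reported above all derive from a 2x2 table of screening-test result against the reference test. A minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 table:
    tp/fp/fn/tn are true/false positives and negatives against
    the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # predictive value of an abnormal test
        "npv": tn / (tn + fn),  # predictive value of a normal test
    }

# Hypothetical counts for illustration only:
m = diagnostic_metrics(tp=45, fp=5, fn=5, tn=45)
print(m["sensitivity"], m["specificity"])  # → 0.9 0.9
```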
2012-01-01
Background Economic viability of treatments for primary open-angle glaucoma (POAG) should be assessed objectively to prioritise health care interventions. This study aims to identify the methods for eliciting utility values (UVs) most sensitive to differences in visual field and visual functioning in patients with POAG. As a secondary objective, the dimensions of generic health-related and vision-related quality of life most affected by progressive vision loss will be identified. Methods A total of 132 POAG patients were recruited. Three sets of utility values (EuroQoL EQ-5D, Short Form SF-6D, Time Trade Off) and a measure of perceived visual functioning from the National Eye Institute Visual Function Questionnaire (VFQ-25) were elicited during face-to-face interviews. The sensitivity of UVs to differences in the binocular visual field, visual acuity and visual functioning measures was analysed using non-parametric statistical methods. Results Median utilities were similar across Integrated Visual Field score quartiles for EQ-5D (P = 0.08) whereas SF-6D and Time-Trade-Off UVs significantly decreased (p = 0.01 and p = 0.001, respectively). The VFQ-25 score varied across Integrated Visual Field and binocular visual acuity groups and was associated with all three UVs (P ≤ 0.001); most of its vision-specific sub-scales were associated with the vision markers. The most affected dimension was driving. A relationship with vision markers was found for the physical component of SF-36 and not for any dimension of EQ-5D. Conclusions The Time-Trade-Off was more sensitive than EQ-5D and SF-6D to changes in vision and visual functioning associated with glaucoma progression but could not measure quality of life changes in the mildest disease stages. PMID:22909264
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Yang, Yanxi
2018-05-01
We present a new wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry (2-D WTP). First, the maximum-value point of the two-dimensional wavelet transform coefficient modulus is extracted, along with the local extreme points above 90% of the maximum value; together these constitute the wavelet ridge candidates. Then, the gradient of the rotation factor is introduced into Abid's cost function, and a logarithmic Logistic model is used to adjust and improve the cost function weights so as to obtain a more reasonable value estimation. Finally, a dynamic programming method is used to find the optimal wavelet ridge accurately, and the wrapped phase is obtained by extracting the phase at the ridge. The advantage is that fringe patterns with low signal-to-noise ratio can be demodulated accurately, with better noise immunity. Meanwhile, only one fringe pattern needs to be projected onto the measured object, so dynamic three-dimensional (3-D) measurement in harsh environments can be realized. Computer simulation and experimental results show that, for fringe patterns with noise pollution, the 3-D surface recovery accuracy of the proposed algorithm is increased. In addition, the demodulation phase accuracy of the Morlet, Fan and Cauchy mother wavelets is compared.
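The final dynamic-programming step can be sketched generically as a ridge tracker over the wavelet coefficient modulus. This sketch uses a simple quadratic jump penalty between neighboring positions rather than the paper's rotation-factor/Logistic cost function, so it only illustrates the DP mechanics:

```python
import numpy as np

def extract_ridge(modulus, smoothness=1.0):
    """Viterbi-style ridge extraction: choose one scale per position
    maximizing the summed modulus minus a quadratic penalty on scale
    jumps between adjacent positions."""
    n_scales, n_positions = modulus.shape
    cost = np.empty((n_scales, n_positions))
    back = np.zeros((n_scales, n_positions), dtype=int)
    cost[:, 0] = modulus[:, 0]
    scales = np.arange(n_scales)
    for j in range(1, n_positions):
        for s in range(n_scales):
            # Score of arriving at scale s from each previous scale:
            candidates = cost[:, j - 1] - smoothness * (scales - s) ** 2
            best = int(np.argmax(candidates))
            back[s, j] = best
            cost[s, j] = modulus[s, j] + candidates[best]
    # Backtrack the best path from the last position:
    ridge = np.zeros(n_positions, dtype=int)
    ridge[-1] = int(np.argmax(cost[:, -1]))
    for j in range(n_positions - 1, 0, -1):
        ridge[j - 1] = back[ridge[j], j]
    return ridge
```

In profilometry, the wrapped phase is then read off the complex wavelet coefficients along the returned ridge indices.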
Operation of Power Grids with High Penetration of Wind Power
NASA Astrophysics Data System (ADS)
Al-Awami, Ali Taleb
The integration of wind power into the power grid poses many challenges due to its highly uncertain nature. This dissertation involves two main components related to the operation of power grids with high penetration of wind energy: wind-thermal stochastic dispatch and wind-thermal coordinated bidding in short-term electricity markets. In the first part, a stochastic dispatch (SD) algorithm is proposed that takes into account the stochastic nature of the wind power output. The uncertainty associated with wind power output given the forecast is characterized using conditional probability density functions (CPDF). Several functions are examined to characterize wind uncertainty including Beta, Weibull, Extreme Value, Generalized Extreme Value, and Mixed Gaussian distributions. The unique characteristics of the Mixed Gaussian distribution are then utilized to facilitate the speed of convergence of the SD algorithm. A case study is carried out to evaluate the effectiveness of the proposed algorithm. Then, the SD algorithm is extended to simultaneously optimize the system operating costs and emissions. A modified multi-objective particle swarm optimization algorithm is suggested to identify the Pareto-optimal solutions defined by the two conflicting objectives. A sensitivity analysis is carried out to study the effect of changing load level and imbalance cost factors on the Pareto front. In the second part of this dissertation, coordinated trading of wind and thermal energy is proposed to mitigate risks due to those uncertainties. The problem of wind-thermal coordinated trading is formulated as a mixed-integer stochastic linear program. The objective is to obtain the optimal tradeoff bidding strategy that maximizes the total expected profits while controlling trading risks. For risk control, a weighted term of the conditional value at risk (CVaR) is included in the objective function. 
The CVaR aims to maximize the expected profits of the least profitable scenarios, thus improving trading risk control. A case study comparing coordinated with uncoordinated bidding strategies depending on the trader's risk attitude is included. Simulation results show that coordinated bidding can improve the expected profits while significantly improving the CVaR.
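The CVaR-weighted objective described above can be sketched for a discrete scenario set. The profit values, probabilities, confidence level, and weight below are hypothetical illustrations, not figures from the dissertation:

```python
import numpy as np

def cvar_objective(profits, probs, beta=0.95, weight=0.5):
    """Expected profit plus a weighted CVaR term over the least
    profitable scenarios (a sketch of the risk-controlled objective)."""
    profits = np.asarray(profits, dtype=float)
    probs = np.asarray(probs, dtype=float)
    expected = float(probs @ profits)
    # Sort scenarios from least to most profitable and take the
    # worst tail of probability mass (1 - beta).
    order = np.argsort(profits)
    p_sorted, v_sorted = probs[order], profits[order]
    tail = 1.0 - beta
    cum = np.cumsum(p_sorted)
    mass = np.minimum(p_sorted, np.maximum(tail - (cum - p_sorted), 0.0))
    cvar = float(mass @ v_sorted) / tail
    return expected + weight * cvar, expected, cvar

# Hypothetical trading scenarios: profit per scenario and its probability.
profits = [-10.0, 5.0, 20.0, 40.0]
probs = [0.05, 0.25, 0.40, 0.30]
obj, exp_profit, cvar = cvar_objective(profits, probs)
```

Maximizing `obj` trades off expected profit against the average profit of the worst 5% of scenarios; increasing `weight` makes the bidding strategy more risk-averse.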
Cloaking of arbitrarily shaped objects with homogeneous coatings
NASA Astrophysics Data System (ADS)
Forestiere, Carlo; Dal Negro, Luca; Miano, Giovanni
2014-05-01
We present a theory for the cloaking of arbitrarily shaped objects and demonstrate electromagnetic scattering cancellation through designed homogeneous coatings. First, in the small-particle limit, we expand the dipole moment of a coated object in terms of its resonant modes. By zeroing the numerator of the resulting rational function, we accurately predict the permittivity values of the coating layer that abates the total scattered power. Then, we extend the applicability of the method beyond the small-particle limit, deriving the radiation corrections of the scattering-cancellation permittivity within a perturbation approach. Our method permits the design of invisibility cloaks for irregularly shaped devices such as complex sensors and detectors.
The disturbing function for polar Centaurs and transneptunian objects
NASA Astrophysics Data System (ADS)
Namouni, F.; Morais, M. H. M.
2017-10-01
The classical disturbing function of the three-body problem is based on an expansion of the gravitational interaction in the vicinity of nearly coplanar orbits. Consequently, it is not suitable for the identification and study of resonances of the Centaurs and transneptunian objects on nearly polar orbits with the Solar system planets. Here, we provide a series expansion algorithm of the gravitational interaction in the vicinity of polar orbits and produce explicitly the disturbing function to fourth order in eccentricity and inclination cosine. The properties of the polar series differ significantly from those of the classical disturbing function: the polar series can model any resonance, as the expansion order is not related to the resonance order. The powers of eccentricity and inclination of the force amplitude of a p:q resonance do not depend on the value of the resonance order |p - q| but only on its parity. Thus, to lowest order in eccentricity e, all even-resonance-order eccentricity amplitudes are ∝e² and odd ones ∝e. With the new findings on the structure of the polar disturbing function and the possible resonant critical arguments, we illustrate the dynamics of the polar resonances 1:3, 3:1, 2:9 and 7:9, where transneptunian object 471325 could currently be locked.
Estimation of scattering object characteristics for image reconstruction using a nonzero background.
Jin, Jing; Astheimer, Jeffrey; Waag, Robert
2010-06-01
Two methods are described to estimate the boundary of a 2-D penetrable object and the average sound speed in the object. One method is for circular objects centered in the coordinate system of the scattering observation. This method uses an orthogonal function expansion for the scattering. The other method is for noncircular, essentially convex objects. This method uses cross correlation to obtain time differences that determine a family of parabolas whose envelope is the boundary of the object. A curve-fitting method and a phase-based method are described to estimate and correct the offset of an uncentered radial or elliptical object. A method based on the extinction theorem is described to estimate absorption in the object. The methods are applied to calculated scattering from a circular object with an offset and to measured scattering from an offset noncircular object. The results show that the estimated boundaries, sound speeds, and absorption slopes agree very well with independently measured or true values when the assumptions of the methods are reasonably satisfied.
Design of Magnetic Charged Particle Lens Using Analytical Potential Formula
NASA Astrophysics Data System (ADS)
Al-Batat, A. H.; Yaseen, M. J.; Abbas, S. R.; Al-Amshani, M. S.; Hasan, H. S.
2018-05-01
The aim of the current research was to exploit the potential of two cylindrical electric lenses to produce a mathematical model from which one can determine the magnetic field distribution of a charged-particle objective lens. With the aid of Simulink in the MATLAB environment, Simulink models were built to determine the distribution of the target function and its related axial functions along the optical axis of the charged-particle lens. The present study showed that the physical parameters (i.e., the maximum value Bmax and the half width W of the field distribution) and the objective properties of the charged-particle lens are affected by varying the main geometrical parameter of the lens, namely the bore radius R.
3D Reasoning from Blocks to Stability.
Zhaoyin Jia; Gallagher, Andrew C; Saxena, Ashutosh; Chen, Tsuhan
2015-05-01
Objects occupy physical space and obey physical laws. To truly understand a scene, we must reason about the space that objects in it occupy, and how each object is stably supported by others. In other words, we seek to understand which objects would, if moved, cause other objects to fall. This 3D volumetric reasoning is important for many scene understanding tasks, ranging from segmentation of objects to perception of a rich 3D, physically well-founded interpretation of the scene. In this paper, we propose a new algorithm to parse a single RGB-D image with 3D block units while jointly reasoning about the segments, volumes, supporting relationships, and object stability. Our algorithm is based on the intuition that a good 3D representation of the scene is one that fits the depth data well and is a stable, self-supporting arrangement of objects (i.e., one that does not topple). We design an energy function for representing the quality of the block representation based on these properties. Our algorithm fits 3D blocks to the depth values corresponding to image segments, and iteratively optimizes the energy function. Our proposed algorithm is the first to consider stability of objects in complex arrangements for reasoning about the underlying structure of the scene. Experimental results show that our stability-reasoning framework improves RGB-D segmentation and scene volumetric representation.
Reduction of Pulmonary Function After Surgical Lung Resections of Different Volume
Cukic, Vesna
2014-01-01
Introduction: In recent years an increasing number of lung resections have been performed because of the rising prevalence of lung cancer, which occurs mainly in patients with limited lung function, both conditions sharing a common etiologic factor: cigarette smoking. Objective: To determine the extent of lung function loss after surgical lung resections of different extents. Methods: The study was done on 58 patients operated on at the Clinic for Thoracic Surgery KCU Sarajevo, previously treated at the Clinic for Pulmonary Diseases “Podhrastovi” in the period from 01.06.2012 to 01.06.2014. The following resections were done: pulmectomy (left, right) and lobectomy (upper, lower; left and right). The values of postoperative pulmonary function were compared with preoperative ones. As a parameter of lung function we used FEV1 (forced expiratory volume in one second), and changes in FEV1 are expressed in liters and as a percentage of the recorded preoperative and normal values of FEV1. Measurements of lung function were performed seven days before and 2 months after surgery. Results: Postoperative FEV1 was decreased compared to preoperative values. After pulmectomy the maximum reduction of FEV1 was 44%, and after lobectomy it was 22% of the preoperative values. Conclusion: Patients with airway obstruction are limited in their daily life before the surgery, and an additional loss of lung tissue after resection contributes to their disability. Potential benefits of lung resection surgery should be balanced against postoperative morbidity and mortality. PMID:25568542
Gode, Sercan; Benzer, Murat; Uslu, Mustafa; Kaya, Isa; Midilli, Rasit; Karci, Bulent
2018-02-01
Severe dorsal deviations in crooked noses are treated by either in situ septoplasty with asymmetric spreader grafts (ISS) or extracorporeal subtotal septal reconstruction (ECS). To our knowledge, except one retrospective study, there is no other that compares the objective and subjective results of these two treatment modalities. The aim of this study was to compare the aesthetic and functional outcomes of ECS and ISS in crooked noses. This study was carried out on 40 patients (ISS in 20 patients and ECS in 20 patients) who underwent external rhinoplasty surgery due to crooked noses between May 2014 and January 2016. While performing rhinoplasty on the patients, the decision of whether to use the ECS or ISS technique was randomized in a sequential fashion. Surgical outcomes were assessed and compared using the anthropometric measurement of photographs with Rhinobase software. Subjective assessments of nasal obstruction and aesthetic satisfaction were evaluated with a visual analog scale. There was a significant difference between rhinion deviation angle, supratip deviation angle (SDA) and tip deviation angle pre- and postoperatively in the ECS group, whereas in the ISS group, except SDA, all other postoperative angles were significantly improved from preoperative values (p = 0.218). The nasal tip projection in the ECS and ISS groups was 29.48, 31.5 preoperatively and 29.78, 31.26 postoperatively. The mean postoperative nasal tip projection value (p > 0.005) did not change significantly compared to the preoperative value in both groups. The mean postoperative value of nasolabial (p = 0.226) angle did not change significantly compared to the mean preoperative one in the ECS group. However, in the ISS group, the mean postoperative value of nasolabial (p = 0.001) angle significantly improved compared to the mean preoperative value. 
There was significant improvement in both groups, while improvements in both functional and aesthetic outcomes were much higher in the extracorporeal group. None of the patients had postoperative nasal obstruction that required revision surgery. One patient underwent revision rhinoplasty due to an irregularity on the nasal dorsum in the ECS group. This is the first study that compares subjective and objective aesthetic and functional outcomes of crooked nose surgery according to two common septoplasty techniques in a randomized self-controlled fashion. This study was effective in both objectively and subjectively comparing the functional and aesthetic aspect of the patients submitted to two common different techniques of treatment of nasal deviations in crooked nose patients. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Psek, Wayne; Davis, F. Daniel; Gerrity, Gloria; Stametz, Rebecca; Bailey-Davis, Lisa; Henninger, Debra; Sellers, Dorothy; Darer, Jonathan
2016-01-01
Introduction: Healthcare leaders need operational strategies that support organizational learning for continued improvement and value generation. The learning health system (LHS) model may provide leaders with such strategies; however, little is known about leaders’ perspectives on the value and application of system-wide operationalization of the LHS model. The objective of this project was to solicit and analyze senior health system leaders’ perspectives on the LHS and learning activities in an integrated delivery system. Methods: A series of interviews were conducted with 41 system leaders from a broad range of clinical and administrative areas across an integrated delivery system. Leaders’ responses were categorized into themes. Findings: Ten major themes emerged from our conversations with leaders. While leaders generally expressed support for the concept of the LHS and enhanced system-wide learning, their concerns and suggestions for operationalization were strongly aligned with their functional area and strategic goals. Discussion: Our findings suggest that leaders tend to adopt a very pragmatic approach to learning. Leaders expressed a dichotomy between the operational imperative to execute operational objectives efficiently and the need for rigorous evaluation. Alignment of learning activities with system-wide strategic and operational priorities is important to gain leadership support and resources. Practical approaches to addressing opportunities and challenges identified in the themes are discussed. Conclusion: Continuous learning is an ongoing, multi-disciplinary function of a health care delivery system. Findings from this and other research may be used to inform and prioritize system-wide learning objectives and strategies which support reliable, high-value care delivery. PMID:27683668
Anisotropic yield function capable of predicting eight ears
NASA Astrophysics Data System (ADS)
Yoon, J. H.; Cazacu, O.
2011-08-01
Deep drawing of a cylindrical cup from a rolled sheet is one of the typical forming operations in which the effect of plastic anisotropy is most evident. Indeed, it is well documented in the literature that the number of ears and the shape of the earing pattern correlate with the r-value profile. For the strongly textured aluminum alloy AA 5042 (Numisheet Benchmark 2011), the experimental r-value distribution has two minima between the rolling and transverse directions, and the data provided for this material show that the r-value along the transverse direction (TD) is five times larger than the value corresponding to the rolling direction. Therefore, it is expected that the earing profile has more than four ears. The main objective of this paper is to assess whether a new form of the CPB06ex2 yield function (Plunkett et al. (2008)), tailored for metals with no tension-compression asymmetry, is capable of predicting more than four ears for this material.
Navarro, Xavier
2016-02-01
Peripheral nerve injuries usually lead to severe loss of motor, sensory and autonomic functions in the patients. Due to the complex requirements for adequate axonal regeneration, functional recovery is often poorly achieved. Experimental models are useful to investigate the mechanisms related to axonal regeneration and tissue reinnervation, and to test new therapeutic strategies to improve functional recovery. Therefore, objective and reliable evaluation methods should be applied for the assessment of regeneration and function restitution after nerve injury in animal models. This review gives an overview of the most useful methods to assess nerve regeneration, target reinnervation and recovery of complex sensory and motor functions, their values and limitations. The selection of methods has to be adequate to the main objective of the research study, either enhancement of axonal regeneration, improving regeneration and reinnervation of target organs by different types of nerve fibres, or increasing recovery of complex sensory and motor functions. It is generally recommended to use more than one functional method for each purpose, and also to perform morphological studies of the injured nerve and the reinnervated targets. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
A Study of the Congruency of Competencies and Criterion-Referenced Measures.
ERIC Educational Resources Information Center
Jones, John Wilbur, Jr.
The job of the 4-H extension agent involves fairly complex levels of performance. The curriculum for the extension agent program should produce youth workers who have the ability to perform competently and who possess the basic concepts and values required to function effectively. Performance objectives were written for each competency considered…
USDA-ARS?s Scientific Manuscript database
Cymbopogon flexuosus and C. martinii are perennial grasses grown to produce essential oils for the fragrance industry. The objectives of this study were (1) to evaluate biomass and oil yields as a function of nitrogen and sulfur fertilization, and (2) to characterize their utility for lignocellulosi...
Discovering the Laplace Transform in Undergraduate Differential Equations
ERIC Educational Resources Information Center
Quinn, Terrance J.; Rai, Sanjay
2008-01-01
The Laplace Transform is an object of fundamental importance in pure and applied mathematics. In addition, it has special pedagogical value in that it can provide a natural and concrete setting for a student to begin thinking about the modern concepts of "operator" and "functional". Most undergraduate textbooks, however, merely define the…
Accounting for range uncertainties in the optimization of intensity modulated proton therapy.
Unkelbach, Jan; Chan, Timothy C Y; Bortfeld, Thomas
2007-05-21
Treatment plans optimized for intensity modulated proton therapy (IMPT) may be sensitive to range variations. The dose distribution may deteriorate substantially when the actual range of a pencil beam does not match the assumed range. We present two treatment planning concepts for IMPT which incorporate range uncertainties into the optimization. The first method is a probabilistic approach. The range of a pencil beam is assumed to be a random variable, which makes the delivered dose and the value of the objective function a random variable too. We then propose to optimize the expectation value of the objective function. The second approach is a robust formulation that applies methods developed in the field of robust linear programming. This approach optimizes the worst case dose distribution that may occur, assuming that the ranges of the pencil beams may vary within some interval. Both methods yield treatment plans that are considerably less sensitive to range variations compared to conventional treatment plans optimized without accounting for range uncertainties. In addition, both approaches--although conceptually different--yield very similar results on a qualitative level.
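The probabilistic approach can be sketched with a toy one-dimensional dose model. The Gaussian pencil-beam profiles, scenario shifts, and probabilities below are illustrative assumptions, not the paper's actual dose calculation:

```python
import numpy as np

def dose(weights, shift):
    """Toy 1-D dose model: each pencil beam deposits a Gaussian
    profile whose centre moves with the range error `shift`."""
    x = np.linspace(0.0, 10.0, 101)
    centres = np.array([4.0, 5.0, 6.0]) + shift
    profiles = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / 0.8) ** 2)
    return profiles @ weights

def expected_objective(weights, target, shifts, probs):
    """Expectation of a quadratic dose-difference objective over
    discrete range-error scenarios (the probabilistic formulation)."""
    return sum(p * np.mean((dose(weights, s) - target) ** 2)
               for p, s in zip(probs, shifts))

target = dose(np.ones(3), 0.0)        # nominal plan defines the target dose
shifts = [-0.5, 0.0, 0.5]             # assumed range-error scenarios
probs = [0.25, 0.5, 0.25]
val_nominal = expected_objective(np.ones(3), target, [0.0], [1.0])
val_robust = expected_objective(np.ones(3), target, shifts, probs)
```

Optimizing `expected_objective` over `weights` (rather than the zero-shift objective alone) is what penalizes plans whose dose deteriorates under range variations.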
Information filtering via a scaling-based function.
Qiu, Tian; Zhang, Zi-Ke; Chen, Guang
2013-01-01
Finding a universal description of the algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL) independent of recommendation list length based on a hybrid algorithm of heat conduction and mass diffusion, by finding out the scaling function for the tunable parameter and object average degree. The optimal value of the tunable parameter can be abstracted from the scaling function, which is heterogeneous for the individual object. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably promotes the personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting a high novelty, and solving the key challenge of cold start problem.
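The underlying hybrid of heat conduction and mass diffusion, with a tunable parameter that may vary per object, can be sketched on a small user-object matrix. The degree-dependent form of `lam` below is a hypothetical stand-in for the paper's scaling function:

```python
import numpy as np

def hybrid_scores(A, lam):
    """Hybrid heat-conduction / mass-diffusion recommendation:
    W[a, b] = (sum_u A[u, a] * A[u, b] / k_user[u])
              / (k_obj[a]**(1 - lam[a]) * k_obj[b]**lam[a]),
    where lam may be heterogeneous across objects."""
    k_obj = A.sum(axis=0).astype(float)    # object degrees
    k_user = A.sum(axis=1).astype(float)   # user degrees
    core = (A / k_user[:, None]).T @ A     # sum_u A[u, a] A[u, b] / k_user[u]
    lam = np.broadcast_to(lam, k_obj.shape)
    W = core / (k_obj[:, None] ** (1 - lam[:, None])
                * k_obj[None, :] ** lam[:, None])
    return A @ W.T                         # per-user recommendation scores

# Toy user-object adjacency matrix (3 users, 4 objects).
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
# Hypothetical degree-dependent lambda, standing in for the scaling function.
lam_per_object = 0.5 + 0.1 * np.log(A.sum(axis=0))
scores = hybrid_scores(A, lam_per_object)
```

With `lam = 1` everywhere this reduces to pure mass diffusion and with `lam = 0` to pure heat conduction; letting `lam` depend on object degree is the kind of heterogeneity the scaling function provides.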
SASS wind ambiguity removal by direct minimization. [Seasat-A satellite scatterometer
NASA Technical Reports Server (NTRS)
Hoffman, R. N.
1982-01-01
An objective analysis procedure is presented which combines Seasat-A satellite scatterometer (SASS) data with other available data on wind speeds by minimizing an objective function of gridded wind speed values. The functions are defined as the loss functions for the SASS velocity data, the forecast, the SASS velocity magnitude data, and conventional wind speed data. Only aliases closest to the analysis were included, and a method for improving the first guess while using a minimization technique and slowly changing the parameters of the problem is introduced. The model is employed to predict the wind field for the North Atlantic on Sept. 10, 1978. Dealiased SASS data is compared with available ship readings, showing good agreement between the SASS dealiased winds and the winds measured at the surface. Expansion of the model to take in low-level cloud measurements, pressure data, and convergence and cloud level data correlations is discussed.
Debunking the Myth of Value-Neutral Virginity: Toward Truth in Scientific Advertising
Mandel, David R.; Tetlock, Philip E.
2016-01-01
The scientific community often portrays science as a value-neutral enterprise that crisply demarcates facts from personal value judgments. We argue that this depiction is unrealistic and important to correct because science serves an important knowledge generation function in all modern societies. Policymakers often turn to scientists for sound advice, and it is important for the wellbeing of societies that science delivers. Nevertheless, scientists are human beings and human beings find it difficult to separate the epistemic functions of their judgments (accuracy) from the social-economic functions (from career advancement to promoting moral-political causes that “feel self-evidently right”). Drawing on a pluralistic social functionalist framework that identifies five functionalist mindsets—people as intuitive scientists, economists, politicians, prosecutors, and theologians—we consider how these mindsets are likely to be expressed in the conduct of scientists. We also explore how the context of policymaker advising is likely to activate or de-activate scientists’ social functionalist mindsets. For instance, opportunities to advise policymakers can tempt scientists to promote their ideological beliefs and values, even if advising also brings with it additional accountability pressures. We end prescriptively with an appeal to scientists to be more circumspect in characterizing their objectivity and honesty and to reject idealized representations of scientific behavior that inaccurately portray scientists as value-neutral virgins. PMID:27064318
Robust Representation of Stable Object Values in the Oculomotor Basal Ganglia
Yasuda, Masaharu; Yamamoto, Shinya; Hikosaka, Okihide
2012-01-01
Our gaze tends to be directed to objects previously associated with rewards. Such object values change flexibly or remain stable. Here we present evidence that the monkey substantia nigra pars reticulata (SNr) in the basal ganglia represents stable, rather than flexible, object values. After across-day learning of object–reward association, SNr neurons gradually showed a response bias to surprisingly many visual objects: inhibition to high-valued objects and excitation to low-valued objects. Many of these neurons were shown to project to the ipsilateral superior colliculus. This neuronal bias remained intact even after >100 d without further learning. In parallel with the neuronal bias, the monkeys tended to look at high-valued objects. The neuronal and behavioral biases were present even if no value was associated during testing. These results suggest that SNr neurons bias the gaze toward objects that were consistently associated with high values in one’s history. PMID:23175843
Rudebeck, Peter H.; Murray, Elisabeth A.
2014-01-01
The primate orbitofrontal cortex (OFC) is often treated as a single entity, but architectonic and connectional neuroanatomy indicates that it has distinguishable parts. Nevertheless, few studies have attempted to dissociate the functions of its subregions. Here we review findings from recent neuropsychological and neurophysiological studies that do so. The lateral OFC seems to be important for learning, representing and updating specific object–reward associations. Medial OFC seems to be important for value comparisons and choosing among objects on that basis. Rather than viewing this dissociation of function in terms of learning versus choosing, however, we suggest that it reflects the distinction between contrasts and comparisons: differences versus similarities. Making use of high-dimensional representations that arise from the convergence of several sensory modalities, the lateral OFC encodes contrasts among outcomes. The medial OFC reduces these contrasting representations of value to a single dimension, a common currency, in order to compare alternative choices. PMID:22145870
Kim, Jeeyong; Cho, Chi Hyun; Jung, Bo Kyeung; Nam, Jeonghun; Seo, Hong Seog; Shin, Sehyun; Lim, Chae Seung
2018-04-14
The objective of this study was to comparatively evaluate three commercial whole-blood platelet function analyzer systems: Platelet Function Analyzer-200 (PFA; Siemens Canada, Mississauga, Ontario, Canada), Multiplate analyzer (MP; Roche Diagnostics International Ltd., Rotkreuz, Switzerland), and Plateletworks Combo-25 kit (PLW; Helena Laboratories, Beaumont, TX, USA). Venipuncture was performed on 160 patients who visited a department of cardiology. Pairwise agreement among the three platelet function assays was assessed using Cohen's kappa coefficient and percent agreement within the reference limit. Kappa values with the same agonists were poor between PFA-collagen (COL; agonist)/adenosine diphosphate (ADP) and MP-ADP (-0.147), PFA-COL/ADP and PLW-ADP (0.089), MP-ADP and PLW-ADP (0.039), PFA-COL/ADP and MP-COL (-0.039), and between PFA-COL/ADP and PLW-COL (-0.067). Nonetheless, kappa values for the same assay principle with a different agonist were slightly higher between PFA-COL/ADP and PFA-COL/EPI (0.352), MP-ADP and MP-COL (0.235), and between PLW-ADP and PLW-COL (0.247). The range of percent agreement values was 38.7% to 73.8%. Therefore, various measurements of platelet function by more than one method were needed to obtain a reliable interpretation of platelet function considering low kappa coefficient and modest percent agreement rates among 3 different platelet function tests.
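Cohen's kappa corrects raw percent agreement for agreement expected by chance, which is why the two measures can diverge as reported above. A minimal sketch for two dichotomized test results follows; the data below are made up for illustration:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary classifications
    (1 = abnormal, 0 = normal)."""
    n = len(a)
    # Observed agreement: fraction of cases both tests classify alike.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each test's marginal positive rate.
    p_a1, p_b1 = sum(a) / n, sum(b) / n
    pe = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (po - pe) / (1 - pe)

# Hypothetical dichotomized results from two platelet function tests.
test_x = [1, 1, 0, 0, 1, 0, 0, 0]
test_y = [1, 0, 0, 1, 1, 0, 0, 1]
kappa = cohens_kappa(test_x, test_y)
agreement = sum(x == y for x, y in zip(test_x, test_y)) / len(test_x)
```

Here the raw agreement is 62.5% yet kappa is only 0.25, mirroring the study's pattern of modest percent agreement alongside low kappa coefficients.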
Kostuj, Tanja; Stief, Felix; Hartmann, Kirsten Anna; Schaper, Katharina; Arabmotlagh, Mohammad; Baums, Mike H; Meurer, Andrea; Krummenauer, Frank; Lieske, Sebastian
2018-01-01
Objective After cross-cultural adaption for the German translation of the Ankle-Hindfoot Scale of the American Orthopaedic Foot and Ankle Society (AOFAS-AHS) and agreement analysis with the Foot Function Index (FFI-D), the following gait analysis study using the Oxford Foot Model (OFM) was carried out to show which of the two scores better correlates with objective gait dysfunction. Design and participants Results of the AOFAS-AHS and FFI-D, as well as data from three-dimensional gait analysis were collected from 20 patients with mild to severe ankle and hindfoot pathologies. Kinematic and kinetic gait data were correlated with the results of the total AOFAS scale and FFI-D as well as the results of those items representing hindfoot function in the AOFAS-AHS assessment. With respect to the foot disorders in our patients (osteoarthritis and prearthritic conditions), we correlated the total range of motion (ROM) in the ankle and subtalar joints as identified by the OFM with values identified during clinical examination ‘translated’ into score values. Furthermore, reduced walking speed, reduced step length and reduced maximum ankle power generation during push-off were taken into account and correlated to gait abnormalities described in the scores. An analysis of correlations with CIs between the FFI-D and the AOFAS-AHS items and the gait parameters was performed by means of the Jonckheere-Terpstra test; furthermore, exploratory factor analysis was applied to identify common information structures and thereby redundancy in the FFI-D and the AOFAS-AHS items. Results Objective findings for hindfoot disorders, namely a reduced ROM, in the ankle and subtalar joints, respectively, as well as reduced ankle power generation during push-off, showed a better correlation with the AOFAS-AHS total score—as well as AOFAS-AHS items representing ROM in the ankle, subtalar joints and gait function—compared with the FFI-D score. 
Factor analysis, however, could not identify FFI-D items consistently related to these three indicator parameters (pain, disability and function) found in the AOFAS-AHS. Furthermore, factor analysis did not support stratification of the FFI-D into two subscales. Conclusions The AOFAS-AHS showed a good agreement with objective gait parameters and is therefore better suited to evaluate disability and functional limitations of patients suffering from foot and ankle pathologies compared with the FFI-D. PMID:29626046
NASA Astrophysics Data System (ADS)
Xiao, Jing-Lin
2016-11-01
We study the ground state energy and the mean number of LO phonons of the strong-coupling polaron in a RbCl quantum pseudodot (QPD) with a hydrogen-like impurity at the center. The variations of the ground state energy and the mean number of LO phonons with the temperature and the strength of the Coulombic impurity potential are obtained by employing the variational method of Pekar type and quantum statistical theory (VMPTQST). Our numerical results show that (1) the absolute value of the ground state energy increases (decreases) with increasing temperature in the lower (higher) temperature regime, (2) the mean number of LO phonons increases with increasing temperature, and (3) the absolute value of the ground state energy and the mean number of LO phonons are increasing functions of the strength of the Coulombic impurity potential.
NASA Astrophysics Data System (ADS)
Zheng, Qin; Yang, Zubin; Sha, Jianxin; Yan, Jun
2017-02-01
In predictability problem research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied in estimation of the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impacts of the nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a large probability of producing local CNOPs, hence making a false estimation of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, one kind of intelligent algorithm, is introduced to solve this problem. The method using PSO to compute CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a heuristic search algorithm based on the population, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. 
The results show that, as the forecast time increases, the estimate obtained with PSO-CNOP remains closer to the true value than that obtained with ADJ-CNOP.
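The population-based search described above can be sketched with a minimal particle swarm optimizer applied to the two-dimensional Ikeda map. The initial state, constraint radius, step counts, and PSO coefficients below are illustrative assumptions, not the study's settings:

```python
import math
import random

def ikeda_step(x, y, u=0.9):
    # One iteration of the two-dimensional Ikeda map.
    t = 0.4 - 6.0 / (1.0 + x * x + y * y)
    return (1.0 + u * (x * math.cos(t) - y * math.sin(t)),
            u * (x * math.sin(t) + y * math.cos(t)))

def prediction_error(dx, dy, x0=0.5, y0=0.5, steps=15):
    # Objective to maximize: distance at the prediction time between
    # the perturbed and the unperturbed trajectories.
    xa, ya = x0, y0
    xb, yb = x0 + dx, y0 + dy
    for _ in range(steps):
        xa, ya = ikeda_step(xa, ya)
        xb, yb = ikeda_step(xb, yb)
    return math.hypot(xb - xa, yb - ya)

def pso_cnop(radius=0.1, n_particles=30, iters=100, seed=0):
    # Particle swarm search for the CNOP within ||(dx, dy)|| <= radius.
    rng = random.Random(seed)

    def clip(p):
        r = math.hypot(p[0], p[1])
        return p if r <= radius else (p[0] * radius / r, p[1] * radius / r)

    pos = [clip((rng.uniform(-radius, radius), rng.uniform(-radius, radius)))
           for _ in range(n_particles)]
    vel = [(0.0, 0.0)] * n_particles
    pbest, pval = list(pos), [prediction_error(*p) for p in pos]
    g = max(range(n_particles), key=pval.__getitem__)
    gbest, gval = pbest[g], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = tuple(w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
                           for v, pb, gb, x in zip(vel[i], pbest[i], gbest, pos[i]))
            pos[i] = clip(tuple(x + v for x, v in zip(pos[i], vel[i])))
            f = prediction_error(*pos[i])
            if f > pval[i]:
                pbest[i], pval[i] = pos[i], f
                if f > gval:
                    gbest, gval = pos[i], f
    return gbest, gval

best, val = pso_cnop()   # best perturbation lies on or inside the constraint ball
```

Because every candidate is projected back onto the constraint ball and the swarm shares its global best, the search can escape the local maxima that trap a single gradient (adjoint) trajectory.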
The visual perception of metal.
Todd, James T; Norman, J Farley
2018-03-01
The present research was designed to examine how the presence or absence of ambient light influences the appearance of metal. The stimuli depicted three possible objects that were illuminated by three possible patterns of illumination. These were generated by a single point light source, two rectangular area lights, or projecting light onto a translucent white box that contained the object (and the camera) so that the object would be illuminated by ambient light in all directions. The materials were simulated using measured parameters of chrome with four different levels of roughness. Observers rated the metallic appearance and shininess of each depicted object using two sliders. The highest rated appearance of metal and shininess occurred for the surfaces with the lowest roughness in the ambient illumination condition, and these ratings dropped systematically as the roughness was increased. For the objects illuminated by point or area lights, the appearance of metal and shininess were significantly less than in the ambient conditions for the lowest roughness value, and significantly greater than in the ambient condition for intermediate values of roughness. We also included a control condition depicting objects with a shiny plastic reflectance function that had both diffuse and specular components. These objects were rated as highly shiny but they did not appear metallic. A theoretical hypothesis is proposed that the defining characteristic of metal (as opposed to black plastic) is the presence of specular sheen over most of the visible surface area.
NASA Technical Reports Server (NTRS)
English, J. M.; Smith, J. L.; Lifson, M. W.
1978-01-01
The objectives of this study are: (1) to determine a unified methodological framework for the comparison of intercity passenger and freight transportation systems; (2) to review the attributes of existing and future transportation systems for the purpose of establishing measures of comparison. These objectives were made more specific to include: (1) development of a methodology for comparing long term transportation trends arising from implementation of various R&D programs; (2) definition of value functions and attribute weightings needed for further transportation goals.
Shape Optimization of Rubber Bushing Using Differential Evolution Algorithm
2014-01-01
The objective of this study is to design a rubber bushing with the desired stiffness characteristics in order to achieve the ride quality of the vehicle. A differential evolution algorithm-based approach is developed to optimize the rubber bushing by integrating a finite element code, running in batch mode, to compute the objective function values for each generation. Two case studies were given to illustrate the application of the proposed approach. Optimum shape parameters of a 2D bushing model were determined by shape optimization using the differential evolution algorithm. PMID:25276848
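The optimization loop described above can be sketched with a minimal DE/rand/1/bin implementation. Here a made-up analytic stiffness formula stands in for the batch-mode finite element evaluation, and the shape variables, bounds, and 250 N/mm target are purely illustrative:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    # Minimal DE/rand/1/bin minimizer over box bounds.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:              # crossover
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:                                     # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

def stiffness_error(shape):
    # Stand-in for the FEM call: a toy radial-stiffness model of a bushing
    # parameterized by inner and outer radius (hypothetical formula).
    r_in, r_out = shape
    k = 1000.0 * (r_out - r_in) / r_out        # toy stiffness, N/mm
    return (k - 250.0) ** 2                    # squared miss of a 250 N/mm target

best_shape, err = differential_evolution(stiffness_error,
                                         [(5.0, 15.0), (16.0, 40.0)])
```

In the study the per-candidate evaluation is a batch finite element run rather than an analytic formula; the DE machinery around it is unchanged.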
Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.
Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen
2016-02-01
The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low rank matrix recovery with its applications in image recovery and signal processing. However, solving the nuclear norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then, we propose to solve the problem by an iteratively re-weighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that the IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN enhances the low rank matrix recovery compared with the state-of-the-art convex algorithms.
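The closed-form inner step can be illustrated on the singular values alone. Taking the log surrogate g(x) = log(x + ε) as one example of the nonconvex surrogates the framework admits, each weight is the surrogate's gradient at the current singular value, and the update is a weighted soft-thresholding; the values of μ and ε below are arbitrary:

```python
def surrogate_weight(sigma, eps=1e-2):
    # Gradient of the nonconvex log surrogate g(x) = log(x + eps):
    # smaller singular values receive larger weights, so they are
    # penalized more heavily than under the plain nuclear norm.
    return 1.0 / (sigma + eps)

def weighted_sv_threshold(singular_values, mu):
    # One IRNN inner step: fix the weights at the current singular
    # values, then soft-threshold each value by mu * weight. Because the
    # weights are non-increasing in sigma, this is the closed-form
    # solution of the weighted singular value thresholding problem.
    weights = [surrogate_weight(s) for s in singular_values]
    return [max(s - mu * w, 0.0) for s, w in zip(singular_values, weights)]

shrunk = weighted_sv_threshold([3.0, 1.0, 0.05], mu=0.1)
# large singular values are nearly preserved; the smallest collapses to zero
```

In a full IRNN iteration this thresholding is applied to the singular values of a gradient-step matrix, with the singular vectors reused to rebuild the low rank estimate.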
Memory Age Identity as a predictor of cognitive function in the elderly: A 2-year follow-up study.
Chang, Ki Jung; Hong, Chang Hyung; Lee, Yun Hwan; Chung, Young Ki; Lim, Ki Young; Noh, Jai Sung; Kim, Jin-Ju; Kim, Haena; Kim, Hyun-Chung; Son, Sang Joon
2018-01-01
There is growing interest in finding psychosocial predictors related to cognitive function. In our previous research, we conducted a cross-sectional study on memory age identity (MAI) and found that MAI might be associated with objective cognitive performance in the non-cognitively impaired elderly. A longitudinal study was conducted to better understand the importance of MAI as a psychosocial predictor related to objective cognitive function. Data obtained from 1345 Korean subjects aged 60 years and above were analyzed. During the two-year follow-up, subjective memory age was assessed on three occasions using the following question: "How old do you feel based on your memory?" The discrepancy between subjective memory age and chronological age was then calculated; we defined this value as 'memory age identity (MAI)'. A generalized estimating equation (GEE) model was then fitted to estimate the relationship between MAI and the Korean version of the Mini-Mental State Examination (K-MMSE) score over the 2 years of the study. MAI was found to significantly (β = -0.03, p < 0.0001) predict objective cognitive performance in the non-cognitively impaired elderly. MAI may be a potential psychosocial predictor of objective cognitive performance in the non-cognitively impaired elderly. Copyright © 2017 Elsevier B.V. All rights reserved.
Valuation of opportunity costs by rats working for rewarding electrical brain stimulation.
Solomon, Rebecca Brana; Conover, Kent; Shizgal, Peter
2017-01-01
Pursuit of one goal typically precludes simultaneous pursuit of another. Thus, each exclusive activity entails an "opportunity cost:" the forgone benefits from the next-best activity eschewed. The present experiment estimates, in laboratory rats, the function that maps objective opportunity costs into subjective ones. In an operant chamber, rewarding electrical brain stimulation was delivered when the cumulative time a lever had been depressed reached a criterion duration. The value of the activities forgone during this duration is the opportunity cost of the electrical reward. We determined which of four functions best describes how objective opportunity costs, expressed as the required duration of lever depression, are translated into their subjective equivalents. The simplest account is the identity function, which equates subjective and objective opportunity costs. A variant of this function called the "sigmoidal-slope function," converges on the identity function at longer durations but deviates from it at shorter durations. The sigmoidal-slope function has the form of a hockey stick. The flat "blade" denotes a range over which opportunity costs are subjectively equivalent; these durations are too short to allow substitution of more beneficial activities. The blade extends into an upward-curving portion over which costs become discriminable and finally into the straight "handle," over which objective and subjective costs match. The two remaining functions are based on hyperbolic and exponential temporal discounting, respectively. The results are best described by the sigmoidal-slope function. That this is so suggests that different principles of intertemporal choice are involved in the evaluation of time spent working for a reward or waiting for its delivery. The subjective opportunity-cost function plays a key role in the evaluation and selection of goals. 
An accurate description of its form and parameters is essential to successful modeling and prediction of instrumental performance and reward-related decision making.
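The hockey-stick shape described above can be visualized with a softplus-style curve that is flat (the "blade") below a bend point and merges into the identity line (the "handle") above it. The functional form and parameter values here only illustrate the qualitative shape; they are not the authors' fitted model:

```python
import math

def sigmoidal_slope(d, bend=2.0, k=4.0):
    # Subjective opportunity cost of an objective duration d (seconds).
    # Below ~bend the curve is flat: such short durations are subjectively
    # equivalent because no substitute activity fits into them. Above the
    # bend, subjective cost approaches the identity line.
    z = k * (d - bend)
    if z > 30.0:       # asymptotic 'handle': subjective cost == objective cost
        return d
    return bend + math.log1p(math.exp(z)) / k
```

Under this sketch, required durations of 0.1 s and 1 s feel nearly identical, while a 10 s requirement is valued at essentially its full objective cost.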
Regional Management of an Aquifer for Mining Under Fuzzy Environmental Objectives
NASA Astrophysics Data System (ADS)
BogáRdi, IstváN.; BáRdossy, AndráS.; Duckstein, Lucien
1983-12-01
A methodology is developed for the dynamic multiobjective management of a multipurpose regional aquifer. In a case study of bauxite mining in Western Hungary, ore deposits are often under the piezometric level of a karstic aquifer, while this same aquifer also provides recharge flows for thermal springs. N + 1 objectives are to be minimized, the first one being total discounted cost of control by dewatering or grouting; the other N objectives consist of the flow of thermal springs at N control points. However, there is no agreement among experts as to a set of numerical values that would constitute a "sound environment"; for this reason a fuzzy set analysis is used, and the N environmental objectives are combined into a single fuzzy membership function. The constraints include ore availability, various capacities, and the state transition function that describes the behavior of both piezometric head and underground flow. The model is linearized and solved as a biobjective dynamic program by using multiobjective compromise programming. A numerical example with N = 2 appears to lead to realistic control policies. Extension of the model to the nonlinear case is discussed.
Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems
NASA Astrophysics Data System (ADS)
Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.
2017-01-01
A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Because of the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that, by the Rayleigh criterion, the pair appears as a single, unresolved object in images. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we will use this method to identify binary BD systems in the Hubble Space Telescope archive. The method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method that compares model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges, and other possible uses in this poster.
Konova, Anna B.; Moeller, Scott J.; Tomasi, Dardo; Parvaz, Muhammad A.; Alia-Klein, Nelly; Volkow, Nora D.; Goldstein, Rita Z.
2012-01-01
Abnormalities in frontostriatal systems are thought to be central to the pathophysiology of addiction, and may underlie maladaptive processing of the highly generalizable reinforcer, money. Although abnormal frontostriatal structure and function have been observed in individuals addicted to cocaine, it is less clear how individual variability in brain structure is associated with brain function to influence behavior. Our objective was to examine frontostriatal structure and neural processing of money value in chronic cocaine users and closely matched healthy controls. A reward task that manipulated different levels of money was used to isolate neural activity associated with money value. Gray matter volume measures were used to assess frontostriatal structure. Our results indicated that cocaine users had an abnormal money value signal in the sensorimotor striatum (right putamen/globus pallidus) which was negatively associated with accuracy adjustments to money and was more pronounced in individuals with more severe use. In parallel, group differences were also observed in both function and gray matter volume of the ventromedial prefrontal cortex; in the cocaine users, the former was directly associated with response to money in the striatum. These results provide strong evidence for abnormalities in the neural mechanisms of valuation in addiction and link these functional abnormalities with deficits in brain structure. In addition, as value signals represent acquired associations, their abnormal processing in the sensorimotor striatum, a region centrally implicated in habit formation, could signal disadvantageous associative learning in cocaine addiction. PMID:22775285
Roca, Patricia; Mulas, Fernando; Gandia, Rubén; Ortiz-Sánchez, Pedro; Abad, Luis
2013-02-22
P300 evoked potentials and the analysis of executive functions have shown their utility in the monitoring of patients with symptoms of attention deficit hyperactivity disorder (ADHD). Neuropsychological profiles and P300 evoked potentials were analysed for two groups of children with ADHD treated with atomoxetine and methylphenidate, respectively. Correlations between P300 and the selected neuropsychological parameters were studied, and the differences between baseline values and the 1-year follow-up were analysed. Two groups were formed: a group of 22 children with ADHD in the atomoxetine condition, and a group of 24 children with ADHD in the methylphenidate condition. The results show a global improvement of all the parameters, in terms of executive function and P300 values, in both the atomoxetine and the methylphenidate groups. Executive functions and P300 evoked potentials reflect underlying processing and are very useful in clinical practice. This exploratory study shows the importance of designing personalized treatments based on objective variables.
Hoffman, Donald D.; Prakash, Chetan
2014-01-01
Current models of visual perception typically assume that human vision estimates true properties of physical objects, properties that exist even if unperceived. However, recent studies of perceptual evolution, using evolutionary games and genetic algorithms, reveal that natural selection often drives true perceptions to extinction when they compete with perceptions tuned to fitness rather than truth: Perception guides adaptive behavior; it does not estimate a preexisting physical truth. Moreover, shifting from evolutionary biology to quantum physics, there is reason to disbelieve in preexisting physical truths: Certain interpretations of quantum theory deny that dynamical properties of physical objects have definite values when unobserved. In some of these interpretations the observer is fundamental, and wave functions are compendia of subjective probabilities, not preexisting elements of physical reality. These two considerations, from evolutionary biology and quantum physics, suggest that current models of object perception require fundamental reformulation. Here we begin such a reformulation, starting with a formal model of consciousness that we call a “conscious agent.” We develop the dynamics of interacting conscious agents, and study how the perception of objects and space-time can emerge from such dynamics. We show that one particular object, the quantum free particle, has a wave function that is identical in form to the harmonic functions that characterize the asymptotic dynamics of conscious agents; particles are vibrations not of strings but of interacting conscious agents. This allows us to reinterpret physical properties such as position, momentum, and energy as properties of interacting conscious agents, rather than as preexisting physical truths. We sketch how this approach might extend to the perception of relativistic quantum objects, and to classical objects of macroscopic scale. PMID:24987382
Henrie, Adam M; Wittstrom, Kristina; Delu, Adam; Deming, Paulina
2015-09-01
The objective of this study was to examine indicators of liver function and inflammation for prognostic value in predicting outcomes to yttrium-90 radioembolization (RE). In a retrospective analysis, markers of liver function and inflammation, biomarkers required to stage liver function and inflammation, and data regarding survival, tumor response, and progression after RE were recorded. Univariate regression models were used to investigate the prognostic value of liver biomarkers in predicting outcome to RE as measured by survival, tumor progression, and radiographic and biochemical tumor response. Markers from all malignancy types were analyzed together. A subgroup analysis was performed on markers from patients with metastatic colorectal cancer. A total of 31 patients received RE from 2004 to 2014. Median survival after RE for all malignancies combined was 13.6 months (95% CI: 6.7-17.6 months). Results from an exploratory analysis of patient data suggest that liver biomarkers, including albumin concentrations, international normalized ratio, bilirubin concentrations, and the model for end-stage liver disease score, possess prognostic value in predicting outcomes to RE.
Richardson, Jeff; McKie, John
2005-01-01
Economics is commonly defined in terms of the relationship between people's unlimited wants and society's scarce resources. The definition implies a central role for an understanding of what people want, i.e. their objectives. This, in turn, suggests an important role for both empirical research into people's objectives and debate about the acceptability of the objectives. In contrast with this expectation, economics has avoided these issues by the adoption of an orthodoxy that imposes objectives. However evidence suggests, at least in the health sector, that people do not have the simple objectives assumed by economic theory. Amartya Sen has advocated a shift from a focus on "utility" to a focus on "capabilities" and "functionings" as a way of overcoming the shortcomings of welfarism. However, the practicality of Sen's account is threatened by the range of possible "functionings", by the lack of guidance about how they should be weighted, and by suspicions that they do not capture the full range of objectives people appear to value. We argue that "empirical ethics", an emerging approach in the health sector, provides important lessons on overcoming these problems. Moreover, it is an ethically defensible methodology, and yields practical results that can assist policy makers in the allocation of resources.
ERIC Educational Resources Information Center
Ware, Iris
2017-01-01
The value proposition for learning and talent development (LTD) is often challenged due to human resources' inability to demonstrate meaningful outcomes in relation to organizational needs and return-on-investment. The primary role of human resources (HR) and the learning and talent development (LTD) function is to produce meaningful outcomes to…
Analysis of Docudrama Techniques and Negotiating One's Identity in David Edgar's "Pentecost"
ERIC Educational Resources Information Center
Al Sharadgeh, Samer Ziyad
2018-01-01
Edgar manages to invert the subordinate function of generally accepted objective indicators of membership of a particular national group--language, religion, common history, and territory--into the essential mode of imperative distinction shaping the unique national identity. In other words, it is the fresco and the value assigned to it that…
Learning Engines - A Functional Object Model for Developing Learning Resources for the WWW.
ERIC Educational Resources Information Center
Fritze, Paul; Ip, Albert
The Learning Engines (LE) model, developed at the University of Melbourne (Australia), supports the integration of rich learning activities into the World Wide Web. The model is concerned with the practical design, educational value, and reusability of software components. The model is focused on the academic teacher who is in the best position to…
The optimal design of UAV wing structure
NASA Astrophysics Data System (ADS)
Długosz, Adam; Klimek, Wiktor
2018-01-01
The paper presents an optimal design of a UAV wing made of composite materials. The aim of the optimization is to improve strength and stiffness together with a reduction of the weight of the structure. Three different types of functionals, which depend on stress, stiffness and the total mass, are defined. The paper presents an application of an in-house implementation of an evolutionary multi-objective algorithm to the optimization of the UAV wing structure. Values of the functionals are calculated on the basis of results obtained from numerical simulations. A numerical FEM model consisting of different composite materials is created. The adequacy of the numerical model is verified against results obtained from an experiment performed on a tensile testing machine. Examples of multi-objective optimization by means of a Pareto-optimal set of solutions are presented.
Structural optimization via a design space hierarchy
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Mathematical programming techniques provide a general approach to automated structural design. An iterative method is proposed in which design is treated as a hierarchy of subproblems, one being locally constrained and the other being locally unconstrained. It is assumed that the design space is locally convex in the case of good initial designs and that the objective and constraint functions are continuous, with continuous first derivatives. A general design algorithm is outlined for finding a move direction which will decrease the value of the objective function while maintaining a feasible design. The case of one-dimensional search in a two-variable design space is discussed. Possible applications are discussed. A major feature of the proposed algorithm is its application to problems which are inherently ill-conditioned, such as design of structures for optimum geometry.
NEUROBIOLOGY OF ECONOMIC CHOICE: A GOOD-BASED MODEL
Padoa-Schioppa, Camillo
2012-01-01
Traditionally the object of economic theory and experimental psychology, economic choice recently became a lively research focus in systems neuroscience. Here I summarize the emerging results and I propose a unifying model of how economic choice might function at the neural level. Economic choice entails comparing options that vary on multiple dimensions. Hence, while choosing, individuals integrate different determinants into a subjective value; decisions are then made by comparing values. According to the good-based model, the values of different goods are computed independently of one another, which implies transitivity. Values are not learned as such, but rather computed at the time of choice. Most importantly, values are compared within the space of goods, independent of the sensori-motor contingencies of choice. Evidence from neurophysiology, imaging and lesion studies indicates that abstract representations of value exist in the orbitofrontal and ventromedial prefrontal cortices. The computation and comparison of values may thus take place within these regions. PMID:21456961
Context recognition for a hyperintensional inference machine
NASA Astrophysics Data System (ADS)
Duží, Marie; Fait, Michal; Menšík, Marek
2017-07-01
The goal of this paper is to introduce the algorithm of context recognition in the functional programming language TIL-Script, which is a necessary condition for the implementation of the TIL-Script inference machine. The TIL-Script language is an operationally isomorphic syntactic variant of Tichý's Transparent Intensional Logic (TIL). From the formal point of view, TIL is a hyperintensional, partial, typed λ-calculus with procedural semantics. Hyperintensional, because TIL λ-terms denote procedures (defined as TIL constructions) producing set-theoretic functions rather than the functions themselves; partial, because TIL is a logic of partial functions; and typed, because all the entities of TIL ontology, including constructions, receive a type within a ramified hierarchy of types. These features make it possible to distinguish three levels of abstraction at which TIL constructions operate. At the highest hyperintensional level the object to operate on is a construction (though a higher-order construction is needed to present this lower-order construction as an object of predication). At the middle intensional level the object to operate on is the function presented, or constructed, by a construction, while at the lowest extensional level the object to operate on is the value (if any) of the presented function. Thus a necessary condition for the development of an inference machine for the TIL-Script language is recognizing a context in which a construction occurs, namely extensional, intensional and hyperintensional context, in order to determine the type of an argument at which a given inference rule can be properly applied. As a result, our logic does not flout logical rules of extensional logic, which makes it possible to develop a hyperintensional inference machine for the TIL-Script language.
NASA Astrophysics Data System (ADS)
Howitt, R. E.
2016-12-01
Hydro-economic models have been used to analyze optimal supply management and groundwater use for the past 25 years. They are characterized by an objective function that usually maximizes economic measures such as consumer and producer surplus subject to hydrologic equations of motion or water distribution systems. The hydrologic and economic components are sometimes fully integrated; alternatively, they may be linked through an iterative interactive process. Environmental considerations have been included in hydro-economic models as inequality constraints. Representing environmental requirements as constraints is a rigid approximation of the range of management alternatives that could be used to implement environmental objectives. The next generation of hydro-economic models, currently being developed, requires that environmental alternatives be represented by continuous or semi-continuous functions relating the water resources allocated to the environment to the probabilities of achieving environmental objectives. These functions will be generated by process models of environmental and biological systems, which are now advanced to the state that they can realistically represent environmental systems and interact flexibly with economic models. Examples are crop growth models, climate models, and biological models of forest, fish, and fauna systems. These process models can represent environmental outcomes in a form that is similar to economic production functions. When combined with economic models, the interacting process models can reproduce a range of trade-offs between economic and environmental objectives, and thus optimize the social value of many water and environmental resources. Some examples of this next generation of hydro-enviro-economic models are reviewed. In these models, implicit production functions for environmental goods are combined with hydrologic equations of motion and economic response functions.
We discuss models that show interaction between environmental goods and agricultural production, and others that address alternative climate change policies or habitat provision.
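The shift from a hard constraint to a continuous environmental production function can be sketched with a one-dimensional water-allocation toy model. All functional forms and numbers here (total water, logistic success curve, monetary values) are invented for illustration:

```python
import math

TOTAL = 100.0    # total available water (arbitrary units)

def ag_profit(w):
    # Concave agricultural benefit of water use.
    return 50.0 * math.log1p(w)

def env_success_prob(e):
    # Continuous 'environmental production function': probability of
    # meeting an instream-flow objective as a function of water left in
    # the river. This replaces a rigid minimum-flow constraint.
    return 1.0 / (1.0 + math.exp(-0.15 * (e - 40.0)))

ENV_VALUE = 120.0   # social value of achieving the environmental objective

def social_value(e):
    return ag_profit(TOTAL - e) + ENV_VALUE * env_success_prob(e)

# Grid search over environmental allocations: the optimum trades off the
# marginal agricultural loss against the marginal gain in success probability.
best_e = max((i * 0.5 for i in range(201)), key=social_value)
```

Unlike a constraint formulation, the optimum here can land on an interior allocation where the probability of environmental success is high but not forced to one, mirroring the trade-off curves the next-generation models are meant to produce.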
Subgrid spatial variability of soil hydraulic functions for hydrological modelling
NASA Astrophysics Data System (ADS)
Kreye, Phillip; Meon, Günter
2016-07-01
State-of-the-art hydrological applications require a process-based, spatially distributed hydrological model. Runoff characteristics must be well reproduced by the model. At the same time, the model should be able to describe the processes at a subcatchment scale in a physically credible way. The objective of this study is to present a robust procedure to generate various sets of parameterisations of soil hydraulic functions for the description of soil heterogeneity at a subgrid scale. Relations between Rosetta-generated values of saturated hydraulic conductivity (Ks) and van Genuchten's parameters of the soil hydraulic functions were statistically analysed. A universal function valid for the complete bandwidth of Ks values could not be found. After concentrating on natural texture classes, however, strong correlations were identified for all parameters. The obtained regression results were used to parameterise sets of hydraulic functions for each soil class. The methodology presented in this study is applicable over a wide range of spatial scales and does not need input data from field studies. The developments were implemented into a hydrological modelling system.
NASA Astrophysics Data System (ADS)
Darvishvand, Leila; Kamkari, Babak; Kowsary, Farshad
2018-03-01
In this article, a new hybrid method based on the combination of the genetic algorithm (GA) and artificial neural network (ANN) is developed to optimize the design of three-dimensional (3-D) radiant furnaces. A 3-D irregular shape design body (DB) heated inside a 3-D radiant furnace is considered as a case study. The uniform thermal conditions on the DB surfaces are obtained by minimizing an objective function. An ANN is developed to predict the objective function value which is trained through the data produced by applying the Monte Carlo method. The trained ANN is used in conjunction with the GA to find the optimal design variables. The results show that the computational time using the GA-ANN approach is significantly less than that of the conventional method. It is concluded that the integration of the ANN with GA is an efficient technique for optimization of the radiant furnaces.
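The surrogate idea can be sketched end to end: a tiny one-hidden-layer network is trained on precomputed samples of an invented two-variable "non-uniformity" objective (standing in for the Monte Carlo radiation computation), and a simple GA then searches the cheap surrogate instead of the simulator. Every function, parameter, and value here is an illustrative assumption:

```python
import math
import random

rng = random.Random(0)

def furnace_objective(x, y):
    # Stand-in for the expensive Monte Carlo evaluation: heating
    # non-uniformity as an invented smooth function of two design variables.
    return (x - 0.3) ** 2 + (y - 0.7) ** 2

# 1. Sample designs and evaluate the "simulator" once, up front.
samples = [(rng.random(), rng.random()) for _ in range(200)]
targets = [furnace_objective(x, y) for x, y in samples]

# 2. Train a one-hidden-layer tanh network on the samples (plain SGD).
H = 8
w1 = [[rng.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [rng.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0

def ann(x, y):
    hidden = [math.tanh(w1[j][0] * x + w1[j][1] * y + b1[j]) for j in range(H)]
    return sum(w2[j] * h for j, h in enumerate(hidden)) + b2

lr = 0.05
for _ in range(400):
    for (x, y), t in zip(samples, targets):
        hidden = [math.tanh(w1[j][0] * x + w1[j][1] * y + b1[j]) for j in range(H)]
        err = sum(w2[j] * h for j, h in enumerate(hidden)) + b2 - t
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - hidden[j] ** 2)  # backprop to layer 1
            w2[j] -= lr * err * hidden[j]
            w1[j][0] -= lr * grad_h * x
            w1[j][1] -= lr * grad_h * y
            b1[j] -= lr * grad_h
        b2 -= lr * err

# 3. GA searches the cheap surrogate instead of the simulator.
def ga(fitness, pop_size=30, gens=60):
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(*p))
        elite = pop[:pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            (x1, y1), (x2, y2) = rng.sample(elite, 2)    # crossover + mutation
            cx = (x1 + x2) / 2.0 + rng.gauss(0.0, 0.05)
            cy = (y1 + y2) / 2.0 + rng.gauss(0.0, 0.05)
            children.append((min(max(cx, 0.0), 1.0), min(max(cy, 0.0), 1.0)))
        pop = elite + children
    return min(pop, key=lambda p: fitness(*p))

best = ga(ann)
```

The computational saving comes from step 3: each GA generation queries only the trained network, so the expensive simulator is run a fixed number of times regardless of how long the search continues.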
Interpretations of Probability in Quantum Mechanics: A Case of "Experimental Metaphysics"
NASA Astrophysics Data System (ADS)
Hellman, Geoffrey
After reviewing paradigmatic cases of "experimental metaphysics" that base inferences against local realism and determinism on experimental tests of Bell's theorem (and its successors), we concentrate on clarifying the meaning and status of "objective probability" in quantum mechanics. The terms "objective" and "subjective" are found to be ambiguous and inadequate, masking crucial differences turning on the question of what the numerical values of probability functions measure versus the question of the nature of the "events" on which such functions are defined. This leads naturally to a 2×2 matrix of types of interpretations, which are then illustrated with salient examples. (Of independent interest are the splitting of the "Copenhagen interpretation" into "objective" and "subjective" varieties along one of the dimensions and the splitting of Bohmian hidden variables from (other) modal interpretations along that same dimension.) It is then explained why Everett interpretations are difficult to categorize in these terms. Finally, we argue that Bohmian mechanics does not seriously threaten the experimental-metaphysical case for ultimate randomness and purely physical probabilities.
Kim, Hyoung F.; Hikosaka, Okihide
2013-01-01
A goal-directed action aiming at an incentive outcome, if repeated, becomes a skill that may be initiated automatically. We now report that the tail of the caudate nucleus (CDt) may serve to control a visuomotor skill. Monkeys looked at many fractal objects, half of which were always associated with a large reward (high-valued objects) and the other half with a small reward (low-valued objects). After several daily sessions, they developed a gaze bias, looking at high-valued objects even when no reward was associated. CDt neurons developed a response bias, typically showing stronger responses to high-valued objects. In contrast, their responses showed no change when object values were reversed frequently, although monkeys showed a strong gaze bias, looking at high-valued objects in a goal-directed manner. The biased activity of CDt neurons may be transmitted to the oculomotor region so that animals can choose high-valued objects automatically based on stable reward experiences. PMID:23825426
[Environmental impact assessment of the land use change in china based on ecosystem service value].
Ran, Sheng-hong; Lü, Chang-he; Jia, Ke-jing; Qi, Yong-hua
2006-10-01
The environmental impact of land use change is long-term and cumulative, and changes in ecosystem services result from land use change. Therefore, ecosystem service function change is the key object in the environmental impact assessment of land use change. According to the specific situation of China, this paper adjusted the unit ecosystem service values of different land use types. On this basis, the change in ecosystem service value of different provinces in China resulting from land use change since the implementation of the last land use plan (1997-2010) was analyzed. The results show that the ecosystem service value in China increased 0.91% from 1996 to 2004. Among the provinces, Tianjin showed the fastest increase in ecosystem service value (5.69% from 1996 to 2004), while Shanghai showed the fastest decrease (9.79%). Furthermore, the changes in 17 types of ecosystem services were analyzed. Among them, the climate regulation function was enhanced by 3.43% from 1996 to 2004, while the biology resource control function was weakened by 2.26% in this period. The results also indicate that the increase in the area of water surface and forest is the main reason why the ecosystem service value in China increased in that period.
Nijran, Kuldip S; Houston, Alex S; Fleming, John S; Jarritt, Peter H; Heikkinen, Jari O; Skrypniuk, John V
2014-07-01
In this second UK audit of quantitative parameters obtained from renography, phantom simulations were used in cases in which the 'true' values could be estimated, allowing the accuracy of the parameters measured to be assessed. A renal physical phantom was used to generate a set of three phantom simulations (six kidney functions) acquired on three different gamma camera systems. A total of nine phantom simulations and three real patient studies were distributed to UK hospitals participating in the audit. Centres were asked to provide results for the following parameters: relative function and time-to-peak (whole kidney and cortical region). As with previous audits, a questionnaire collated information on methodology. Errors were assessed as the root mean square deviation from the true value. Sixty-one centres responded to the audit, with some hospitals providing multiple sets of results. Twenty-one centres provided a complete set of parameter measurements. Relative function and time-to-peak showed a reasonable degree of accuracy and precision in most UK centres. The overall average root mean square deviations of the results from the true value were 7.7% for the whole-kidney time-to-peak measurement and 4.5% for the relative function measurement. These results showed a measure of consistency in the relative function and time-to-peak that was similar to the results reported in a previous renogram audit by our group. Analysis of audit data suggests a reasonable degree of accuracy in the quantification of renography function using relative function and time-to-peak measurements. However, it is reasonable to conclude that the objectives of the audit could not be fully realized because of the limitations of the mechanical phantom in providing true values for renal parameters.
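The error metric used in the audit, the root mean square deviation of reported values from a known phantom value, can be sketched as follows (the function name and the toy numbers are illustrative, not audit data):

```python
import math

def rms_deviation(measured, true_value):
    """Root mean square deviation of a set of measurements from a known true value."""
    return math.sqrt(sum((m - true_value) ** 2 for m in measured) / len(measured))

# Hypothetical relative-function results (%) reported by several centres for a
# phantom whose true relative function is 50%.
reported = [50.0, 54.0, 46.0, 50.0]
error = rms_deviation(reported, 50.0)  # sqrt((0 + 16 + 16 + 0) / 4)
```

Expressing the deviation against the phantom's true value, rather than against the group mean, is what lets the audit speak to accuracy rather than mere inter-centre agreement.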
Open-Filter Optical SSA Analysis Considerations
NASA Astrophysics Data System (ADS)
Lambert, J.
2016-09-01
Optical Space Situational Awareness (SSA) sensors used for space object detection and orbit refinement measurements are typically operated in an "open-filter" mode without any spectral filters to maximize sensitivity and signal-to-noise ratio. These same optical brightness measurements are often also employed for size determination (e.g., for orbital debris), object correlation, and object status change detection. These functions, especially when performed using multiple sensors, are highly dependent on sensor calibration for measurement accuracy. Open-filter SSA sensors are traditionally calibrated against the cataloged visual magnitudes of solar-type stars, which have spectral distributions similar to that of the illuminating source, the Sun. The stellar calibration is performed to a high level of accuracy, a few hundredths of a magnitude, by observing many stars over a range of elevation angles to determine sensor, telescope, and atmospheric effects. However, space objects have individual color properties which alter the reflected solar illumination, producing spectral distributions that differ from those of the calibration stars. When the stellar calibrations are applied to the space object measurements, the visual magnitude values obtained are systematically biased. These magnitudes, combined with the unknown Bond albedos of the space objects, result in systematically biased size determinations that will differ between sensors. Measurements of satellites of known sizes and surface materials have been analyzed to characterize these effects. The results have been combined into standardized Bond albedos to correct the measured magnitudes into object sizes. However, the actual albedo values will vary between objects and represent a mean correction subject to some uncertainty.
The objective of this discussion is to characterize the sensor spectral biases that are present in open-filter optical observations and examine the resulting brightness and albedo uncertainties that should accompany object size, correlation, or status change determinations, especially in the SSA analyses of individual space objects using data from multiple sensors.
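The size-bias mechanism can be illustrated with the standard Pogson magnitude-flux relation: since reflected flux scales with albedo times diameter squared, an unmodeled magnitude bias Δm at a fixed assumed albedo maps to a diameter error factor of 10^(−0.2 Δm). A minimal sketch (function names are mine, not from the paper):

```python
def flux_ratio(delta_m):
    """Flux ratio corresponding to a magnitude difference (Pogson relation):
    a difference of -2.5 magnitudes is a factor of 10 in flux."""
    return 10 ** (-0.4 * delta_m)

def diameter_scale_factor(delta_m):
    """If a spectral-calibration bias makes an object appear delta_m magnitudes
    brighter than it truly is (delta_m < 0), the diameter inferred at a fixed
    assumed Bond albedo is overestimated by this factor, because reflected
    flux scales with diameter squared."""
    return 10 ** (-0.2 * delta_m)

# A 0.5-magnitude brightening bias inflates the inferred diameter by ~26%.
factor = diameter_scale_factor(-0.5)
```

This is why per-sensor spectral biases of even a few tenths of a magnitude matter when object sizes from multiple sensors are compared.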
Persichetti, Andrew S; Aguirre, Geoffrey K; Thompson-Schill, Sharon L
2015-05-01
A central concern in the study of learning and decision-making is the identification of neural signals associated with the values of choice alternatives. An important factor in understanding the neural correlates of value is the representation of the object itself, separate from the act of choosing. Is it the case that the representation of an object within visual areas will change if it is associated with a particular value? We used fMRI adaptation to measure the neural similarity of a set of novel objects before and after participants learned to associate monetary values with the objects. We used a range of both positive and negative values to allow us to distinguish effects of behavioral salience (i.e., large vs. small values) from effects of valence (i.e., positive vs. negative values). During the scanning session, participants made a perceptual judgment unrelated to value. Crucially, the similarity of the visual features of any pair of objects did not predict the similarity of their value, so we could distinguish adaptation effects due to each dimension of similarity. Within early visual areas, we found that value similarity modulated the neural response to the objects after training. These results show that an abstract dimension, in this case, monetary value, modulates neural response to an object in visual areas of the brain even when attention is diverted.
Analyser-based phase contrast image reconstruction using geometrical optics.
Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A
2007-07-21
Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 µm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
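One common parameterization of the symmetric Pearson type VII profile (the abstract does not reproduce the exact form used, so treat this as an illustrative assumption) takes w as the half-width at half-maximum and m as the shape exponent; m = 1 gives a Lorentzian, and large m approaches a Gaussian:

```python
def pearson_vii(x, amplitude, x0, w, m):
    """Symmetric Pearson type VII profile in a common parameterization:
    amplitude at the peak position x0, half-width at half-maximum w,
    and shape exponent m controlling the tail weight."""
    return amplitude * (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)
```

With this form the profile evaluates exactly to amplitude/2 at x0 ± w for any m, which is convenient when fitting measured rocking curves whose shape sits between Lorentzian and Gaussian.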
Separation of solids by varying the bulk density of a fluid separating medium
Peterson, Palmer L.; Duffy, James B.; Tokarz, Richard D.
1978-01-01
A method and apparatus for separating objects having a density greater than a selected density value from objects having a density less than said selected density value. The method typically comprises: (a) providing a separation vessel having an upper and lower portion, said vessel containing a liquid having a density exceeding said selected density value; (b) reducing the apparent density of the liquid to said selected density value by introducing solid, bubble-like bodies having a density less than that of the liquid into the lower portion of the vessel and permitting them to rise therethrough; (c) introducing the objects to be separated into the separation vessel and permitting the objects having a density greater than the apparent density of the liquid to sink to the lower portion of the vessel, while the objects having a density less than said selected density value float in the upper portion of the vessel; and (d) separately removing the higher density objects in the lower portion and the lower density objects in the upper portion from the separation vessel. 
The apparatus typically comprises: (a) a vessel containing a liquid having a density such that at least part of said objects having a density exceeding said selected density value will float therein; (b) means to place said objects into said vessel; (c) means to reduce the effective density of at least a portion of said liquid to said selected density value, whereby said objects having a density exceeding said selected density value sink into said liquid and said objects having a density less than said selected density value remain afloat, said means to adjust the effective density comprising solid, bubble-like bodies having a density less than said selected density value and means for introducing said bodies into said liquid; and (d) means for separately removing said objects having a density exceeding said selected density value and said objects having a density less than said selected density value from said vessel.
Korsager, Leise Elisabeth Hviid; Schmidt, Jesper Hvass; Faber, Christian; Wanscher, Jens Højberg
2016-12-01
The vHIT (video head impulse test) investigates the vestibular function in two ways: a VOR (vestibulo-ocular reflex) gain value and a head impulse diagram. From the diagram, covert and overt saccades can be detected. Evaluation of the vestibular function based on vHIT depends on both parameters, but there is a lack of knowledge regarding their reliability. The objective was to investigate the reliability of vHIT by comparing gain values between examiners on the same subjects, and to see how differences affected the occurrence of saccades. The subjects were 25 patients who had undergone cochlear implant (CI) surgery. Subjects were tested using the vHIT by two of four different examiners, and two judges interpreted the occurrence of saccades in the diagram. The outcome measures were VOR gain values and the occurrence of saccades in the diagram. Differences in gain values between examiners varied from 0.02 to 0.58, with an average of 0.14 (95% CI 0.12-0.16) on the right ear and 0.17 (95% CI 0.15-0.19) on the left ear. Occurrences of saccades in the same patient were reproduced in 93% of the cases by all examiners. The kappa coefficient for the occurrence of saccades was 0.83. The intraclass correlation coefficient (ICC) of the gain values between examiners ranged from 0.62 to 0.70. Differences in gain values amongst examiners did not seem to affect the occurrence of saccades in the same patient. The occurrence of saccades therefore seems to be more reliable than the gain value in the evaluation of the vestibular function, and interpretation of vHIT results should first depend on the occurrence of saccades and second on the gain value.
Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.
2016-02-02
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
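The center-selection step can be sketched as a bi-objective Pareto filter over previously evaluated points, trading off low function value against distance from already-sampled points (a minimal 1-D illustration with an O(n²) dominance scan; function names and toy data are mine, not the paper's):

```python
def pareto_front(points):
    """Indices of non-dominated points, where each point is a pair of
    objectives, both to be minimized."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

def sop_objectives(evaluated):
    """For each evaluated (x, f(x)) pair, build the two SOP-style objectives:
    the expensive function value, and the negated minimum distance to the
    other evaluated points (negated so that both are minimized)."""
    objs = []
    for i, (x, f) in enumerate(evaluated):
        dmin = min(abs(x - y) for j, (y, _) in enumerate(evaluated) if j != i)
        objs.append((f, -dmin))
    return objs

# Toy data: the point with the best f-value and the point far from the others
# both survive; crowded, mediocre points are dominated.
evaluated = [(0.0, 3.0), (1.0, 1.0), (1.1, 2.0), (4.0, 2.5)]
centers = pareto_front(sop_objectives(evaluated))
```

Selecting centers from the sorted fronts in this way is what balances exploitation (low f) against exploration (large minimum distance) before the perturbation step.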
A Fuzzy Robust Optimization Model for Waste Allocation Planning Under Uncertainty
Xu, Ye; Huang, Guohe; Xu, Ling
2014-01-01
In this study, a fuzzy robust optimization (FRO) model was developed for supporting municipal solid waste management under uncertainty. The Development Zone of the City of Dalian, China, was used as a study case for demonstration. Compared with traditional fuzzy models, the FRO model made an improvement by taking as the objective function the minimization of the weighted sum of the expected objective value, the difference between the two extreme possible objective values, and the penalty for constraint violation, instead of relying purely on the minimization of the expected value. Such an improvement leads to enhanced system reliability, and the model becomes especially useful when multiple types of uncertainties and complexities are involved in the management system. Through a case study, the applicability of the FRO model was successfully demonstrated. Solutions under three future planning scenarios were provided by the FRO model: (1) priority on economic development, (2) priority on environmental protection, and (3) balanced consideration of both. The balanced scenario solution was recommended for decision makers, since it respects both system economy and reliability. The model proved valuable in providing a comprehensive profile of the studied system and helping decision makers gain an in-depth insight into system complexity and select cost-effective management strategies. PMID:25317037
Steady-State ALPS for Real-Valued Problems
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2009-01-01
The two objectives of this paper are to describe a steady-state version of the Age-Layered Population Structure (ALPS) Evolutionary Algorithm (EA) and to compare it against other GAs on real-valued problems. Motivation for this work comes from our previous success in demonstrating that a generational version of ALPS greatly improves search performance on a Genetic Programming problem. In making ALPS steady-state, some modifications were made to the method for calculating age and the method for moving individuals up layers. To demonstrate that ALPS works well on real-valued problems we compare it against CMA-ES and Differential Evolution (DE) on five challenging, real-valued functions and on one real-world problem. While CMA-ES and DE outperform ALPS on the two unimodal test functions, ALPS is much better on the three multimodal test problems and on the real-world problem. Further examination shows that, unlike the other GAs, ALPS maintains a genotypically diverse population throughout the entire search process. These findings strongly suggest that the ALPS paradigm is better able to avoid premature convergence than the other GAs.
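The age-layer bookkeeping that ALPS relies on can be sketched as follows, assuming a polynomial age-gap scheme (one of several published layering schemes; the paper's steady-state age rules differ in detail, and these function names are mine):

```python
def max_age_of_layer(layer_index, age_gap, num_layers):
    """Maximum allowed age in a layer under a polynomial aging scheme:
    limits of age_gap * 1, 4, 9, ... The top layer is unbounded, so the
    oldest individuals always have somewhere to live."""
    if layer_index == num_layers - 1:
        return float("inf")
    return age_gap * (layer_index + 1) ** 2

def layer_for_age(age, age_gap, num_layers):
    """Lowest layer whose age limit admits an individual of this age."""
    for i in range(num_layers):
        if age <= max_age_of_layer(i, age_gap, num_layers):
            return i
    return num_layers - 1
```

Restricting competition to individuals of similar age is what lets young, randomly generated individuals survive long enough to explore, which is the mechanism behind the genotypic diversity reported above.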
Toys are me: children's extension of self to objects.
Diesendruck, Gil; Perez, Reut
2015-01-01
Adults tend to believe that objects can function as extensions of people's selves. This belief has been demonstrated in that changes to people's sense of self affect their attachment to personally valuable objects, and vice-versa. Here we tested the development of this belief. In Study 1 we found that manipulating 5-year-olds' self-worth via positive or negative feedback on performance, affected their willingness to part with personally valuable objects, but had no effect vis-à-vis non-valuable objects. In Study 2 we found that 9-, but not 5-year-olds were more willing to give a personally valuable object to someone morally repulsive after the object had been cleaned of all remnants of the child's self, than before. Study 2b showed an analogous effect in 5-year-olds' willingness to receive an object from someone morally repulsive. These findings intimate that the extension of self to objects via contagion may derive not only from cultural values such as consumerism, materialism, or individualism, but also from basic human needs. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chintalapudi, V. S.; Sirigiri, Sivanagaraju
2017-04-01
In power system restructuring, pricing the electrical power plays a vital role in cost allocation between suppliers and consumers. In the optimal power dispatch problem, not only the cost of active power generation but also the cost of reactive power generated by the generators should be considered to increase the effectiveness of the problem. As the characteristics of the reactive power cost curve are similar to those of the active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering the active and reactive power costs of generators. The formulated cost function is optimized, subject to equality, inequality and practical constraints, using the proposed uniform distributed two-stage particle swarm optimization. The proposed algorithm combines a uniform distribution of control variables (to start the iterative process with good initial values) with a two-stage initialization process (to obtain the best final value in fewer iterations), which enhances the convergence characteristics. The results obtained for the standard test functions and electrical systems considered indicate the effectiveness of the proposed algorithm, which can obtain efficient solutions compared to existing methods. Hence, the proposed method is promising and can be easily applied to optimize power system objectives.
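A minimal, generic PSO sketch may help fix ideas. This is not the authors' uniform distributed two-stage variant; only the uniform spread of the initial particles echoes their initialization idea, and all names and parameter values here are illustrative:

```python
import random

def pso_minimize(f, lo, hi, n_particles=10, iters=60, seed=1):
    """Minimal particle swarm optimization for a 1-D objective on [lo, hi].
    Particles start uniformly spread over the interval, echoing the idea of
    a uniform distribution of control variables for a good initial population."""
    random.seed(seed)
    xs = [lo + (hi - lo) * i / (n_particles - 1) for i in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                      # per-particle best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clamp to bounds
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i], fx
    return gbest, gbest_f

best_x, best_f = pso_minimize(lambda v: (v - 2.0) ** 2, -5.0, 5.0)
```

In a real dispatch setting the scalar x would be a vector of control variables and f the nonconvex total cost subject to the stated constraints.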
Miranda, Ana; Colomer, Carla; Mercader, Jessica; Fernández, M Inmaculada; Presentación, M Jesús
2015-01-01
The early assessment of the executive processes using ecologically valid instruments is essential for identifying deficits and planning actions to deal with possible adverse consequences. The present study has two different objectives. The first objective is to analyze the relationship between preschoolers' performance on tests of Working Memory and Inhibition and parents' and teachers' ratings of these executive functions (EFs) using the Behavior Rating Inventory of Executive Function (BRIEF). The second objective consists of studying the predictive value of the different EF measures (performance-based test and rating scales) on Inattention and Hyperactivity/Impulsivity behaviors and on indicators of word reading performance. The participants in the study were 209 children in the last year of preschool, their teachers and their families. Performance-based tests of Working Memory and Inhibition were administered, as well as word reading measures (accuracy and speed). The parents and teachers filled out rating scales of the EF and typical behaviors of attention deficit hyperactivity disorder (ADHD) symptomatology. Moderate correlation values were found between the different EF assessments procedures, although the results varied depending on the different domains. Metacognition Index from the BRIEF presented stronger correlations with verbal working memory tests than with inhibition tests. Both the rating scales and the performance-based tests were significant predictors of Inattention and Hyperactivity/Impulsivity behaviors and the reading achievement measures. However, the BRIEF explained a greater percentage of variance in the case of the ADHD symptomatology, while the performance-based tests explained reading achievement to a greater degree. The implications of the findings for research and clinical practice are discussed.
A proposal to classify happiness as a psychiatric disorder.
Bentall, R P
1992-01-01
It is proposed that happiness be classified as a psychiatric disorder and be included in future editions of the major diagnostic manuals under the new name: major affective disorder, pleasant type. In a review of the relevant literature it is shown that happiness is statistically abnormal, consists of a discrete cluster of symptoms, is associated with a range of cognitive abnormalities, and probably reflects the abnormal functioning of the central nervous system. One possible objection to this proposal remains--that happiness is not negatively valued. However, this objection is dismissed as scientifically irrelevant. PMID:1619629
Pilot study comparing market orientation culture of businesses and schools of business.
Harmon, Harry A; Webster, Robert L; Hammond, Kevin L
2003-08-01
A market orientation culture has been described as one that blends an organization's commitment to customer value with a process of continuously creating superior value for customers. Developing such a culture is further described as (1) obtaining information about customers, competitors, and markets, (2) examining the gathered information from a total organizational perspective, (3) deciding how to deliver superior customer value, and (4) implementing actions to provide value to customers. A market orientation culture focuses on the customer, identifies issues in the competitive environment, and coordinates all functional areas to achieve organizational objectives. Research has found businesses with higher market orientation are more successful in achieving organizational objectives. The measurement of market orientation within businesses has been empirically tested and validated. However, empirical research on market orientation in nonprofit organizations such as universities has not been examined. This study investigated market orientation within the university setting, specifically Schools of Business Administration, and compared these data with previously published data within the business sector. Data for comparative purposes were collected via a national survey. Hypothesis testing was conducted. Results indicated significantly lower market orientation culture within the schools of business as reported by AACSB Business School Deans vis-à-vis managers of business enterprises.
Enabling quaternion derivatives: the generalized HR calculus
Xu, Dongpo; Jahanchahi, Cyrus; Took, Clive C.; Mandic, Danilo P.
2015-01-01
Quaternion derivatives exist only for a very restricted class of analytic (regular) functions; however, in many applications, functions of interest are real-valued and hence not analytic, a typical case being the standard real mean square error objective function. The recent HR calculus is a step forward and provides a way to calculate derivatives and gradients of both analytic and non-analytic functions of quaternion variables; however, the HR calculus can become cumbersome in complex optimization problems due to the lack of rigorous product and chain rules, a consequence of the non-commutativity of quaternion algebra. To address this issue, we introduce the generalized HR (GHR) derivatives which employ quaternion rotations in a general orthogonal system and provide the left- and right-hand versions of the quaternion derivative of general functions. The GHR calculus also solves the long-standing problems of product and chain rules, mean-value theorem and Taylor's theorem in the quaternion field. At the core of the proposed GHR calculus is quaternion rotation, which makes it possible to extend the principle to other functional calculi in non-commutative settings. Examples in statistical learning theory and adaptive signal processing support the analysis. PMID:26361555
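For orientation, the left-hand HR derivative and its GHR generalization are commonly written as follows (a sketch from the HR-calculus literature, with μ any nonzero quaternion; consult the paper for the precise conventions):

```latex
% Quaternion q = q_a + i\,q_b + j\,q_c + k\,q_d, with f a (possibly
% non-analytic) quaternion- or real-valued function of q.
\frac{\partial f}{\partial q}
  = \frac{1}{4}\!\left(
      \frac{\partial f}{\partial q_a}
    - \frac{\partial f}{\partial q_b}\, i
    - \frac{\partial f}{\partial q_c}\, j
    - \frac{\partial f}{\partial q_d}\, k
    \right),
\qquad
\frac{\partial f}{\partial q^{\mu}}
  = \frac{1}{4}\!\left(
      \frac{\partial f}{\partial q_a}
    - \frac{\partial f}{\partial q_b}\, i^{\mu}
    - \frac{\partial f}{\partial q_c}\, j^{\mu}
    - \frac{\partial f}{\partial q_d}\, k^{\mu}
    \right),
\qquad
i^{\mu} \equiv \mu\, i\, \mu^{-1}.
```

The rotated basis {1, i^μ, j^μ, k^μ} is what gives the GHR derivatives the flexibility to satisfy product and chain rules that the fixed-basis HR calculus lacks.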
The influence of lifestyle on health behavior and preference for functional foods.
Szakály, Zoltán; Szente, Viktória; Kövér, György; Polereczki, Zsolt; Szigeti, Orsolya
2012-02-01
The main objective of this survey is to reveal the relationship between lifestyle, health behavior, and the consumption of functional foods on the basis of Grunert's food-related lifestyle model. To achieve this objective, a nationwide representative questionnaire-based survey was launched with 1000 participants in Hungary. The results indicate that Hungarian consumers make rational decisions: they seek bargains and want to know whether they are getting good value for their money. Further on, various lifestyle segments are defined by the authors: the rational, uninvolved, conservative, careless, and adventurous consumer segments. Among these, consumers with a rational approach provide the primary target group for the functional food market, where health consciousness and moderate price sensitivity can be observed together. Adventurous food consumers stand out because they search for novelty; this makes them an equally important target group. Conservative consumers form another important segment, characterized by positive health behavior. According to the findings of the research, there is a significant relationship between lifestyle, health behavior, and the preference for functional food products. Copyright © 2011 Elsevier Ltd. All rights reserved.
Konova, Anna B; Moeller, Scott J; Tomasi, Dardo; Parvaz, Muhammad A; Alia-Klein, Nelly; Volkow, Nora D; Goldstein, Rita Z
2012-10-01
Abnormalities in frontostriatal systems are thought to be central to the pathophysiology of addiction, and may underlie the maladaptive processing of the highly generalizable reinforcer, money. Although abnormal frontostriatal structure and function have been observed in individuals addicted to cocaine, it is less clear how individual variability in brain structure is associated with brain function to influence behavior. Our objective was to examine frontostriatal structure and neural processing of money value in chronic cocaine users and closely matched healthy controls. A reward task that manipulated different levels of money was used to isolate neural activity associated with money value. Gray matter volume measures were used to assess frontostriatal structure. Our results indicated that cocaine users had an abnormal money value signal in the sensorimotor striatum (right putamen/globus pallidus) that was negatively associated with accuracy adjustments to money and was more pronounced in individuals with more severe use. In parallel, group differences were also observed in both the function and gray matter volume of the ventromedial prefrontal cortex; in the cocaine users, the former was directly associated with response to money in the striatum. These results provide strong evidence for abnormalities in the neural mechanisms of valuation in addiction and link these functional abnormalities with deficits in brain structure. In addition, as value signals represent acquired associations, their abnormal processing in the sensorimotor striatum, a region centrally implicated in habit formation, could signal disadvantageous associative learning in cocaine addiction. Published 2012. This article is a US Government work and is in the public domain in the USA.
Pearman, Timothy; Yanez, Betina; Peipert, John; Wortman, Katy; Beaumont, Jennifer; Cella, David
2014-09-15
Health-related quality of life (HRQOL) measures are commonly used in oncology research. Interest in their use for monitoring or screening is increasing. The Functional Assessment of Cancer Therapy (FACT) is one of the most widely used HRQOL instruments. Consequently, oncology researchers and practitioners have an increasing need for reference values for the Functional Assessment of Cancer Therapy-General (FACT-G) and its 7-item rapid version, the Functional Assessment of Cancer Therapy-General 7 (FACT-G7), to compare FACT scores across specific subgroups of patients in research trials and practice. The objectives of this study are to provide 1) reference values from a sample of the general US adult population and a sample of adults diagnosed with cancer and 2) cutoff scores for quality of life. A sample of the general US population (N = 1075) and a sample of patients with cancer from 12 studies (N = 5065) were analyzed. Cutoff scores were established using distribution- and anchor-based methods. Mean values for the cancer sample were analyzed by performance status, cancer type, and disease status. Also, t tests and established criteria for meaningful differences were used to compare values. FACT-G and FACT-G7 scores in the general US population sample and cancer sample were generally comparable. Among the sample of patients with cancer, FACT-G and FACT-G7 scores worsened with declining performance status and increasing disease status. These data will aid interpretation of the magnitude and meaning of FACT scores, and allow for comparisons of scores across studies. © 2014 American Cancer Society.
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
NASA Astrophysics Data System (ADS)
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a semi-distributed model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensionality of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II, NSGA-II) determines the Pareto front of the objective functions within the parameter space. The solutions on this Pareto front represent optimized parameter sets for the catchment behavior that could not reasonably have been obtained through manual calibration.
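The non-dominated sorting step at the heart of such a routine can be sketched in a few lines. This is an illustrative minimal example, not the published routine; the candidate scores are hypothetical stand-ins for two calibration error metrics, both minimized:

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples.

    p dominates q when p is no worse in every objective and strictly
    better in at least one (minimization assumed for both objectives).
    """
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidate calibrations scored on two error metrics
# (e.g. peak-flow error and runoff-volume error), both minimized.
candidates = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.6, 0.6), (0.8, 0.8)]
front = pareto_front(candidates)
# (0.6, 0.6) and (0.8, 0.8) drop out: both are dominated by (0.5, 0.5)
```

In the full NSGA-II this sorting is applied repeatedly, together with crowding-distance ranking, to drive a population of parameter sets toward the front.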
NASA Astrophysics Data System (ADS)
Buchanan, James L.; Gilbert, Robert P.; Ou, Miao-jung Y.
2011-12-01
Estimating the parameters of an elastic or poroelastic medium from reflected or transmitted acoustic data is an important but difficult problem. Use of the Nelder-Mead simplex method to minimize an objective function measuring the discrepancy between some observable and its value calculated from a model for a trial set of parameters has been tried by several authors. In this paper, the difficulty with this direct approach, which is the existence of numerous local minima of the objective function, is documented for the in vitro experiment in which a specimen in a water tank is subject to an ultrasonic pulse. An indirect approach, based on the numerical solution of the equations for a set of ‘effective’ velocities and transmission coefficients, is then observed empirically to ameliorate the difficulties posed by the direct approach.
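The local-minima difficulty described here is easy to reproduce in miniature. The sketch below assumes a hypothetical one-dimensional multimodal objective rather than the paper's acoustic discrepancy functional, and uses a naive derivative-free descent rather than Nelder-Mead; running it from many random starts and keeping the best result is the usual multi-start remedy for the direct approach:

```python
import math
import random

def objective(x):
    # Hypothetical multimodal stand-in for the inversion discrepancy:
    # many local minima, one global minimum at x = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def local_descent(x, step=0.01, iters=5000):
    """Naive derivative-free descent: step toward a better neighbor
    until neither neighbor improves the objective."""
    for _ in range(iters):
        best = min((x - step, x, x + step), key=objective)
        if best == x:
            break
        x = best
    return x

random.seed(0)
starts = [random.uniform(-5.0, 5.0) for _ in range(20)]
minima = [local_descent(s) for s in starts]
best = min(minima, key=objective)  # multi-start keeps the best local minimum
```

A single descent typically stalls in whichever local basin it started in; only the multi-start envelope approaches the global minimum.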
Barros, Marcos Alexandre; Cervone, Gabriel Lopes de Faria; Costa, André Luis Serigatti
2015-01-01
Objective To objectively and subjectively evaluate the functional result from before to after surgery among patients with a diagnosis of an isolated avulsion fracture of the posterior cruciate ligament who were treated surgically. Method Five patients were evaluated by means of reviewing the medical files, applying the Lysholm questionnaire, physical examination and radiological examination. For the statistical analysis, a significance level of 0.10 and 95% confidence interval were used. Results According to the Lysholm criteria, all the patients were classified as poor (<64 points) before the operation and evolved to a mean of 96 points six months after the operation. We observed that 100% of the posterior drawer cases became negative, taking values less than 5 mm to be negative. Conclusion Surgical methods with stable fixation for treating avulsion fractures at the tibial insertion of the posterior cruciate ligament produce acceptable functional results from the surgical and radiological points of view, with a significance level of 0.042. PMID:27218073
Information Filtering via a Scaling-Based Function
Qiu, Tian; Zhang, Zi-Ke; Chen, Guang
2013-01-01
Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL), independent of the recommendation list length, built on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function relating the tunable parameter to the object average degree. The optimal value of the tunable parameter can be extracted from the scaling function and is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably improves personalized recommendation in three other aspects: it resolves the accuracy-diversity dilemma, presents high novelty, and addresses the key challenge of the cold-start problem. PMID:23696829
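The underlying hybrid of heat conduction and mass diffusion, in its commonly used formulation, computes an object-object weight matrix with a tunable parameter lambda. The sketch below is a minimal pure-Python illustration on a toy user-object network, not the SCL algorithm itself; the toy link matrix is hypothetical:

```python
def hybrid_weights(adj, lam):
    """Object-object weights of the heat-conduction/mass-diffusion hybrid:
    W[a][b] = sum_u adj[u][a] * adj[u][b] / k_user[u]
              / (k_obj[a] ** (1 - lam) * k_obj[b] ** lam).
    lam = 0 gives pure heat conduction, lam = 1 pure mass diffusion."""
    n_users, n_objs = len(adj), len(adj[0])
    k_obj = [sum(adj[u][o] for u in range(n_users)) for o in range(n_objs)]
    k_user = [sum(row) for row in adj]
    W = [[0.0] * n_objs for _ in range(n_objs)]
    for a in range(n_objs):
        for b in range(n_objs):
            if k_obj[a] == 0 or k_obj[b] == 0:
                continue
            overlap = sum(adj[u][a] * adj[u][b] / k_user[u]
                          for u in range(n_users) if k_user[u])
            W[a][b] = overlap / (k_obj[a] ** (1 - lam) * k_obj[b] ** lam)
    return W

users = [[1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]  # toy user-object links
W = hybrid_weights(users, lam=1.0)  # pure mass diffusion
```

At lam = 1 each column of W sums to one (resource is conserved, as in mass diffusion); SCL's contribution is choosing lambda per object via the scaling function rather than using one global value.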
Kirby, Jessica L; Houston, Megan N; Gabriner, Michael L; Hoch, Matthew C
2016-08-01
Individuals with chronic ankle instability (CAI) have demonstrated alterations in ankle mechanics and deficits in sensory function. However, relationships between mechanical stability and somatosensory function have not been examined, nor have those between somatosensory function and injury history characteristics. Therefore, the objective of this study was to examine relationships between (1) somatosensory function and mechanical stability and (2) somatosensory function and injury history characteristics. Forty adults with CAI volunteered to participate. In a single testing session, participants completed mechanical and sensory assessments in a counterbalanced order. Dependent variables included anterior/posterior displacement (mm), inversion/eversion rotation (°), SWM index values, JPS absolute error (°), number of previous ankle sprains, and number of "giving way" episodes in the previous 3 months. Spearman's Rho correlations examined the relationships between somatosensory function and (1) mechanical stability and (2) injury history characteristics (p<0.05). No significant correlations were identified between any variables (p>0.11), and all r-values were considered weak. These results revealed somatosensory function was not significantly correlated to mechanical stability or injury history characteristics. This indicates peripheral sensory impairments associated with CAI are likely caused by factors other than mechanical stability and injury history characteristics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tradeoff studies in multiobjective insensitive design of airplane control systems
NASA Technical Reports Server (NTRS)
Schy, A. A.; Giesy, D. P.
1983-01-01
A computer aided design method for multiobjective parameter-insensitive design of airplane control systems is described. Methods are presented for trading off nominal values of design objectives against sensitivities of the design objectives to parameter uncertainties, together with guidelines for designer utilization of the methods. The methods are illustrated by application to the design of a lateral stability augmentation system for two supersonic flight conditions of the Shuttle Orbiter. Objective functions are conventional handling quality measures and peak magnitudes of control deflections and rates. The uncertain parameters are assumed Gaussian, and numerical approximations of the stochastic behavior of the objectives are described. Results of applying the tradeoff methods to this example show that stochastic-insensitive designs are distinctly different from deterministic multiobjective designs. The main penalty for achieving significant decrease in sensitivity is decreased speed of response for the nominal system.
Technical Limitations in Merging Secular and Sacred Functions in Monumental Churches
NASA Astrophysics Data System (ADS)
Piatkowska, Ksenia
2017-10-01
The abandonment of churches and their adaptation for secular purposes is a current subject in Europe and worldwide. Most cases involve buildings that were desacralized and then rebuilt in their entirety for alternative functions. Thus far, the merging of secular and sacred functions within a single monumental Catholic church has received little attention. The paper describes the case of St. Catherine’s Church in Gdansk, Poland, where the sacred function exists in parallel with a new secular function being implemented. The study is based on the authentic professional experience of the author. It describes the technical limitations arising from the need to ensure optimal conditions for both the sacred and the secular function while avoiding undesirable interference between them. The author further identifies architectural solutions most relevant to current requirements for protection of sacred zones in the church, for preservation of the monument, and for optimal function of a modern science museum. Significant design issues include: the inviolability of the sacred zone, preservation of the historical value of the monument, proper operation of new secular zones in compliance with contemporary standards of safety, performance of the assumed mission, and profitability. The research indicates specific areas where the probability of collision between the sacred and the profane is highest and where technical problems are likely to occur.
A Tool for the Automated Design and Evaluation of Habitat Interior Layouts
NASA Technical Reports Server (NTRS)
Simon, Matthew A.; Wilhite, Alan W.
2013-01-01
The objective of space habitat design is to minimize mass and system size while providing adequate space for all necessary equipment and a functional layout that supports crew health and productivity. Unfortunately, development and evaluation of interior layouts is often ignored during conceptual design because of the subjectivity and long times required using current evaluation methods (e.g., human-in-the-loop mockup tests and in-depth CAD evaluations). Early, more objective assessment could prevent expensive design changes that may increase vehicle mass and compromise functionality. This paper describes a new interior design evaluation method to enable early, structured consideration of habitat interior layouts. This interior layout evaluation method features a comprehensive list of quantifiable habitat layout evaluation criteria, automatic methods to measure these criteria from a geometry model, and application of systems engineering tools and numerical methods to construct a multi-objective value function measuring the overall habitat layout performance. In addition to a detailed description of this method, a C++/OpenGL software tool which has been developed to implement this method is also discussed. This tool leverages geometry modeling coupled with collision detection techniques to identify favorable layouts subject to multiple constraints and objectives (e.g., minimize mass, maximize contiguous habitable volume, maximize task performance, and minimize crew safety risks). Finally, a few habitat layout evaluation examples are described to demonstrate the effectiveness of this method and tool to influence habitat design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
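The two-objective scoring of previously evaluated points can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the SOP implementation: the actual selection also uses fronts beyond the first and the tabu tenure, and the toy points and function values here are hypothetical:

```python
import math

def center_candidates(points, fvals):
    """Score each evaluated point on two objectives: minimize the
    expensive function value, and minimize the negative distance to the
    nearest other evaluated point (i.e. prefer isolated points).
    Returns the indices of the non-dominated points."""
    def min_dist(i):
        return min(math.dist(points[i], points[j])
                   for j in range(len(points)) if j != i)

    objs = [(fvals[i], -min_dist(i)) for i in range(len(points))]
    nondom = []
    for i, p in enumerate(objs):
        dominated = any(all(a <= b for a, b in zip(q, p)) and q != p
                        for j, q in enumerate(objs) if j != i)
        if not dominated:
            nondom.append(i)
    return nondom

points = [(0.0, 0.0), (1.0, 0.0), (0.1, 0.0)]
centers = center_candidates(points, fvals=[1.0, 2.0, 0.5])
# point 0 loses: point 2 has a better f value and the same isolation
```

The trade-off is visible even in this tiny example: point 1 survives purely because it is far from everything else, which is what lets the method balance exploitation against exploration across the P parallel centers.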
The Hubble relation for nonstandard candles and the origin of the redshift of quasars
NASA Technical Reports Server (NTRS)
Petrosian, V.
1974-01-01
It is shown that the magnitude-log (redshift) relation for brightest quasars can have a slope different from the value expected for standard candles. The value of this slope depends on the luminosity function and its evolution. Therefore the difference of this slope from the expected value cannot be used as evidence against the cosmological origin of the redshift of the quasars. It is shown that the observed variation of the luminosity of the brightest objects with redshift is consistent with the cosmological hypothesis and that it agrees with (and perhaps could be used to complement) the luminosity function obtained from V/Vm analysis. It is also shown that the nonzero slope of the magnitude-log (redshift) relation rules out the local quasar hypothesis, where it is assumed that the sources are nearby (less than 500 Mpc), that the bulk of their redshift is intrinsic, and that there is no dependence on distance of the intrinsic properties of the sources.
Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak
2013-01-01
The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated, and these local statistics were used to identify the tumor among the other objects. In this level set method, the calculation of the parameters is a challenging task; here, the different parameters were calculated automatically for different types of images. The basic thresholding value was updated and adjusted automatically for each MR image and was used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Xie, Hongwu; Xu, Fangming; Chen, Rixin; Luo, Tianyou; Chen, Mingren; Fang, Weidong; Lü, Fajin; Wu, Fei; Song, Yune; Xiong, Jun
2013-04-01
Functional magnetic resonance imaging (fMRI) technology was used to study changes in resting state blood flow in the brains of patients with knee osteoarthritis (KOA) before and after treatment with moxibustion at the acupoint of the left Dubi (ST 35), and to probe the cerebral mechanism underlying the effect of moxibustion. The resting state brain function of 30 patients with left KOA was scanned with fMRI before and after treatment with moxibustion. The analytic methods of fractional amplitude of low frequency fluctuation (fALFF) and regional homogeneity (ReHo) were used to observe changes in resting state brain function. The fALFF values of the right cerebrum, extra-nuclear regions, left cerebellum, left cerebrum and white matter of patients after moxibustion treatment were higher than before treatment, and the fALFF values of the precentral gyrus, frontal lobe and occipital lobe were lower than before treatment (P < 0.05, K ≥ 85). The ReHo values of the thalamus, extra-nuclear regions and parietal lobe were much higher than before moxibustion treatment, and the ReHo values of the right cerebrum, left cerebrum and frontal lobe were lower than before treatment (P < 0.05, K ≥ 85). The brain regions in which moxibustion produced obvious changes basically conform to the pathways by which pain and warmth are transmitted in the body, and the activation of sensitive systems in the body may be objective evidence of channel transmission. The regulation of brain function by moxibustion occurs not in a single brain region but rather in a network of many brain regions.
Sylvia, Louisa G.; Rabideau, Dustin J.; Nierenberg, Andrew A.; Bowden, Charles L.; Friedman, Edward S.; Iosifescu, Dan V.; Thase, Michael E.; Ketter, Terence; Greiter, Elizabeth A.; Calabrese, Joseph R.; Leon, Andrew C.; Ostacher, Michael J.; Reilly-Harrington, Noreen
2014-01-01
Objectives The aims of this study were to evaluate correlates and predictors of life functioning and quality of life in bipolar disorder during a comparative effectiveness trial of moderate doses of lithium. Methods In the Lithium treatment moderate-dose use study (LiTMUS), 283 symptomatic outpatients with bipolar disorder type I or II were randomized to receive lithium plus optimal personalized treatment (OPT), or OPT alone. Participants were assessed using structured diagnostic interviews, clinician-rated blinded assessments, and questionnaires. We employed linear mixed effects models to test the effect of treatment overall, and of adjunct lithium specifically, on quality of life and functioning. Similar models were used to examine the association of baseline demographics and clinical features with quality of life and life functioning. Results Poorer quality of life and impaired functioning at baseline were associated with lower income, higher depressive severity, and more psychiatric comorbid conditions. Over six months, patients in both treatment groups improved in quality of life and life functioning (p-values < 0.0001), without a statistically significant difference between the two treatment groups (p-values > 0.05). Within the lithium group, improvements in quality of life and functioning were not associated with concurrent lithium levels at week 12 or week 24 (p-values > 0.05). Lower baseline depressive severity and younger age of onset predicted less improvement in functioning over six months. Conclusions Optimized care for bipolar disorder improves overall quality of life and life functioning, with no additional benefit from adjunct moderate doses of lithium. Illness burden and psychosocial stressors were associated with worse quality of life and lower functioning in individuals with bipolar disorder. PMID:25194782
Hopfe, Maren; Stucki, Gerold; Marshall, Ric; Twomey, Conal D; Üstün, T Bedirhan; Prodinger, Birgit
2016-02-03
Contemporary casemix systems for health services need to ensure that payment rates adequately account for actual resource consumption based on patients' needs for services. It has been argued that functioning information, as one important determinant of health service provision and resource use, should be taken into account when developing casemix systems. However, there has to date been little systematic collation of the evidence on the extent to which the addition of functioning information into existing casemix systems adds value to those systems with regard to the predictive power and resource variation explained by the groupings of these systems. Thus, the objective of this research was to examine the value of adding functioning information into casemix systems with respect to the prediction of resource use as measured by costs and length of stay. A systematic literature review was performed. Peer-reviewed studies, published before May 2014 were retrieved from CINAHL, EconLit, Embase, JSTOR, PubMed and Sociological Abstracts using keywords related to functioning ('Functioning', 'Functional status', 'Function*, 'ICF', 'International Classification of Functioning, Disability and Health', 'Activities of Daily Living' or 'ADL') and casemix systems ('Casemix', 'case mix', 'Diagnosis Related Groups', 'Function Related Groups', 'Resource Utilization Groups' or 'AN-SNAP'). In addition, a hand search of reference lists of included articles was conducted. Information about study aims, design, country, setting, methods, outcome variables, study results, and information regarding the authors' discussion of results, study limitations and implications was extracted. Ten included studies provided evidence demonstrating that adding functioning information into casemix systems improves predictive ability and fosters homogeneity in casemix groups with regard to costs and length of stay. Collection and integration of functioning information varied across studies. 
Results suggest that, in particular, DRG casemix systems can be improved in predicting resource use and capturing outcomes for frail elderly or severely functioning-impaired patients. Further exploration of the value of adding functioning information into casemix systems is a promising approach to improving these systems' ability to adequately capture differences in patients' needs for services and to better predict resource use.
Ghosh, Sayan; Das, Swagatam; Vasilakos, Athanasios V; Suresh, Kaushik
2012-02-01
Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
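The canonical DE/rand/1/bin loop analyzed in this paper is compact enough to state directly. The following is a minimal sketch, not the authors' analysis code; the population size, F, CR, and generation count are arbitrary illustrative choices, and the sphere function stands in for the class of objectives with a unique global optimum that the convergence result covers:

```python
import random

def de_rand_1_bin(f, bounds, np_=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin: mutate three distinct random members
    (x_r1 + F * (x_r2 - x_r3)), binomially cross over with the target,
    and keep the trial only if it is no worse (greedy selection)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees one mutated coordinate
            trial = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)  # unique global optimum at the origin
x_best, f_best = de_rand_1_bin(sphere, [(-5.0, 5.0)] * 3)
```

The greedy selection makes each individual's fitness monotonically non-increasing, which is the mechanism the paper's Lyapunov argument formalizes: the population density contracts toward the global optimum over time.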
Flexibility Now, Consistency Later: Psychological Distance and Construal Shape Evaluative Responding
Ledgerwood, Alison; Trope, Yaacov; Chaiken, Shelly
2011-01-01
Researchers have long been interested in understanding the conditions under which evaluations will be more or less consistent or context-dependent. The current research explores this issue by asking when stability or flexibility in evaluative responding would be most useful. Integrating construal level theory with research suggesting that variability in the mental representation of an attitude object can produce fluctuations in evaluative responding, we propose a functional relationship between distance and evaluative flexibility. Because individuals construe psychologically proximal objects more concretely, evaluations of proximal objects will tend to incorporate unique information from the current social context, promoting context-specific responses. Conversely, because more distal objects are construed more abstractly, evaluations of distal objects will be less context-dependent. Consistent with this reasoning, the results of 4 studies suggest that when individuals mentally construe an attitude object concretely, either because it is psychologically close or because they have been led to adopt a concrete mindset, their evaluations flexibly incorporate the views of an incidental stranger. However, when individuals think about the same issue more abstractly, their evaluations are less susceptible to incidental social influence and instead reflect their previously reported ideological values. These findings suggest that there are ways of thinking that will tend to produce more or less variability in mental representation across contexts, which in turn shapes evaluative consistency. Connections to shared reality, conformity, and attitude function are discussed. PMID:20565184
Terrestrial cross-calibrated assimilation of various data sources
NASA Astrophysics Data System (ADS)
Groß, André; Müller, Richard; Schömer, Elmar; Trentmann, Jörg
2014-05-01
We introduce a novel software tool, ANACLIM, for the efficient assimilation of multiple two-dimensional data sets using a variational approach. We consider a single objective function in two spatial coordinates with higher derivatives. This function measures the deviation of the input data from the target data set. By using the Euler-Lagrange formalism, the minimization of this objective function can be transformed into a sparse system of linear equations, which can be efficiently solved by a conjugate gradient solver on a desktop workstation. The objective function allows for a series of physically motivated constraints. The user can control the relative global weights, as well as the individual weight of each constraint on a per-grid-point level. The different constraints are realized as separate terms of the objective function: one similarity term for each input data set and two additional smoothness terms, penalizing high gradient and curvature values. ANACLIM is designed to combine similarity and smoothness operators easily and to allow a choice of different solvers. We performed a series of benchmarks to calibrate and verify our solution. We use, for example, terrestrial stations of BSRN and GEBA for the solar incoming flux and AERONET stations for aerosol optical depth. First results show that, with our approach, the combination of these data sources yields a significant benefit over the individual input datasets. ANACLIM also includes a region growing algorithm for the assimilation of ground-based data. The region growing algorithm computes the maximum area around a station that represents the station data. The regions are grown under several constraints, such as the homogeneity of the area. The resulting dataset is then used within the assimilation process. Verification is performed by cross-validation. The method and validation results will be presented and discussed.
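The Euler-Lagrange reduction to a sparse linear system solved by conjugate gradients can be illustrated in one dimension. The sketch below assumes a toy similarity-plus-gradient-smoothness objective; the real tool is two-dimensional, weighted per grid point, and also penalizes curvature:

```python
def cg(matvec, b, tol=1e-12, maxiter=500):
    """Conjugate gradient for a symmetric positive definite system given
    as a matvec callable; returns an approximate solution of A x = b."""
    n = len(b)
    x = [0.0] * n
    r = [bi - ai for bi, ai in zip(b, matvec(x))]
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(maxiter):
        if rs < tol:
            break
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 1-D analogue of the assimilation system: minimizing
#   sum_i (f_i - d_i)**2 + lam * sum_i (f_{i+1} - f_i)**2
# via Euler-Lagrange gives the SPD system (I + lam * L) f = d,
# with L a graph Laplacian (Neumann boundaries).
lam = 1.0
d = [0.0, 0.0, 10.0, 0.0, 0.0]

def matvec(f):
    n = len(f)
    return [f[i] + lam * (2 * f[i]
                          - (f[i - 1] if i > 0 else f[i])
                          - (f[i + 1] if i < n - 1 else f[i]))
            for i in range(n)]

f_smooth = cg(matvec, d)  # the spike at the center is spread out
```

Because the Laplacian rows sum to zero, the smoothing redistributes the observed signal without changing its total, which mirrors how the gradient penalty trades fidelity against smoothness in the full objective.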
Impact of production strategies and animal performance on economic values of dairy sheep traits.
Krupová, Z; Wolfová, M; Krupa, E; Oravcová, M; Daňo, J; Huba, J; Polák, P
2012-03-01
The objective of this study was to carry out a sensitivity analysis on the impact of various production strategies and performance levels on the relative economic values (REVs) of traits in dairy sheep. A bio-economic model implemented in the program package ECOWEIGHT was used to simulate the profit function for a semi-extensive production system with the Slovak multi-purpose breed Improved Valachian and to calculate the REV of 14 production and functional traits. The following production strategies were analysed: differing proportions of milk processed to cheese, customary weaning and early weaning of lambs with immediate sale or sale after artificial rearing, seasonal lambing in winter and aseasonal lambing in autumn. Results of the sensitivity analysis are presented in detail for the four economically most important traits: 150 days milk yield, conception rate of ewes, litter size and ewe productive lifetime. Impacts of the differences in the mean value of each of these four traits on REVs of all other traits were also examined. Simulated changes in the production circumstances had a higher impact on the REV for milk yield than on REVs of the other traits investigated. The proportion of milk processed to cheese, weaning management strategy for lambs and level of milk yield were the main factors influencing the REV of milk yield. The REVs for conception rate of ewes were highly sensitive to the current mean level of the trait. The REV of ewe productive lifetime was most sensitive to variation in ewe conception rate, and the REV of litter size was most affected by weaning strategy for lambs. 
On the basis of the results of sensitivity analyses, it is recommended that economic values of traits for the overall breeding objective for dairy sheep be calculated as the weighted average of the economic values obtained for the most common production strategies of Slovak dairy sheep farms and that economic values be adjusted after substantial changes in performance levels of the traits.
Online Feature Transformation Learning for Cross-Domain Object Category Recognition.
Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold
2017-06-09
In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning of a feature transformation matrix expressed in the original feature space and propose an online passive-aggressive feature transformation algorithm. Then these original features are mapped to kernel space, and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k-nearest-neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examine the effect of setting different parameter values in the proposed algorithms and evaluate the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods in cross-domain and multiclass object recognition applications.
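The passive-aggressive update family that the first algorithm builds on has a simple canonical form. Below is the standard PA-I update for a linear binary classifier, shown as a hedged analogy only: the paper's algorithm applies updates of this flavor to a full transformation/similarity matrix rather than a weight vector, and the toy stream is hypothetical:

```python
def pa_train(stream, dim, C=1.0):
    """Passive-aggressive (PA-I) online updates for a linear classifier:
    stay passive when the hinge loss is zero, otherwise shift the weights
    just enough to correct the current example, with step capped by C."""
    w = [0.0] * dim
    for x, y in stream:                 # y in {-1, +1}
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        loss = max(0.0, 1.0 - margin)
        if loss > 0.0:
            norm2 = sum(xi * xi for xi in x) or 1.0
            tau = min(C, loss / norm2)  # PA-I step size
            w = [wi + tau * y * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy stream: the label is the sign of coordinate 0.
stream = [((1.0, 0.2), 1), ((-1.0, 0.1), -1),
          ((0.8, -0.3), 1), ((-0.9, -0.2), -1)] * 5
w = pa_train(stream, dim=2)
```

The "passive" branch is what makes such methods cheap enough for online use: examples already classified with sufficient margin trigger no work at all.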
Tomographic imaging using Poissonian detector data
Aspelmeier, Timo; Ebel, Gernot; Hoeschen, Christoph
2013-10-15
An image reconstruction method for reconstructing a tomographic image (f_j) of a region of investigation within an object (1) comprises the steps of: providing detector data (y_i) comprising Poisson random values measured at an i-th of a plurality of different positions, e.g. i = (k, l) with pixel index k on a detector device and angular index l referring to both the angular position (α_l) and the rotation radius (r_l) of the detector device (10) relative to the object (1); providing a predetermined system matrix A_ij assigning a j-th voxel of the object (1) to the i-th detector datum (y_i); and reconstructing the tomographic image (f_j) based on the detector data (y_i), said reconstructing step including a procedure of minimizing a functional F(f) depending on the detector data (y_i) and the system matrix A_ij and additionally including a sparse or compressive representation of the object (1) in an orthobasis T, wherein the tomographic image (f_j) represents the global minimum of the functional F(f). Furthermore, an imaging method and an imaging device using the image reconstruction method are described.
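For Poisson detector data, the classical baseline for maximizing the likelihood (without the sparsity term this patent adds to its functional F(f)) is the MLEM iteration. The toy sketch below assumes a small, hypothetical dense system matrix and noiseless counts; it illustrates the f_j, y_i, A_ij roles named above, not the patented method:

```python
def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM iteration for Poisson detector data:
    f_j <- f_j / (sum_i A_ij) * sum_i A_ij * y_i / (A f)_i.
    A is a small dense system matrix (rows: detector bins, cols: voxels)."""
    n_det, n_vox = len(A), len(A[0])
    col_sum = [sum(A[i][j] for i in range(n_det)) for j in range(n_vox)]
    f = [1.0] * n_vox
    for _ in range(n_iter):
        proj = [sum(A[i][j] * f[j] for j in range(n_vox)) for i in range(n_det)]
        back = [sum(A[i][j] * y[i] / proj[i] for i in range(n_det))
                for j in range(n_vox)]
        f = [f[j] * back[j] / col_sum[j] for j in range(n_vox)]
    return f

# Two-voxel, three-bin toy system with noiseless counts from f = (2, 3).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
true_f = [2.0, 3.0]
y = [sum(A[i][j] * true_f[j] for j in range(2)) for i in range(3)]
f_rec = mlem(A, y)
```

The multiplicative update keeps the image nonnegative by construction, which is one reason EM-type iterations are the standard starting point for Poisson-likelihood reconstruction.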
Coding of visual object features and feature conjunctions in the human brain.
Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M
2008-01-01
Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process--while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.
Estimation of the discharges of the multiple water level stations by multi-objective optimization
NASA Astrophysics Data System (ADS)
Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi
2016-04-01
This presentation shows two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization. One is how to adjust the parameters to estimate the discharges accurately. The other is which optimization algorithms are suitable for the parameter identification. Among previous studies, one minimizes the weighted error of the discharges of multiple water level stations by single-objective optimization. On the other hand, some studies minimize multiple error assessment functions of the discharge of a single water level station by multi-objective optimization. This presentation focuses on simultaneously minimizing the errors of the discharges of multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km2. There are thirteen rainfall stations and three water level stations. Nine flood events are investigated. They occurred from 2005 to 2012, and their maximum discharges exceed 1,000 m3/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500 m x 500 m. Two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately. Twelve of them are hydrological parameters and two are parameters for the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin Hypercube sampling is a uniform sampling algorithm. The discharges are calculated with respect to the parameter values sampled by a simplified version of Latin Hypercube sampling. The observed discharge is surrounded by the calculated discharges, which suggests that it is possible to estimate the discharge accurately by adjusting the parameters.
In a sense, it is true that the discharge of a water level station can be accurately estimated by setting the parameter values optimized for the corresponding water level station. However, there are cases where the discharge calculated with the parameter values optimized for one water level station does not match the observed discharge at another water level station. It is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select the parameter values from the Pareto optimal solutions by the condition that all the errors, normalized by the minimum error of the corresponding water level station, are under 3. The optimization performance of five implementations of the algorithms and a simplified version of Latin Hypercube sampling is compared. The five implementations are NSGA2 and PAES from the optimization software inspyred, and MCO_NSGA2R, MOPSOCD and NSGA2R_NSGA2R from the statistical software R. NSGA2, PAES and MOPSOCD are optimization algorithms based on a genetic algorithm, an evolution strategy and particle swarm optimization, respectively. The number of evaluations of the objective functions is 10,000. The two implementations of NSGA2 in R outperform the others and appear suitable for the parameter identification of the PWRI distributed hydrological model.
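The stated selection rule (keep Pareto solutions whose error at every station, normalized by that station's minimum error, stays under 3) can be sketched directly. The data layout, with one error tuple per candidate parameter set, is an assumption.

```python
def select_balanced(pareto_errors, threshold=3.0):
    """Filter Pareto-optimal parameter sets: keep those whose error at every
    water level station is within `threshold` times the smallest error seen
    for that station across all candidate solutions."""
    n = len(pareto_errors[0])
    mins = [min(sol[i] for sol in pareto_errors) for i in range(n)]
    return [sol for sol in pareto_errors
            if all(sol[i] / mins[i] <= threshold for i in range(n))]

# Three candidate parameter sets, with errors at three stations each.
solutions = [(1.0, 9.0, 2.0), (2.0, 3.0, 2.5), (4.0, 2.5, 6.0)]
balanced = select_balanced(solutions)
```

Only the middle solution survives: the first is excellent at station 1 but 3.6 times worse than the best at station 2, and the third fails the same test at station 1.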
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subject to interference from other devices. Many different types of sensor errors, such as outliers, missing values, drifts and corruption with noise, may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals and replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors of people with type 1 diabetes. More than 50,000 CGM sensor errors were added to original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimates computed by the functional redundancy system.
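The outlier-rejection idea behind an outlier-robust Kalman filter can be illustrated with a scalar random-walk filter that gates on the normalized innovation: a measurement far outside the predicted uncertainty is flagged and excluded from the update. This simple sketch stands in for the idea only; it is not the paper's ORKF/LW-PLS system, and the noise values are made up.

```python
def robust_filter(zs, q=0.01, r=1.0, gate=3.0):
    """Scalar Kalman filter (random-walk state model) that flags measurements
    whose normalized innovation exceeds `gate` standard deviations and skips
    the update for them, so one outlier cannot drag the estimate away."""
    x, p = zs[0], 1.0
    est, flags = [x], [False]
    for z in zs[1:]:
        p += q                        # predict: state variance grows by q
        s = p + r                     # innovation variance
        nu = z - x                    # innovation
        if abs(nu) / s ** 0.5 > gate:
            flags.append(True)        # outlier: ignore this measurement
        else:
            k = p / s                 # Kalman gain
            x += k * nu
            p *= (1.0 - k)
            flags.append(False)
        est.append(x)
    return est, flags

# A single spike of 50 in an otherwise flat signal is flagged, not tracked.
est, flags = robust_filter([0.0] * 5 + [50.0] + [0.0] * 4)
```

Because the spike is skipped, the estimate stays near zero and the following normal measurements are accepted again.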
NASA Astrophysics Data System (ADS)
Tomas, A.; Menendez, M.; Mendez, F. J.; Coco, G.; Losada, I. J.
2012-04-01
In the last decades, freak or rogue waves have become an important topic in engineering and science. Forecasting the occurrence probability of freak waves is a challenge for oceanographers, engineers, physicists and statisticians. There are several mechanisms responsible for the formation of freak waves, and different theoretical formulations (primarily based on numerical models with simplifying assumptions) have been proposed to predict the occurrence probability of freak waves in a sea state as a function of N (number of individual waves) and kurtosis (k). On the other hand, different attempts have been made to parameterize k as a function of spectral parameters such as the Benjamin-Feir Index (BFI) and the directional spreading (Mori et al., 2011). The objective of this work is twofold: (1) develop a statistical model to describe the uncertainty of the maximum individual wave height, Hmax, considering N and k as covariates; (2) obtain a predictive formulation to estimate k as a function of aggregated sea state spectral parameters. For both purposes, we use free surface measurements (more than 300,000 20-minute sea states) from the Spanish deep water buoy network (Puertos del Estado, Spanish Ministry of Public Works). Non-stationary extreme value models are nowadays widely used to analyze the time-dependent or directional-dependent behavior of extreme values of geophysical variables such as significant wave height (Izaguirre et al., 2010). In this work, a Generalized Extreme Value (GEV) statistical model for the dimensionless maximum wave height (x=Hmax/Hs) in every sea state is used to assess the probability of freak waves. We allow the location, scale and shape parameters of the GEV distribution to vary as a function of k and N. The kurtosis dependency is parameterized using third-order polynomials and the model is fitted using standard log-likelihood theory, obtaining very good performance in predicting the occurrence probability of freak waves (x>2).
Regarding the second objective of this work, we apply different algorithms using three spectral parameters (wave steepness, directional dispersion, frequency dispersion) as predictors to estimate the probability density function of the kurtosis for a given sea state. ACKNOWLEDGMENTS: The authors thank Puertos del Estado (Spanish Ministry of Public Works) for providing the free surface measurement database.
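The covariate-dependent GEV construction can be sketched as follows: the location parameter is a cubic polynomial in the kurtosis, and the freak-wave probability is the GEV exceedance of x = 2. The polynomial coefficients and scale/shape values below are illustrative placeholders, not the fitted values of the study.

```python
import math

def gev_cdf(x, mu, sigma, xi):
    """CDF of the Generalized Extreme Value distribution; xi = 0 is the
    Gumbel limit."""
    if abs(xi) < 1e-12:
        t = math.exp(-(x - mu) / sigma)
    else:
        arg = 1.0 + xi * (x - mu) / sigma
        if arg <= 0.0:
            return 0.0 if xi > 0 else 1.0
        t = arg ** (-1.0 / xi)
    return math.exp(-t)

def freak_probability(kurtosis, coeffs, sigma, xi, x=2.0):
    """P(Hmax/Hs > x) with the GEV location a cubic polynomial in kurtosis.
    `coeffs` are the polynomial coefficients (constant term first)."""
    mu = sum(c * kurtosis ** i for i, c in enumerate(coeffs))
    return 1.0 - gev_cdf(x, mu, sigma, xi)
```

With any coefficients that make the location increase with kurtosis, the exceedance probability of x > 2 grows with k, which is the qualitative behavior the covariate model is meant to capture.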
Making the most of missing values: object clustering with partial data in astronomy
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Laidler, Victoria G.
2004-01-01
We demonstrate a clustering analysis algorithm, KSC, that a) uses all observed values and b) does not discard the partially observed objects. KSC uses soft constraints defined by the fully observed objects to assist in the grouping of objects with missing values. We present an analysis of objects taken from the Sloan Digital Sky Survey to demonstrate how imputing the values can be misleading and why the KSC approach can produce more appropriate results.
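The core of clustering with partial data, comparing objects only on their mutually observed dimensions instead of imputing, can be sketched as below. KSC itself additionally uses soft constraints derived from the fully observed objects; this sketch shows only the masked-distance assignment step, and the rescaling heuristic is a common choice rather than the paper's.

```python
import math

def masked_distance(a, b):
    """Euclidean distance over the dimensions observed (not None) in both
    objects, rescaled to the full dimensionality so sparsely observed
    objects are not artificially close (a common partial-data heuristic)."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    if not pairs:
        return float("inf")
    d2 = sum((x - y) ** 2 for x, y in pairs)
    return math.sqrt(d2 * len(a) / len(pairs))

def assign(objects, centers):
    """Assign each (possibly partially observed) object to its nearest center."""
    return [min(range(len(centers)),
                key=lambda j: masked_distance(o, centers[j]))
            for o in objects]

# Two objects missing their second coordinate are still grouped sensibly.
centers = [(0.0, 0.0), (10.0, 10.0)]
objects = [(1.0, None), (9.0, None), (0.0, 1.0)]
labels = assign(objects, centers)
```

Both partially observed objects land in the cluster suggested by their observed coordinate, without any value being invented for the missing one.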
NASA Astrophysics Data System (ADS)
Estevez-Delgado, Gabino; Estevez-Delgado, Joaquin
2018-05-01
An analysis and construction is presented for a stellar model characterized by two parameters (w, n) associated with the compactness ratio and the anisotropy, respectively. The reliability range for the parameter, w ≤ 1.97981225149, corresponds to a compactness ratio u ≤ 0.2644959374; the density and pressures are positive, regular and monotonically decreasing functions, and the radial and tangential speeds of sound are lower than the speed of light, consistent with plausible stability. The behavior of the speeds of sound is determined by the anisotropy parameter n, which admits one subinterval where the speeds are monotonically increasing functions and another where they are monotonically decreasing, both cases describing a compact object that is also potentially stable. For the largest value of the observational mass, M = 2.05 M⊙ with radius R = 12.957 km for the star PSR J0348+0432, the model indicates that the maximum central density ρc = 1.283820319 × 10^18 kg/m3 corresponds to the maximum value of the anisotropy parameter, with the radial and tangential speeds of sound being monotonically decreasing functions.
Optimizing Functional Network Representation of Multivariate Time Series
NASA Astrophysics Data System (ADS)
Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco Del; Menasalvas, Ernestina; Boccaletti, Stefano
2012-09-01
By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks.
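The reconstruction step being optimized can be sketched as follows: a correlation (or other connectivity) matrix is binarized at a threshold to obtain an adjacency matrix, and network indicators are then computed on it. The paper's contribution is the principled choice of that threshold for discrimination; this sketch shows only the thresholding and one indicator, with all names assumed.

```python
import numpy as np

def functional_network(corr, threshold):
    """Binarize a correlation matrix into an adjacency matrix: nodes are
    channels, and an edge means |correlation| exceeds the threshold."""
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)          # no self-loops
    return adj

def link_density(adj):
    """Fraction of possible (directed) node pairs that are connected."""
    n = len(adj)
    return adj.sum() / (n * (n - 1))

corr = np.array([[1.0, 0.8, 0.1],
                 [0.8, 1.0, 0.5],
                 [0.1, 0.5, 1.0]])
adj = functional_network(corr, 0.4)
```

Sweeping `threshold` and evaluating indicators such as `link_density` on labelled groups is the kind of search the paper turns into an objective classification-driven criterion.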
Hybrid Optimization Parallel Search PACKage
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-11-10
HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
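The flavor of Generating Set Search can be conveyed with a minimal serial sketch: poll the positive and negative coordinate directions, move to an improving point if one is found, and halve the step otherwise. HOPSPACK's real GSS adds parallel evaluation, caching, and constraint handling; none of that is reproduced here, and the function name is an assumption.

```python
def gss_minimize(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Derivative-free Generating Set Search sketch for unconstrained
    minimization: poll the +/- coordinate directions, accept the first
    improving trial point, and halve the step when no direction improves."""
    x = list(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                if f(trial) < f(x):
                    x, improved = trial, True
                    break
            if improved:
                break
        if not improved:
            step /= 2.0               # refine the mesh
            if step < tol:
                break
    return x

# Minimize a smooth bowl with minimum at (1, -2) using no derivatives.
result = gss_minimize(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                      [0.0, 0.0])
```

Each poll step is an independent function evaluation, which is what makes the pattern easy to farm out over MPI or threads in the real framework.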
Craft in America: A Journey to the Artists, Origins and Work of American Craft
ERIC Educational Resources Information Center
SchoolArts: The Art Education Magazine for Teachers, 2007
2007-01-01
Quilting, the tradition of stitching together layers of fabric and padding, probably began as a way to provide protection in clothing, but one most often associates quilts with the warmth and comfort of bedding. As so often happens with objects created for a particular purpose, quilts have come to be valued, not only for their function, but also…
NASA Astrophysics Data System (ADS)
Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco
2017-04-01
Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations, the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm.
This process is repeated until a threshold in the objective function is met or insufficient changes are produced in successive iterations.
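The unit-circle mixing step can be sketched as below: cos/sin weights keep the mixture on the unit circle, so a combination of two unit-variance fields again has unit variance, and candidate angles can be scanned for the one that best reduces the objective. In this sketch the objective is evaluated directly at each angle; the paper's refinement, interpolating the forward-model solutions around the circle instead, is what saves the expensive model runs. Names and the number of angles are assumptions.

```python
import math

def mix(f1, f2, theta):
    """Weight two fields by (cos theta, sin theta); since the weights lie on
    the unit circle, the spatial covariance structure is preserved."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * a + s * b for a, b in zip(f1, f2)]

def best_angle(f1, f2, objective, n=16):
    """Scan n equally spaced angles and keep the one whose mixture gives the
    smallest objective value (direct evaluation, not the paper's cheaper
    interpolation of forward-model solutions)."""
    angles = [2.0 * math.pi * k / n for k in range(n)]
    return min(angles, key=lambda t: objective(mix(f1, f2, t)))

# Orthogonal unit "fields": any mixture keeps unit norm.
g = mix([1.0, 0.0], [0.0, 1.0], 0.7)
theta = best_angle([1.0, 0.0], [0.0, 1.0], lambda h: abs(h[0]))
```

The chosen angle zeroes the weight of the first field, the best the two-field mixture can do for this toy objective; the winning mixture would then be paired with a fresh field in the next iteration.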
Tractable Pareto Optimization of Temporal Preferences
NASA Technical Reports Server (NTRS)
Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent
2003-01-01
This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
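Over an explicit finite set of candidate solutions, iterating Weakest Link Optimization amounts to a leximin comparison: maximize the worst preference value, then the second worst, and so on. The sketch below shows that comparison criterion only, not the paper's tractable temporal-CSP algorithm; the function name is an assumption.

```python
def leximin_best(solutions):
    """Pick the solution preferred by iterated Weakest Link Optimization:
    compare sorted preference vectors lexicographically, so the worst value
    is maximized first, then the second worst, and so on. The winner is
    Pareto optimal."""
    return max(solutions, key=lambda s: sorted(s))

# Plain WLO cannot distinguish (2,2,2) from (4,2,2) (both have worst value
# 2); iterating it on the remaining values prefers (4,2,2).
best = leximin_best([(3, 1, 5), (2, 2, 2), (4, 2, 2)])
```

This illustrates the paper's point: re-applying WLO breaks the ties that single-shot WLO leaves, forcing improvement of the non-worst values as well.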
The generalized quadratic knapsack problem. A neuronal network approach.
Talaván, Pedro M; Yáñez, Javier
2006-05-01
The solution of an optimization problem through the continuous Hopfield network (CHN) is based on some energy or Lyapunov function, which decreases as the system evolves until a local minimum value is attained. A new energy function is proposed in this paper so that any 0-1 programming problem with linear constraints and a quadratic objective function can be solved. This problem, denoted the generalized quadratic knapsack problem (GQKP), includes as particular cases well-known problems such as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). This new energy function generalizes those proposed by other authors. Through this energy function, any GQKP can be solved with an appropriate parameter-setting procedure, which is detailed in this paper. As a particular case, and in order to test this generalized energy function, some computational experiments solving the traveling salesman problem are also included.
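The general shape of such an energy function can be sketched as the quadratic objective plus a quadratic penalty for the linear constraints: with a large enough penalty, minima of the energy coincide with feasible minima of the objective. This is a generic construction for illustration, not the paper's specific energy function or its parameter-setting procedure.

```python
import itertools
import numpy as np

def gqkp_energy(v, Q, c, A, b, penalty):
    """Energy for a 0-1 quadratic program: 0.5 v^T Q v + c^T v plus a
    quadratic penalty for violating the linear constraints A v = b."""
    obj = 0.5 * v @ Q @ v + c @ v
    infeas = A @ v - b
    return float(obj + penalty * infeas @ infeas)

# Toy instance: minimize -v1 - 2*v2 subject to v1 + v2 = 1 over {0,1}^2.
Q = np.zeros((2, 2))
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
best = min(itertools.product([0, 1], repeat=2),
           key=lambda v: gqkp_energy(np.array(v, float), Q, c, A, b, 10.0))
```

Enumeration is used only to check the toy energy landscape; a Hopfield network would instead descend this energy continuously, which is why tuning the penalty (the paper's parameter-setting procedure) matters.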
An effective and comprehensive model for optimal rehabilitation of separate sanitary sewer systems.
Diogo, António Freire; Barros, Luís Tiago; Santos, Joana; Temido, Jorge Santos
2018-01-15
In the field of rehabilitation of separate sanitary sewer systems, a large number of technical, environmental, and economic aspects are often relevant in the decision-making process, which may be modelled as a multi-objective optimization problem. Examples are those related with the operation and assessment of networks, optimization of structural, hydraulic, sanitary, and environmental performance, rehabilitation programmes, and execution works. In particular, the cost of investment, operation and maintenance needed to reduce or eliminate Infiltration from the underground water table and Inflows of storm water surface runoff (I/I) using rehabilitation techniques or related methods can be significantly lower than the cost of transporting and treating these flows throughout the lifespan of the systems or period studied. This paper presents a comprehensive I/I cost-benefit approach for rehabilitation that explicitly considers all elements of the systems and shows how the approximation is incorporated as an objective function in a general evolutionary multi-objective optimization model. It takes into account network performance and wastewater treatment costs, average values of several input variables, and rates that can reflect the adoption of different predictable or limiting scenarios. The approach can be used as a practical and fast tool to support decision-making in sewer network rehabilitation in any phase of a project. The fundamental aspects, modelling, implementation details and preliminary results of a two-objective optimization rehabilitation model using a genetic algorithm, with a second objective function related to the structural condition of the network and the service failure risk, are presented. The basic approach is applied to three real-world case studies of sanitary sewerage systems in Coimbra, and the results show the simplicity, suitability, effectiveness, and usefulness of the approximation implemented and of the objective function proposed.
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition as the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and failure detection in a polycrystalline alloy material.
Mulder, Christian; Maas, Rob
2017-11-28
Sustainable use of our soils is a key goal for environmental protection. As many ecosystem services are supported belowground at different trophic levels by nematodes, soil nematodes are expected to provide objective metrics for biological quality that integrate physical and chemical soil variables. Trait measurements of body mass carried out at the individual level can in this way be correlated with environmental properties that influence the performance of soil biota. Soil samples were collected across 200 sites (4 soil types and 5 land-use types resulting in 9 combinations) during a long-term monitoring programme in the Netherlands, and the functional diversity of nematode communities was investigated. Using three commonly used functional diversity indices applicable to single traits (Divergence, Evenness and Richness), a unified index of overall body-mass distribution is proposed to better illustrate the application of functional metrics as a descriptor of land use. Effects of land use and soil chemistry on the functional diversity of nematodes were demonstrated: a combination of environmental factors accounts for the low functional value of Scots Pine forest soils in comparison to the high functional value of heathland soils, whereas human factors account for the low functional and chemical values of arable fields. These findings show an unexpectedly high functional vulnerability of nematodes inhabiting clay-rich soils in comparison to sandy soils and support the notion that the soil C:N ratio is a major driver of biodiversity. The higher the C:N ratio, the higher the overall diversity, as soil nematodes cope better with nutrient-poor agroecosystems under less intense fertilization. A trait-based approach focusing on the size distribution of nematodes is proposed to maintain environmental health by monitoring the overall diversity of soil biota, keeping agriculture and forestry sustainable.
Pulmonary function studies in young healthy Malaysians of Kelantan, Malaysia
Bandyopadhyay, Amit
2011-01-01
Background & objectives: Pulmonary function tests have evolved as clinical tools in the diagnosis, management and follow-up of respiratory diseases, as they provide objective information about the status of an individual's respiratory system. The present study was aimed to evaluate pulmonary function among male and female young Kelantanese Malaysians of Kota Bharu, Malaysia, and to compare the data with other populations. Methods: A total of 128 (64 males, 64 females) non-smoking healthy young subjects were randomly sampled for the study from the Kelantanese students’ population of the University Sains Malaysia, Kota Bharu Campus, Kelantan, Malaysia. The study population (20-25 yr age group) had a similar socio-economic background. Each subject filled up the ATS (1978) questionnaire to record their personal demographic data, health status and consent to participate in the study. Subjects with any history of pulmonary diseases were excluded from the study. Results: The pulmonary function measurements exhibited significantly higher values among males than females. FEV1% did not show any significant inter-group variation, probably because the parameter expresses FEV1 as a percentage of FVC. FVC and FEV1 exhibited significant correlations with body height and body mass among males, whereas in females they exhibited significant correlations with body mass, body weight and also with age. FEV1% exhibited significant correlation with body height and body mass among males and with body height in females. FEF25-75% did not show any significant correlation except with body height among females. However, PEFR exhibited significant positive correlation with all the physical parameters except age among the females. On the basis of the significant correlations between different physical parameters and pulmonary function variables, simple and multiple regression norms have been computed.
Interpretation & conclusions: From the present investigation it can be concluded that Kelantanese Malaysian youths of both sexes have pulmonary function within the normal range, and the computed regression norms may be used to predict pulmonary function values in the studied population. PMID:22199104
Design of Distortion-Invariant Optical ID Tags for Remote Identification and Verification of Objects
NASA Astrophysics Data System (ADS)
Pérez-Cabré, Elisabet; Millán, María Sagrario; Javidi, Bahram
Optical identification (ID) tags [1] have a promising future in a number of applications such as the surveillance of vehicles in transportation, control of restricted areas for homeland security, item tracking on conveyor belts or in other industrial environments, etc. More specifically, the passive optical ID tag [1] was introduced as an optical code containing a signature (that is, a characteristic image or other relevant information about the object), which permits its real-time remote detection and identification. Since their introduction in the literature [1], some contributions have been proposed to increase their usefulness and robustness. To increase security and avoid counterfeiting, the signature was introduced in the optical code as an encrypted function [2-5] following the double-phase encryption technique [6]. Moreover, the optical ID tag was designed in such a way that tolerance to variations in scale and rotation was achieved [2-5]. To do that, the encrypted information was multiplexed and distributed in the optical code following an appropriate topology. Further studies were carried out to analyze the influence of different sources of noise. In some proposals [5, 7], the designed ID tag consists of two optical codes where the complex-valued encrypted signature was separately introduced in two real-valued functions according to its magnitude and phase distributions. This solution was introduced to overcome some difficulties in the readout of complex values in outdoor environments. Recently, the fully phase encryption technique [8] has been proposed to increase the noise robustness of the authentication system.
Sleep enhances a spatially mediated generalization of learned values
Tolat, Anisha; Spiers, Hugo J.
2015-01-01
Sleep is thought to play an important role in memory consolidation. Here we tested whether sleep alters the subjective value associated with objects located in spatial clusters that were navigated to in a large-scale virtual town. We found that sleep enhances a generalization of the value of high-value objects to the value of locally clustered objects, resulting in an impaired memory for the value of high-valued objects. Our results are consistent with (a) spatial context helping to bind items together in long-term memory and serve as a basis for generalizing across memories and (b) sleep mediating memory effects on salient/reward-related items. PMID:26373834
Design and analysis of all-dielectric subwavelength focusing flat lens
NASA Astrophysics Data System (ADS)
Turduev, M.; Bor, E.; Kurt, H.
2017-09-01
In this letter, we numerically designed and experimentally demonstrated a compact photonic structure for the subwavelength focusing of light using all-dielectric, absorption-free and nonmagnetic scattering objects distributed in an air medium. In order to design the subwavelength focusing flat lens, an evolutionary algorithm is combined with the finite-difference time-domain method for determining the locations of cylindrical scatterers. During the multi-objective optimization process, a specific objective function is defined to reduce the full width at half maximum (FWHM) and diminish the side lobe level (SLL) of light at the focal point. The time-domain response of the optimized flat lens exhibits subwavelength light focusing with an FWHM value of 0.19λ and an SLL value of 0.23, where λ denotes the operating wavelength of light. Experimental analysis of the proposed flat lens is conducted in the microwave regime, and the findings verify the numerical results with an FWHM of 0.192λ and an SLL value of 0.311 at the operating frequency of 5.42 GHz. Moreover, the designed flat lens provides a broadband subwavelength focusing effect with a 9% bandwidth covering the frequency range of 5.10 GHz-5.58 GHz, where the corresponding FWHM values remain under 0.21λ. It is also important to note that the designed flat lens produces a line-focusing effect. Possible applications of the designed structure at telecom wavelengths are discussed as future perspectives. Namely, the designed structure can perform well in photonic integrated circuits for different fields of application, such as high-efficiency light coupling, imaging and optical microscopy, with its compact size and ability for strong focusing.
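The two figures of merit entering the objective function can be computed from a sampled intensity profile as follows. This is generic post-processing, not the authors' code: the FWHM is taken by linear interpolation at the half-maximum level, and the SLL as the largest local maximum away from the main lobe (the 2-sample exclusion window around the peak is a crude assumption).

```python
def fwhm_and_sll(xs, ys):
    """FWHM (linear interpolation at half maximum; assumes the profile drops
    below half maximum on both sides of the peak) and side-lobe level
    (largest local maximum outside the main lobe, as a fraction of the peak)."""
    peak = max(ys)
    ip = ys.index(peak)
    half = peak / 2.0
    i = ip
    while ys[i] > half:               # walk left to the half-maximum crossing
        i -= 1
    left = xs[i] + (half - ys[i]) * (xs[i + 1] - xs[i]) / (ys[i + 1] - ys[i])
    j = ip
    while ys[j] > half:               # walk right to the half-maximum crossing
        j += 1
    right = xs[j - 1] + (half - ys[j - 1]) * (xs[j] - xs[j - 1]) / (ys[j] - ys[j - 1])
    side = [ys[k] for k in range(1, len(ys) - 1)
            if ys[k] >= ys[k - 1] and ys[k] >= ys[k + 1] and abs(k - ip) > 2]
    sll = max(side) / peak if side else 0.0
    return right - left, sll

# Symmetric main lobe of width 2 with side lobes at 30% of the peak.
xs = list(range(9))
ys = [0.0, 0.3, 0.0, 0.5, 1.0, 0.5, 0.0, 0.3, 0.0]
width, sll = fwhm_and_sll(xs, ys)
```

A multi-objective optimizer would drive both returned values down simultaneously, which is the trade-off the paper's evolutionary search explores.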
Activity inhibition on municipal activated sludge by single-walled carbon nanotubes
NASA Astrophysics Data System (ADS)
Parise, Alex; Thakor, Harshrajsinh; Zhang, Xiaoqi
2014-01-01
The objective of this study was to evaluate the respiratory activity inhibition of activated sludge used in a typical wastewater treatment plant by single-walled carbon nanotubes (SWCNTs) of different lengths and functionalities. Four types of SWCNTs were evaluated: short, functionalized short, long, and functionalized long. Based on the effective concentration (EC50) values obtained, we determined that functionalized SWCNTs resulted in a higher microbial respiratory inhibition than non-functionalized nanotubes, and long SWCNTs gave a higher microbial respiratory inhibition than their short counterparts. Among the four types of SWCNTs studied, the functionalized long type exhibited the highest respiration inhibition. Scanning electron microscopy imaging indicates that the long SWCNTs dispersed more favorably after sonication than the short variety. The findings demonstrated that the toxicity of CNTs (exhibited by respiratory inhibition) is related to their physical properties; the length and functionality of SWCNTs affected their toxicity in a mixed-culture biological system.
Finding Specification Pages from the Web
NASA Astrophysics Data System (ADS)
Yoshinaga, Naoki; Torisawa, Kentaro
This paper presents a method of finding a specification page on the Web for a given object (e.g., ``Ch. d'Yquem'') and its class label (e.g., ``wine''). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., ``county''-``Sauternes'') in well-formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., ``county'' for the class ``wine''). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we call site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in terms of specification retrieval.
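The site-frequency idea can be sketched in a few lines: score each candidate attribute by the number of distinct sites (not pages) that exhibit it, so a single prolific author cannot inflate a score. The data layout and names below are illustrative, not the paper's.

```python
from collections import defaultdict

def site_frequency(pages):
    """For each candidate attribute, count the number of distinct
    sites whose pages mention it; many pages from one site (one
    author) count only once."""
    sites_per_attr = defaultdict(set)
    for site, attrs in pages:          # pages: (site, {attributes}) records
        for attr in attrs:
            sites_per_attr[attr].add(site)
    return {attr: len(sites) for attr, sites in sites_per_attr.items()}

pages = [
    ("a.example", {"county", "price", "menu"}),
    ("a.example", {"county", "menu"}),     # same site: counted once
    ("b.example", {"county", "price"}),
    ("c.example", {"county"}),
]
scores = site_frequency(pages)
# "county" appears on three distinct sites, "menu" only on one,
# so a threshold on site frequency filters out site-specific noise.
```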
Houts, A C
2001-09-01
Wakefield's claims to have identified an objective scientific component of mental disorders in the concept of dysfunction are examined in light of previous attempts to state a value-free concept of mental disorders. The harmful dysfunction concept of dysfunction is not value-free, because it confounds cause and purpose in a specious use of evolutionary theory, and because evolutionary theory cannot reliably supply standards for when a function is broken. Harmful dysfunction analysis collapses into a value-laden concept of mental disorders and serves the untoward goal of promoting the status quo in the modern DSMs. If the concept of dysfunction were taken seriously and rigorously defined, then it might be possible to separate what is medical from what is not in the domain of mental disorders.
Enhancing High Value Care in Gastroenterology Practice.
Camilleri, Michael; Katzka, David A
2016-10-01
The objective of this review is to identify common areas in gastroenterology practice where studies performed provide an opportunity for enhancing value or lowering costs. We provide examples of topics in gastroenterology where clinicians could enhance value by either using less invasive testing, choosing a single best test, or by using patient symptoms to guide additional testing. The topics selected for review are selected in esophageal, pancreatic, and colorectal cancer; functional gastrointestinal diseases (irritable bowel syndrome, bacterial overgrowth, constipation); immune-mediated gastrointestinal diseases; and pancreaticobiliary pathology. We propose guidance to alter practice based on current evidence. These studies support the need to review current practice and to continue performing research to further validate the proposed guidance to enhance value of care in gastroenterology and hepatology. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
High level functions for the intuitive use of an assistive robot.
Lebec, Olivier; Ben Ghezala, Mohamed Walid; Leynart, Violaine; Laffont, Isabelle; Fattal, Charles; Devilliers, Laurence; Chastagnol, Clement; Martin, Jean-Claude; Mezouar, Youcef; Korrapatti, Hermanth; Dupourqué, Vincent; Leroux, Christophe
2013-06-01
This document presents the research project ARMEN (Assistive Robotics to Maintain Elderly People in a Natural environment), aimed at the development of a user-friendly robot with advanced functions for assistance to elderly or disabled persons at home. Focus is given to the robot SAM (Smart Autonomous Majordomo) and its new features of navigation, manipulation, object recognition, and knowledge representation developed for the intuitive supervision of the robot. The results of the technical evaluations show the value and potential of these functions for practical applications. The paper also documents the details of the clinical evaluations carried out with elderly and disabled persons in a therapeutic setting to validate the project.
Optimal path planning for a mobile robot using cuckoo search algorithm
NASA Astrophysics Data System (ADS)
Mohanty, Prases K.; Parhi, Dayal R.
2016-03-01
The shortest/optimal path planning is essential for the efficient operation of autonomous vehicles. In this article, a new nature-inspired meta-heuristic algorithm has been applied to mobile robot path planning in an unknown or partially known environment populated by a variety of static obstacles. The meta-heuristic is based on the Lévy flight behaviour and brood-parasitic behaviour of cuckoos. A new objective function has been formulated over the robot, the target, and the obstacles, which satisfies the conditions of obstacle avoidance and target-seeking behaviour of the robots present in the terrain. Depending upon the objective function value of each nest (cuckoo) in the swarm, the robot avoids obstacles and proceeds towards the target. A smooth optimal trajectory is produced by this algorithm as the robot reaches its goal. Simulation and experimental results are presented at the end of the paper to show the effectiveness of the proposed navigational controller.
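A minimal sketch of the kind of objective function and cuckoo-style update the abstract describes, with assumed weights and a crude heavy-tailed step standing in for a proper Lévy flight (all names and constants are illustrative, not the paper's):

```python
import math, random

def objective(pos, target, obstacles, safe_r=1.0, w_t=1.0, w_o=5.0):
    """Hypothetical path-planning objective (lower is better):
    reward target-seeking, penalize proximity to obstacles."""
    d_target = math.dist(pos, target)
    penalty = 0.0
    for obs in obstacles:
        d = math.dist(pos, obs)
        if d < safe_r:                     # inside the safety radius
            penalty += (safe_r - d) / safe_r
    return w_t * d_target + w_o * penalty

def levy_step(scale=0.1):
    """Crude heavy-tailed step, a stand-in for a Levy flight."""
    u = 1.0 - random.random()              # u in (0, 1]
    return scale / (u ** 0.5) * random.choice((-1.0, 1.0))

random.seed(0)
target, obstacles = (10.0, 10.0), [(5.0, 5.0)]
nests = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(15)]
for _ in range(200):                       # cuckoo-style search loop
    i = random.randrange(len(nests))
    x, y = nests[i]
    cand = (x + levy_step(), y + levy_step())
    j = random.randrange(len(nests))       # compare against a random nest
    if objective(cand, target, obstacles) < objective(nests[j], target, obstacles):
        nests[j] = cand                    # better egg replaces a worse nest
best = min(nests, key=lambda p: objective(p, target, obstacles))
```

Each nest can only be replaced by a strictly better candidate, so the best objective value in the swarm is non-increasing over iterations.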
NASA Astrophysics Data System (ADS)
Liu, Yiping; Xu, Qing; Zhang, Heng; Lv, Liang; Lu, Wanjie; Wang, Dandi
2016-11-01
The purpose of this paper is to address the shortcomings of traditional single-purpose systems for interpretation and draughting, such as inconsistent standards, limited functionality, dependence on plug-ins, closed architecture and a low level of integration. On the basis of a comprehensive analysis of target-element composition, map representation and the features of similar systems, a 3D interpretation and draughting integrated service platform for multi-source, multi-scale and multi-resolution geospatial objects is established based on HTML5 and WebGL. The platform not only integrates object recognition, access, retrieval, three-dimensional display and test evaluation, but also supports the collection, transfer, storage, refreshing and maintenance of geospatial-object data, and shows clear prospects and potential for growth.
NASA Astrophysics Data System (ADS)
Dong, Shidu; Yang, Xiaofan; He, Bo; Liu, Guojin
2006-11-01
Radiance coming from the interior of an uncooled infrared camera has a significant effect on the measured value of the temperature of the object. This paper presents a three-phase compensation scheme for coping with this effect. The first phase acquires the calibration data and forms the calibration function by least-squares fitting. Likewise, the second phase obtains the compensation data and builds the compensation function by fitting. With the aid of these functions, the third phase determines the temperature of the object of interest at any given ambient temperature. It is known that acquiring the compensation data of a camera is very time-consuming. For the purpose of obtaining the compensation data at a reasonable time cost, we propose a transplantable scheme. The idea of this scheme is to calculate the ratio between the central pixel's responsivity of the child camera to the radiance from the interior and that of the mother camera, followed by determining the compensation data of the child camera using this ratio and the compensation data of the mother camera. Experimental results show that both the child camera and the mother camera can measure the temperature of the object with an error of no more than 2°C.
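The three phases can be illustrated with a toy least-squares fit; the linear signal and offset models below are assumptions for the sketch, not the paper's calibration curves:

```python
import numpy as np

# Phase 1: calibration - relate detector signal to object temperature
# at a fixed reference ambient temperature (synthetic data for the sketch).
t_obj = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
sig_cal = 2.0 * t_obj + 5.0                      # pretend measured signals
cal = np.polyfit(sig_cal, t_obj, 1)              # temperature = f(signal)

# Phase 2: compensation - signal offset contributed by the camera's own
# interior radiance as a function of ambient temperature (also synthetic).
t_amb = np.array([0.0, 10.0, 20.0, 30.0])
offset = 0.5 * t_amb                             # pretend offsets
comp = np.polyfit(t_amb, offset, 1)              # offset = g(ambient)

def measure(signal, ambient):
    """Phase 3: subtract the ambient-dependent offset, then apply the
    calibration function to recover the object temperature."""
    corrected = signal - np.polyval(comp, ambient)
    return np.polyval(cal, corrected)

# A raw signal of 2*25+5 = 55 plus a 0.5*20 = 10 interior-radiance
# offset at 20 C ambient should map back to about 25 C.
```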
Fast periodic stimulation (FPS): a highly effective approach in fMRI brain mapping.
Gao, Xiaoqing; Gentile, Francesco; Rossion, Bruno
2018-06-01
Defining the neural basis of perceptual categorization in a rapidly changing natural environment with low-temporal-resolution methods such as functional magnetic resonance imaging (fMRI) is challenging. Here, we present a novel fast periodic stimulation (FPS)-fMRI approach to define face-selective brain regions with natural images. Human observers are presented with a dynamic stream of widely variable natural object images alternating at a fast rate (6 images/s). Every 9 s, a short burst of variable face images contrasting with the object images induces an objective face-selective neural response at 0.111 Hz. A model-free Fourier analysis achieves a twofold increase in signal-to-noise ratio compared to a conventional block-design approach with identical stimuli and scanning duration, allowing us to derive a comprehensive map of face-selective areas in the ventral occipito-temporal cortex, including the anterior temporal lobe (ATL), in all individual brains. Critically, the periodicity of the desired category contrast and the random variability among widely diverse images effectively eliminate the contribution of low-level visual cues and lead to the highest values (80-90%) of test-retest reliability in the spatial activation map yet reported in imaging higher-level visual functions. FPS-fMRI opens a new avenue for understanding brain function with low-temporal-resolution methods.
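The model-free Fourier analysis amounts to reading out the spectrum at the known stimulation frequency and comparing it to neighbouring bins. A sketch on synthetic data, with an assumed TR of 2 s and arbitrary amplitudes (not the study's acquisition parameters):

```python
import numpy as np

fs = 1.0 / 2.0        # one fMRI volume every 2 s (TR = 2 s, an assumption)
n = 450               # a 900 s run
t = np.arange(n) / fs
f_stim = 1.0 / 9.0    # face bursts every 9 s -> 0.111 Hz

rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 0.3, n)

spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
k = int(np.argmin(np.abs(freqs - f_stim)))   # bin of the stimulation frequency

# z-score the target bin against 20 neighbouring bins on each side,
# skipping the bins immediately adjacent to the target
neighbours = np.r_[spec[k - 21:k - 1], spec[k + 2:k + 22]]
z = (spec[k] - neighbours.mean()) / neighbours.std()
```

Because the run length is an exact multiple of the 9 s cycle, the response falls on a single frequency bin, which is what makes the periodic design so efficient.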
New convergence results for the scaled gradient projection method
NASA Astrophysics Data System (ADS)
Bonettini, S.; Prato, M.
2015-09-01
The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP, equipped with a suitable choice of the scaling matrix, is a very effective tool for solving large-scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak convergence theorem was available, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumption that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, provided the sequence of scaling matrices satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove an O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule also performs well from the computational point of view.
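A minimal sketch of an SGP iteration for nonnegative least squares, with the diagonal scaling clipped inside shrinking bounds in the spirit of the bounded-scaling convergence condition (the specific scaling rule and bound schedule here are illustrative, not the paper's):

```python
import numpy as np

def sgp(A, b, iters=500):
    """Scaled-gradient-projection sketch for
    min 0.5*||Ax - b||^2  subject to x >= 0.
    The diagonal scaling D_k is clipped to [1/L_k, L_k] with L_k -> 1,
    mimicking a simple implementable bounded-scaling condition."""
    n = A.shape[1]
    x = np.ones(n)
    alpha = 0.5 / np.linalg.norm(A.T @ A, 2)      # conservative step size
    for k in range(iters):
        grad = A.T @ (A @ x - b)
        L_k = 1.0 + 1.0 / (k + 1) ** 1.1          # scaling bound shrinks to 1
        d = np.clip(x / (A.T @ (A @ x) + 1e-12), 1.0 / L_k, L_k)
        x = np.maximum(x - alpha * d * grad, 0.0) # projection onto x >= 0
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(20, 5))   # positive data keeps the classic
x_true = rng.uniform(0.5, 2.0, size=5)    # x/(A^T A x) scaling well defined
b = A @ x_true
x_hat = sgp(A, b)
```

For a diagonal scaling and box constraints, projection in the scaled metric coincides with componentwise clipping, which is why the `np.maximum` projection is legitimate here.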
Liao, Xuan; Lin, Jia; Tian, Jing; Wen, BaiWei; Tan, QingQing; Lan, ChangJun
2018-06-01
To compare the objective optical quality, ocular scattering and aberrations of eyes implanted with an aspheric monofocal intraocular lens (IOL) or an aspheric apodized diffractive multifocal IOL three months after surgery. Prospective consecutive nonrandomized comparative cohort study. A total of 80 eyes from 57 cataract patients were bilaterally or unilaterally implanted with monofocal (AcrySof IQ SN60WF) or multifocal (AcrySof IQ ReSTOR SN6AD1) IOLs: 40 eyes of 27 patients received monofocal IOLs, and 40 eyes of 30 patients received multifocal IOLs. Ocular high-order aberration (HOA) values were obtained using a Hartmann-Shack aberrometer; the objective scatter index (OSI), modulation transfer function (MTF) cutoff, Strehl ratio (SR), and contrast visual acuity OV at 100%, 20%, and 9% were measured using the Objective Quality Analysis System II (OQAS II). Ocular aberrations were similar in both groups (p > 0.05). However, significantly higher values of OSI and lower values of MTF cutoff, SR and OV were found in the SN6AD1 group (p < 0.05). Both ocular scattering and wave-front aberrations play an essential role in retinal image quality, which may be overestimated when only aberrations are taken into account. Combining the effect of ocular scattering with HOA will result in a more accurate assessment of visual and optical quality.
Economic selection index development for Beefmaster cattle II: General-purpose breeding objective.
Ochsner, K P; MacNeil, M D; Lewis, R M; Spangler, M L
2017-05-01
An economic selection index was developed for Beefmaster cattle in a general-purpose production system in which bulls are mated to a combination of heifers and mature cows, with the resulting progeny retained as replacements or sold at weaning. National average prices from 2010 to 2014 were used to establish income and expenses for the system. Genetic parameters were obtained from the literature. Economic values were estimated by simulating 100,000 animals and approximating the partial derivatives of the profit function by perturbing traits one at a time, by one unit, while holding the other traits constant at their respective means. Relative economic values for the objective traits calving difficulty direct (CDd), calving difficulty maternal (CDm), weaning weight direct (WWd), weaning weight maternal (WWm), mature cow weight (MW), and heifer pregnancy (HP) were -2.11, -1.53, 18.49, 11.28, -33.46, and 1.19, respectively. Consequently, under the scenario assumed herein, the greatest improvements in profitability could be made by decreasing the maintenance energy costs associated with MW, followed by improvements in weaning weight. The accuracy of the index lies between 0.218 (phenotype-based index selection) and 0.428 (breeding values known without error). Implementation of this index would facilitate genetic improvement and increase the profitability of Beefmaster cattle operations with a general-purpose breeding objective when replacement females are retained and weaned calves are the sale end point.
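The perturbation recipe for economic values can be shown with a toy linear profit function (the coefficients and traits below are invented for the sketch; the paper's bio-economic model is far richer):

```python
def profit(traits):
    """Hypothetical per-animal profit model (not the paper's): income
    from weaning weight, minus feed costs that grow with mature weight,
    minus losses from calving difficulty."""
    ww, mw, cd = traits["WWd"], traits["MW"], traits["CDd"]
    return 2.6 * ww - 0.55 * mw - 14.0 * cd

means = {"WWd": 230.0, "MW": 550.0, "CDd": 1.0}

def economic_value(trait):
    """Perturb one trait by one unit, holding the others at their
    means, and take the change in profit."""
    bumped = dict(means)
    bumped[trait] += 1.0
    return profit(bumped) - profit(means)

values = {t: economic_value(t) for t in means}
# With a linear profit model the perturbation recovers the
# coefficients exactly; with a simulated nonlinear model it
# approximates the partial derivatives at the trait means.
```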
Qi, N; Cui, Y; Liu, J C; Yu, M; Teng, G J
2017-10-24
Objective: To investigate the changes of resting brain function over time in patients with type 2 diabetes mellitus (T2DM) by using regional homogeneity (ReHo) with resting-state functional magnetic resonance imaging (rs-fMRI). Methods: Multidimensional cognitive function tests and rs-fMRI scans were performed in 21 T2DM patients and 12 healthy controls in 2012 and 2015, respectively. The differences in clinical variables and ReHo values between the two time points were measured by paired-sample t test, and the correlation between the change of ReHo value and the change of clinical variables was measured by voxel-based Pearson correlation analysis. Results: The delayed score (14±6) of the T2DM patients in 2015 was significantly lower than that in 2012 (18±6) (t=-2.88, P=0.009), while the ReHo value in the bilateral occipital lobes and right middle frontal gyrus was significantly lower than that in 2012 (P<0.01, AlphaSim correction). The decreased ReHo value in the left occipital lobe was significantly correlated with the changes in the complex figure test (CFT) delay score and the trail making test-B (TMT-B) (r=0.52, -0.46, both P<0.05). No significant change in cognitive function tests was found in the healthy control group between the two years; the ReHo value in the right cuneus decreased significantly (P<0.01, AlphaSim correction), while it increased significantly in the superior frontal gyrus (P<0.01, AlphaSim correction) in 2015. No significant correlation between the changes of the ReHo values in the right cuneus and right superior frontal gyrus and the changes of cognitive function scores was found in the healthy controls. Conclusions: Visual memory declined significantly in T2DM patients within 3 years. The areas of reduced neural activity in T2DM patients are the bilateral occipital lobes and the right middle frontal lobe. Decreased neural activity in the left occipital area is related to visual impairment, reduced information-processing speed and declining attention.
Kostuj, Tanja; Stief, Felix; Hartmann, Kirsten Anna; Schaper, Katharina; Arabmotlagh, Mohammad; Baums, Mike H; Meurer, Andrea; Krummenauer, Frank; Lieske, Sebastian
2018-04-05
After cross-cultural adaptation of the German translation of the Ankle-Hindfoot Scale of the American Orthopaedic Foot and Ankle Society (AOFAS-AHS) and agreement analysis with the Foot Function Index (FFI-D), the following gait analysis study using the Oxford Foot Model (OFM) was carried out to show which of the two scores better correlates with objective gait dysfunction. Results of the AOFAS-AHS and FFI-D, as well as data from three-dimensional gait analysis, were collected from 20 patients with mild to severe ankle and hindfoot pathologies. Kinematic and kinetic gait data were correlated with the results of the total AOFAS scale and FFI-D, as well as with the results of those items representing hindfoot function in the AOFAS-AHS assessment. With respect to the foot disorders in our patients (osteoarthritis and prearthritic conditions), we correlated the total range of motion (ROM) in the ankle and subtalar joints as identified by the OFM with values identified during clinical examination 'translated' into score values. Furthermore, reduced walking speed, reduced step length and reduced maximum ankle power generation during push-off were taken into account and correlated with the gait abnormalities described in the scores. An analysis of correlations with CIs between the FFI-D and AOFAS-AHS items and the gait parameters was performed by means of the Jonckheere-Terpstra test; furthermore, exploratory factor analysis was applied to identify common information structures, and thereby redundancy, in the FFI-D and AOFAS-AHS items.
Objective findings for hindfoot disorders, namely a reduced ROM in the ankle and subtalar joints as well as reduced ankle power generation during push-off, showed a better correlation with the AOFAS-AHS total score (and with the AOFAS-AHS items representing ROM in the ankle and subtalar joints and gait function) than with the FFI-D score. Factor analysis, however, could not identify FFI-D items consistently related to the three indicator parameters (pain, disability and function) found in the AOFAS-AHS. Furthermore, factor analysis did not support stratification of the FFI-D into two subscales. The AOFAS-AHS showed good agreement with objective gait parameters and is therefore better suited than the FFI-D to evaluate disability and functional limitations of patients suffering from foot and ankle pathologies. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russ, M; Nagesh, S Setlur; Ionita, C
2015-06-15
Purpose: To evaluate the task-specific imaging performance of a new 25µm pixel pitch, 1000µm thick amorphous selenium (a-Se) direct detection system with CMOS readout for typical angiographic exposure parameters, using the relative object detectability (ROD) metric. Methods: The ROD metric uses a simulated object function weighted at each spatial frequency by the detector's detective quantum efficiency (DQE), an intrinsic performance metric. For this study, the simulated objects were aluminum spheres of varying diameter (0.05-0.6mm). The weighted object function is integrated over the full range of detectable frequencies inherent to each detector, and a ratio is taken of the resulting values for two detectors. The DQE for the 25µm detector was obtained from a simulation of the proposed a-Se detector using an exposure of 200µR for a 50keV x-ray beam. This a-Se detector was compared to two microangiographic fluoroscope (MAF) detectors [the MAF-CCD with pixel size of 35µm and Nyquist frequency of 14.2 cycles/mm, and the MAF-CMOS with pixel size of 75µm and Nyquist frequency of 6.6 cycles/mm] and a standard flat-panel detector (FPD, with pixel size of 194µm and Nyquist frequency of 2.5 cycles/mm). Results: ROD calculations indicated vastly superior performance by the a-Se detector in imaging small aluminum spheres. For the 50µm diameter sphere, the ROD values for the a-Se detector compared to the MAF-CCD, the MAF-CMOS, and the FPD were 7.3, 9.3 and 58, respectively. Detector performance in the low-frequency regime was dictated by each detector's DQE(0) value. Conclusion: The a-Se detector with CMOS readout is unique and appears to have the distinctive advantages of incomparably high resolution, low noise, no readout lag, and an expandable design. The a-Se direct detection system will be a powerful imaging tool in angiography, with potential break-through applications in the diagnosis and treatment of neuro-vascular disease.
Supported by NIH Grant 2R01EB002873 and an equipment grant from Toshiba Medical Systems Corporation.
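The ROD computation itself is a pair of DQE-weighted integrals and a ratio. A sketch with invented DQE and object-spectrum curves (illustrative shapes only, not the study's simulated detector data):

```python
import numpy as np

def detectability(dqe, obj_spectrum, f_nyq, n=4000):
    """Integral of the squared object spectrum weighted by the detector
    DQE, taken up to that detector's Nyquist frequency."""
    f = np.linspace(0.0, f_nyq, n)
    vals = obj_spectrum(f) ** 2 * dqe(f)
    return np.sum(vals) * (f[1] - f[0])           # simple Riemann sum

def rod(dqe_a, dqe_b, obj_spectrum, f_nyq_a, f_nyq_b):
    """Relative object detectability of detector A versus detector B."""
    return (detectability(dqe_a, obj_spectrum, f_nyq_a)
            / detectability(dqe_b, obj_spectrum, f_nyq_b))

# Toy inputs: a small sphere has a broad spatial-frequency spectrum,
# which favours the detector whose DQE survives to a high Nyquist
# frequency (frequencies in cycles/mm).
obj = lambda f: np.exp(-(f / 8.0) ** 2)
dqe_hi = lambda f: 0.6 * np.exp(-f / 15.0)   # high-resolution detector
dqe_lo = lambda f: 0.6 * np.exp(-f / 3.0)    # flat-panel-like detector
ratio = rod(dqe_hi, dqe_lo, obj, f_nyq_a=20.0, f_nyq_b=2.5)
```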
A Dual Power Law Distribution for the Stellar Initial Mass Function
NASA Astrophysics Data System (ADS)
Hoffmann, Karl Heinz; Essex, Christopher; Basu, Shantanu; Prehl, Janett
2018-05-01
We introduce a new dual power law (DPL) probability distribution function for the mass distribution of stellar and substellar objects at birth, otherwise known as the initial mass function (IMF). The model contains both deterministic and stochastic elements, and provides a unified framework within which to view the formation of brown dwarfs and stars resulting from an accretion process that starts from extremely low mass seeds. It does not depend upon a top down scenario of collapsing (Jeans) masses or an initial lognormal or otherwise IMF-like distribution of seed masses. Like the modified lognormal power law (MLP) distribution, the DPL distribution has a power law at the high mass end, as a result of exponential growth of mass coupled with equally likely stopping of accretion at any time interval. Unlike the MLP, a power law decay also appears at the low mass end of the IMF. This feature is closely connected to the accretion stopping probability rising from an initially low value up to a high value. This might be associated with physical effects of ejections sometimes (i.e., rarely) stopping accretion at early times followed by outflow driven accretion stopping at later times, with the transition happening at a critical time (therefore mass). Comparing the DPL to empirical data, the critical mass is close to the substellar mass limit, suggesting that the onset of nuclear fusion plays an important role in the subsequent accretion history of a young stellar object.
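The qualitative shape of a dual power law can be sketched directly; the slopes and critical mass below are illustrative placeholders, not the paper's fitted parameters:

```python
import numpy as np

def dpl_pdf(m, alpha=0.3, beta=1.35, m_c=0.08):
    """Schematic (unnormalized) dual-power-law mass function: a rising
    power law below a critical mass m_c and a declining, Salpeter-like
    power law above it, continuous at m_c."""
    m = np.asarray(m, dtype=float)
    lo = (m / m_c) ** alpha          # low-mass rise
    hi = (m / m_c) ** (-beta)        # high-mass decline
    return np.where(m < m_c, lo, hi)

masses = np.logspace(-2, 1, 400)     # 0.01 to 10 solar masses
pdf = dpl_pdf(masses)
peak = masses[np.argmax(pdf)]        # the mode sits at the critical mass
```

With the critical mass placed near the substellar limit (~0.08 solar masses), the break separating the brown-dwarf rise from the stellar decline falls where the abstract suggests the onset of fusion matters.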
Development of a coupled level set and immersed boundary method for predicting dam break flows
NASA Astrophysics Data System (ADS)
Yu, C. H.; Sheu, Tony W. H.
2017-12-01
Dam-break flow over an immersed stationary object is investigated using a coupled level set (LS)/immersed boundary (IB) method developed on Cartesian grids. This approach adopts an improved interface-preserving level set method, which includes three solution steps, and a differential-based interpolation immersed boundary method to treat fluid-fluid and solid-fluid interfaces, respectively. In the first step of the level set method, the level set function ϕ is advected by a pure advection equation. The intermediate step obtains a new level set value through a new smoothed Heaviside function. In the final solution step, a mass correction term is added to the re-initialization equation to ensure that the new level set is a distance function and to conserve the mass bounded by the interface. To calculate the level set value accurately, the four-point upwinding combined compact difference (UCCD) scheme, with a three-point boundary combined compact difference scheme, is applied to approximate the first-order derivative term in the level set equation. For the immersed boundary method, application of an artificial momentum forcing term at points in cells containing both fluid and solid allows the imposition of a velocity condition to account for the presence of the solid object. The incompressible Navier-Stokes solutions are calculated using the projection method. Numerical results show that the coupled LS/IB method can not only predict the interface accurately but also preserve mass conservation excellently for the dam-break flow.
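The smoothed Heaviside function used in the intermediate step is commonly taken to be the sine-regularized form below (a generic textbook choice; the paper's "new" variant is not reproduced here):

```python
import numpy as np

def heaviside_smooth(phi, eps):
    """Sine-regularized smoothed Heaviside common in level-set methods:
    0 in the far solid/gas side, 1 on the far liquid side, and a smooth
    transition over a band of half-width eps around the interface."""
    return np.where(phi > eps, 1.0,
           np.where(phi < -eps, 0.0,
                    0.5 * (1.0 + phi / eps
                           + np.sin(np.pi * phi / eps) / np.pi)))

phi = np.linspace(-1.0, 1.0, 201)    # signed distance across an interface
h = heaviside_smooth(phi, eps=0.15)
```

The transition band is what lets integrals of fluid properties (density, viscosity) vary smoothly across the interface instead of jumping at a single grid cell.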
A dynamic code for economic object valuation in prefrontal cortex neurons
Tsutsui, Ken-Ichiro; Grabenhorst, Fabian; Kobayashi, Shunsuke; Schultz, Wolfram
2016-01-01
Neuronal reward valuations provide the physiological basis for economic behaviour. Yet, how such valuations are converted to economic decisions remains unclear. Here we show that the dorsolateral prefrontal cortex (DLPFC) implements a flexible value code based on object-specific valuations by single neurons. As monkeys perform a reward-based foraging task, individual DLPFC neurons signal the value of specific choice objects derived from recent experience. These neuronal object values satisfy principles of competitive choice mechanisms, track performance fluctuations and follow predictions of a classical behavioural model (Herrnstein's matching law). Individual neurons dynamically encode both the updating of object values from recently experienced rewards and their subsequent conversion to object choices during decision-making. Decoding from unselected populations enables a read-out of motivational and decision variables not emphasized by individual neurons. These findings suggest a dynamic single-neuron and population value code in DLPFC that advances from reward experiences to economic object values and future choices. PMID:27618960
Techniques for assessing relative values for multiple objective management on private forests
Donald F. Dennis; Thomas H. Stevens; David B. Kittredge; Mark G. Rickenbach
2003-01-01
Decision models for assessing multiple objective management of private lands will require estimates of the relative values of various nonmarket outputs or objectives that have become increasingly important. In this study, conjoint techniques are used to assess the relative values and acceptable trade-offs (marginal rates of substitution) among various objectives...
Application of fuzzy theories to formulation of multi-objective design problems. [for helicopters
NASA Technical Reports Server (NTRS)
Dhingra, A. K.; Rao, S. S.; Miura, H.
1988-01-01
Much of the decision making in the real world takes place in an environment in which the goals, the constraints, and the consequences of possible actions are not known precisely. In order to deal with imprecision quantitatively, the tools of fuzzy set theory can be used. This paper demonstrates the effectiveness of fuzzy theories in the formulation and solution of two types of helicopter design problems involving multiple objectives. The first problem deals with the determination of optimal flight parameters to accomplish a specified mission in the presence of three competing objectives. The second problem addresses the optimal design of the main rotor of a helicopter involving eight objective functions. A method of solving these multi-objective problems using nonlinear programming techniques is presented. Results obtained using the fuzzy formulation are compared with those obtained using crisp optimization techniques. The outlined procedures are expected to be useful in situations where doubt arises about the exactness of permissible values, degree of credibility, and correctness of statements and judgements.
Computational process to study wave propagation in a non-linear medium by quasi-linearization
NASA Astrophysics Data System (ADS)
Sharath Babu, K.; Venkata Brammam, J.; Baby Rani, CH
2018-03-01
When two objects with distinct velocities come into contact, an impact can occur. In impact studies, i.e., in analyzing the displacement of the objects after the impact, the impact force is a function of time t and behaves similarly to a compression force. Because the impact duration is very short, impulses are generated and, subsequently, high stresses arise. In this work we examine the wave propagation inside the object after a collision and measure the object's non-linear behaviour in the one-dimensional case. Wave transmission is studied by means of the material's acoustic parameter value. The objective of this paper is to present a computational study of propagating pulses and harmonic waves in non-linear media using quasi-linearization, subsequently discretized with a central difference scheme. The study focuses on longitudinal, one-dimensional wave propagation. In the finite difference scheme, the non-linear system is reduced to a linear system by applying the quasi-linearization method. The computed results show good agreement with the selected non-linear wave-propagation cases.
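Once quasi-linearization has reduced each iteration to a linear wave equation, the central-difference update is the standard explicit stencil. A sketch for the linear 1D case u_tt = c² u_xx with an assumed Gaussian initial pulse and fixed ends (the material parameters are placeholders):

```python
import numpy as np

# Explicit central-difference update for the linear 1D wave equation,
# the linearized problem each quasi-linearization iteration solves.
# Stability requires the CFL number c*dt/dx <= 1; we use 0.5.
c, nx, nt = 1.0, 101, 200
dx = 1.0 / (nx - 1)
dt = 0.5 * dx / c
x = np.linspace(0.0, 1.0, nx)
u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial displacement pulse
u = u_prev.copy()                           # zero initial velocity

for _ in range(nt):
    u_next = np.zeros_like(u)               # u = 0 at both fixed ends
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2
                    * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
```

The pulse splits into left- and right-travelling halves that reflect (with inversion) off the fixed boundaries; with CFL below 1 the scheme stays bounded.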
Regional Homogeneity Changes in Nicotine Addicts by Resting-State fMRI.
Chen, Hongbo; Mo, Shaofeng
2017-01-01
To reveal the brain functional changes of nicotine addicts compared with those of non-smokers and explore an objective biomarker for nicotine dependence evaluation. A total of 14 smokers and 11 non-smoking controls were recruited for this study. Resting-state functional magnetic resonance imaging and regional homogeneity (ReHo) were applied in the neural activity analysis. A two-sample t-test was performed to examine the voxel-wise difference between the smokers and the controls. Correlation analyses between the ReHo values and the Fagerstrom Test for Nicotine Dependence (FTND) scores were performed to explore biomarkers for the clinical characteristics of smokers. The ReHo values from the right superior frontal gyrus of Brodmann's area (BA) 9 to the right middle frontal gyrus, and from the left and right precuneus (BA 23) to the left and right middle cingulum gyrus, were lower in the smokers than in the non-smokers. The ReHo value in the precuneus (BA 23) was significantly and positively correlated with the FTND score of smokers. The ReHo values in the right superior frontal gyrus and left precuneus can be used to separate the smokers from the non-smokers. In particular, the left precuneus is a potential neuroimaging biomarker for nicotine addicts.
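ReHo is Kendall's coefficient of concordance (W) computed over the time series of a voxel and its neighbours. A sketch with synthetic data, assuming no tied ranks (true for continuous-valued time series):

```python
import numpy as np

def kendalls_w(ts):
    """Kendall's coefficient of concordance across the columns of ts
    (time points x voxels) - the statistic behind a ReHo value for one
    voxel and its neighbourhood. Assumes no tied ranks."""
    n, k = ts.shape                            # n time points, k voxels
    ranks = np.argsort(np.argsort(ts, axis=0), axis=0) + 1
    r_i = ranks.sum(axis=1)                    # rank sum at each time point
    s = np.sum((r_i - r_i.mean()) ** 2)
    return 12.0 * s / (k ** 2 * (n ** 3 - n))

rng = np.random.default_rng(0)
base = rng.normal(size=50)
# Seven voxels sharing a common signal (high regional homogeneity)
coherent = np.column_stack([base + 0.1 * rng.normal(size=50)
                            for _ in range(7)])
# Seven independent voxels (low regional homogeneity)
random_ts = rng.normal(size=(50, 7))
```

W ranges from 0 (no concordance) to 1 (identical rank orderings), so coherent neighbourhoods score near 1 and independent ones near 0.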
Koo, Hyeon-Kyoung; Jin, Kwang Nam; Kim, Deog Kyeom; Chung, Hee Soon; Lee, Chang-Hoon
2016-01-01
Objectives Emphysema is one of the prognostic factors for rapid lung function decline in patients with COPD, but the impact of incidentally detected emphysema on a population without spirometric abnormalities has not been evaluated. This study aimed to determine whether emphysema detected upon computed tomography (CT) screening would accelerate the rate of lung function decline and influence the possibility of future development of airflow limitation in a population without spirometric abnormalities. Materials and methods Subjects who participated in a routine health checkup screening and follow-up pulmonary function tests for at least 3 years between 2004 and 2010 were retrospectively enrolled. The percentage of low-attenuation area below −950 Hounsfield units (%LAA−950) was calculated automatically. A calculated %LAA−950 value exceeding 10% was defined as emphysema. Adjusted annual lung function decline was analyzed using random-slope, random-intercept mixed linear regression models. Results A total of 628 healthy subjects within the normal range of spirometric values were included. Multivariable analysis showed that the emphysema group exhibited a faster decline in forced vital capacity (−33.9 versus −18.8 mL/year; P=0.02). Emphysema was not associated with the development of airflow limitation during follow-up. Conclusion Incidental emphysema quantified using CT scan was significantly associated with a more rapid decline in forced vital capacity in the population with normative spirometric values. However, an association between emphysema and future development of airflow limitation was not observed. PMID:26893550
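The %LAA−950 index is a simple thresholded voxel count over the segmented lung. A sketch on synthetic Hounsfield-unit data with the study's 10% cutoff (the HU distribution below is invented for illustration):

```python
import numpy as np

def laa950_percent(hu, lung_mask):
    """Percentage of lung voxels below -950 HU (%LAA-950), the
    CT-derived emphysema index used in the study."""
    lung = hu[lung_mask]
    return 100.0 * np.count_nonzero(lung < -950) / lung.size

rng = np.random.default_rng(0)
# Synthetic lung-like attenuation values (mean -860 HU, sd 40 HU)
hu = rng.normal(-860.0, 40.0, size=(64, 64, 32))
mask = np.ones(hu.shape, dtype=bool)       # trivial lung segmentation
pct = laa950_percent(hu, mask)
has_emphysema = pct > 10.0                 # the study's threshold
```

In practice the mask comes from an automatic lung segmentation; here it trivially covers the whole synthetic volume.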
Local Approximation and Hierarchical Methods for Stochastic Optimization
NASA Astrophysics Data System (ADS)
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision processes problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state spaces. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion.
We test these methods on historical price data from the PJM Interconnect and show that they outperform the baseline approach used in the industry.
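The low-rank value function idea above can be illustrated with a simple hard-impute completion scheme: compute exact values on a subset of states, then alternate a truncated SVD with re-imposing the known entries. This is a generic stand-in sketch, not the author's algorithm:

```python
import numpy as np

def low_rank_complete(v_obs, mask, rank=1, iters=500):
    """Hard-impute completion: alternate a truncated SVD with re-imposing
    the exactly computed (observed) entries until the fill-in stabilizes."""
    v = np.where(mask, v_obs, v_obs[mask].mean())     # initial fill: mean
    for _ in range(iters):
        u, s, vt = np.linalg.svd(v, full_matrices=False)
        low = (u[:, :rank] * s[:rank]) @ vt[:rank]    # best rank-`rank` fit
        v = np.where(mask, v_obs, low)                # keep known values
    return v

# Toy rank-1 "value function"; three states are left uncomputed (False).
v_true = np.outer([1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0])
mask = np.ones(v_true.shape, dtype=bool)
mask[0, 1] = mask[2, 2] = mask[3, 0] = False
v_hat = low_rank_complete(np.where(mask, v_true, 0.0), mask)
print(np.max(np.abs(v_hat - v_true)) < 1e-6)
```

Because the toy matrix is exactly rank 1 and well sampled, the unobserved entries are recovered essentially exactly; real value functions are only approximately low-rank, so the recovery is approximate.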
Distributed Constrained Optimization with Semicoordinate Transformations
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2006-01-01
Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents are designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.
Lower bound for LCD image quality
NASA Astrophysics Data System (ADS)
Olson, William P.; Balram, Nikhil
1996-03-01
The paper presents an objective lower bound for the discrimination of patterns and fine detail in images on a monochrome LCD. In applications such as medical imaging and military avionics, the information of interest is often at the highest frequencies in the image. Since LCDs are sampled-data systems, their output modulation is dependent on the phase between the input signal and the sampling points. This phase dependence becomes particularly significant at high spatial frequencies. In order to use an LCD for applications such as those mentioned above, it is essential to have a lower (worst-case) bound on the performance of the display. We address this problem by providing a mathematical model for the worst-case output modulation of an LCD in response to a sine wave input. This function can be interpreted as a worst-case modulation transfer function (MTF). The intersection of the worst-case MTF with the contrast threshold function (CTF) of the human visual system defines the highest spatial frequency that will always be detectable. In addition to providing the worst-case limiting resolution, this MTF is combined with the CTF to produce objective worst-case image quality values using the modulation transfer function area (MTFA) metric.
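The limiting-resolution construction above reduces to finding the highest frequency at which the worst-case MTF still meets or exceeds the contrast threshold. A sketch with entirely hypothetical curve shapes (the paper's actual MTF and CTF models are not reproduced here):

```python
import numpy as np

def limiting_resolution(freqs, mtf_worst, ctf):
    """Highest spatial frequency at which worst-case modulation still
    meets or exceeds the observer's contrast threshold."""
    detectable = freqs[mtf_worst >= ctf]
    return detectable.max() if detectable.size else None

f = np.linspace(0.1, 10.0, 100)      # cycles/degree, 0.1 steps
mtf = np.exp(-0.3 * f)               # hypothetical decaying worst-case MTF
ctf = 0.01 * np.exp(0.4 * f)         # hypothetical rising threshold curve
print(round(limiting_resolution(f, mtf, ctf), 2))  # 6.5
```

Frequencies above the crossing point may still be visible at favorable phases, but only those below it are detectable in the worst case, which is the guarantee the paper is after.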
Prefrontal mechanisms of behavioral flexibility, emotion regulation and value updating
Rudebeck, Peter H.; Saunders, Richard C.; Prescott, Anna T.; Chau, Lily S.; Murray, Elisabeth A.
2013-01-01
Two ideas have dominated the neuropsychology of the orbitofrontal cortex (OFC). One holds that OFC regulates emotion and enhances behavioral flexibility through inhibitory control. The other ascribes to OFC a role in updating valuations based on current motivational states. Neuroimaging, neurophysiological and clinical observations are consistent with either or both hypotheses. Although these hypotheses are compatible in principle, the present results support the latter view of OFC function and argue against the former. We show that excitotoxic, fibersparing lesions confined to OFC in monkeys do not alter either behavioral flexibility, as measured by object reversal learning, or emotion regulation, as assessed by snake fear. A follow-up experiment indicates that previous reports of a loss of inhibitory control resulted from damage to nearby fiber tracts and not from OFC dysfunction. Thus, OFC plays a more specialized role in reward-guided behavior and emotion than currently thought, a function that includes value updating. PMID:23792944
Acute kidney injury after contrast-enhanced examination among elderly
Aoki, Beatriz Bonadio; Fram, Dayana; Taminato, Mônica; Batista, Ruth Ester Sayad; Belasco, Angélica; Barbosa, Dulce Aparecida
2014-01-01
OBJECTIVES: to assess renal function in elderly patients undergoing contrast-enhanced computed tomography and to identify preventive measures against acute kidney injury in the periods before and after the examination. METHOD: longitudinal cohort study conducted at the Federal University of São Paulo Hospital, from March 2011 to March 2013. All hospitalized elderly patients, of both sexes, aged 60 years and above, who underwent the examination were included (n=93). We collected sociodemographic data, data related to the examination and to the care provided, and creatinine values before and after the exam. RESULTS: an alteration in renal function was observed in 51 patients (54%), with a statistically significant increase in creatinine values (p<0.04), and two patients (4.0%) required hemodialysis. CONCLUSION: there is an urgent need for protocols before and after contrast-enhanced examinations in the elderly, and for further studies to verify the prognosis of this population. PMID:25296148
Peptides from Fish By-product Protein Hydrolysates and Its Functional Properties: an Overview.
Zamora-Sillero, Juan; Gharsallaoui, Adem; Prentice, Carlos
2018-04-01
The inadequate management of fish processing waste or by-products is one of the major problems that the fish industry faces nowadays. The mismanagement of this raw material leads to economic losses and environmental problems. The demand for the use of these by-products has led to the development of several processes to recover biomolecules from fish by-products. An efficient way to add value to fish waste protein is protein hydrolysis. Protein hydrolysates improve the functional properties and allow the release of peptides of different sizes with several bioactivities, such as antioxidant, antimicrobial, antihypertensive, anti-inflammatory, or antihyperglycemic activity, among others. This paper reviews different methods for the production of protein hydrolysates, as well as current research on the bioactive properties of several fish by-product protein hydrolysates, with a dual objective: adding value to these underutilized by-products and minimizing their negative impact on the environment.
Characterizing the Mechanical Properties of Running-Specific Prostheses
Beck, Owen N.; Taboga, Paolo; Grabowski, Alena M.
2016-01-01
The mechanical stiffness of running-specific prostheses likely affects the functional abilities of athletes with leg amputations. However, each prosthetic manufacturer recommends prostheses based on subjective stiffness categories rather than performance-based metrics. The actual mechanical stiffness values of running-specific prostheses (i.e., kN/m) are unknown. Consequently, we sought to characterize and disseminate the stiffness values of running-specific prostheses so that researchers, clinicians, and athletes can objectively evaluate prosthetic function. We characterized the stiffness values of 55 running-specific prostheses across various models, stiffness categories, and heights using forces and angles representative of those measured from athletes with transtibial amputations during running. Characterizing prosthetic force-displacement profiles with a 2nd-degree polynomial explained 4.4% more of the variance than a linear function (p<0.001). The prosthetic stiffness values of manufacturer recommended stiffness categories varied between prosthetic models (p<0.001). Also, prosthetic stiffness was 10% to 39% less at angles typical of running 3 m/s and 6 m/s (10°-25°) compared to neutral (0°) (p<0.001). Furthermore, prosthetic stiffness was inversely related to height in J-shaped (p<0.001), but not C-shaped, prostheses. Running-specific prostheses should be tested under the demands of the respective activity in order to derive relevant characterizations of stiffness and function. In all, our results indicate that when athletes with leg amputations alter prosthetic model, height, and/or sagittal plane alignment, their prosthetic stiffness profiles also change; therefore, variations in comfort, performance, etc. may be indirectly due to altered stiffness. PMID:27973573
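The linear vs. 2nd-degree polynomial comparison of the force-displacement profile can be sketched with an ordinary polynomial fit; the data below are invented, not from the study:

```python
import numpy as np

def r_squared(y, y_fit):
    """Fraction of variance explained by the fit."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented force-displacement data for a stiffening prosthesis.
disp = np.linspace(0.0, 0.05, 20)             # displacement, m
force = 12e3 * disp + 4e5 * disp ** 2         # force, N (nonlinear spring)

coef1 = np.polyfit(disp, force, 1)            # linear characterization
coef2 = np.polyfit(disp, force, 2)            # 2nd-degree characterization
r2_1 = r_squared(force, np.polyval(coef1, disp))
r2_2 = r_squared(force, np.polyval(coef2, disp))
print(round(r2_1, 4), round(r2_2, 4))
```

Local stiffness is then the slope of the fitted profile; for the quadratic characterization (np.polyfit returns the highest-degree coefficient first), that slope is 2*coef2[0]*x + coef2[1].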
Gao, Mingwu; Cheng, Hao-Min; Sung, Shih-Hsien; Chen, Chen-Huan; Olivier, Nicholas Bari; Mukkamala, Ramakrishna
2017-07-01
Pulse transit time (PTT) varies with blood pressure (BP) throughout the cardiac cycle, yet, because of wave reflection, only one PTT value at the diastolic BP level is conventionally estimated from proximal and distal BP waveforms. The objective was to establish a technique to estimate multiple PTT values at different BP levels in the cardiac cycle. A technique was developed for estimating PTT as a function of BP (to indicate the PTT value for every BP level) from proximal and distal BP waveforms. First, a mathematical transformation from one waveform to the other is defined in terms of the parameters of a nonlinear arterial tube-load model accounting for BP-dependent arterial compliance and wave reflection. Then, the parameters are estimated by optimally fitting the waveforms to each other via the model-based transformation. Finally, PTT as a function of BP is specified by the parameters. The technique was assessed in animals and patients in several ways including the ability of its estimated PTT-BP function to serve as a subject-specific curve for calibrating PTT to BP. The calibration curve derived by the technique during a baseline period yielded bias and precision errors in mean BP of 5.1 ± 0.9 and 6.6 ± 1.0 mmHg, respectively, during hemodynamic interventions that varied mean BP widely. The new technique may permit, for the first time, estimation of PTT values throughout the cardiac cycle from proximal and distal waveforms. The technique could potentially be applied to improve arterial stiffness monitoring and help realize cuff-less BP monitoring.
Xu, Zeshui
2007-12-01
Interval utility values, interval fuzzy preference relations, and interval multiplicative preference relations are three common uncertain-preference formats used by decision-makers to provide their preference information in the process of decision making under fuzziness. This paper is devoted to investigating multiple-attribute group-decision-making problems where the attribute values are not precisely known but value ranges can be obtained, and where the decision-makers provide their preference information over attributes in three different uncertain-preference formats, i.e., 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first utilize some functions to normalize the uncertain decision matrix and then transform it into an expected decision matrix. We establish a goal-programming model to integrate the expected decision matrix and all three different uncertain-preference formats, from which the attribute weights and the overall attribute values of alternatives can be obtained. Then, we use the derived overall attribute values to rank the given alternatives and to select the best one(s). The model not only reflects both the subjective considerations of all decision-makers and the objective information but also avoids losing or distorting the given objective and subjective decision information in the process of information integration. Furthermore, we establish some models to solve multiple-attribute group-decision-making problems with three different preference formats: 1) utility values; 2) fuzzy preference relations; and 3) multiplicative preference relations. Finally, we illustrate the applicability and effectiveness of the developed models with two practical examples.
Brown, G C
1999-01-01
OBJECTIVE: To determine the relationship of visual acuity loss to quality of life. DESIGN: Three hundred twenty-five patients with visual loss to a minimum of 20/40 or greater in at least 1 eye were interviewed in a standardized fashion using a modified VF-14 questionnaire. Utility values were also obtained using both the time trade-off and standard gamble methods of utility assessment. MAIN OUTCOME MEASURES: Best-corrected visual acuity was correlated with the visual function score on the modified VF-14 questionnaire, as well as with utility values obtained using both the time trade-off and standard gamble methods. RESULTS: Decreasing levels of vision in the eye with better acuity correlated directly with decreasing visual function scores on the modified VF-14 questionnaire, as did decreasing utility values using the time trade-off method of utility evaluation. The standard gamble method of utility evaluation was not as directly correlated with vision as the time trade-off method. Age, level of education, gender, race, length of time of visual loss, and the number of associated systemic comorbidities did not significantly affect the time trade-off utility values associated with visual loss in the better eye. The level of reduced vision in the better eye, rather than the specific disease process causing reduced vision, was related to mean utility values. The average person with 20/40 vision in the better seeing eye was willing to trade 2 of every 10 years of life in return for perfect vision (utility value of 0.8), while the average person with counting fingers vision in the better eye was willing to trade approximately 5 of every 10 remaining years of life (utility value of 0.52) in return for perfect vision. CONCLUSIONS: The time trade-off method of utility evaluation appears to be an effective method for assessing quality of life associated with visual loss.
Time trade-off utility values decrease in direct conjunction with decreasing vision in the better-seeing eye. Unlike the modified VF-14 test and its counterparts, utility values allow the quality of life associated with visual loss to be more readily compared to the quality of life associated with other health (disease) states. This information can be employed for cost-effective analyses that objectively compare evidence-based medicine, patient-based preferences and sound econometric principles across all specialties in health care. PMID:10703139
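The time trade-off arithmetic quoted above (trading years of remaining life for perfect vision) maps directly to a one-line utility formula:

```python
def tto_utility(years_traded, years_remaining):
    """Time trade-off utility: fraction of remaining lifespan the patient keeps."""
    return 1.0 - years_traded / years_remaining

print(round(tto_utility(2, 10), 2))    # 0.8  (20/40-vision example above)
print(round(tto_utility(4.8, 10), 2))  # 0.52 (counting-fingers example above)
```

This is what makes utility values comparable across disease states: every condition is expressed on the same 0-to-1 scale regardless of the underlying pathology.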
Object Detection in Natural Backgrounds Predicted by Discrimination Performance and Models
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Watson, A. B.; Rohaly, A. M.; Null, Cynthia H. (Technical Monitor)
1995-01-01
In object detection, an observer looks for an object class member in a set of backgrounds. In discrimination, an observer tries to distinguish two images. Discrimination models predict the probability that an observer detects a difference between two images. We compare object detection and image discrimination with the same stimuli by: (1) making stimulus pairs of the same background with and without the target object and (2) either giving many consecutive trials with the same background (discrimination) or intermixing the stimuli (object detection). Six images of a vehicle in a natural setting were altered to remove the vehicle and mixed with the original image in various proportions. Detection observers rated the images for vehicle presence. Discrimination observers rated the images for any difference from the background image. Estimated detectabilities of the vehicles were found by maximizing the likelihood of a Thurstone category scaling model. The pattern of estimated detectabilities is similar for discrimination and object detection, and is accurately predicted by a Cortex Transform discrimination model. Predictions of a Contrast-Sensitivity-Function filter model and a Root-Mean-Square difference metric based on the digital image values are less accurate. The discrimination detectabilities averaged about twice those of object detection.
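Of the predictors compared above, the Root-Mean-Square difference metric on digital image values is the simplest. A minimal sketch with a toy 4x4 "background" and "target" image (not the study's stimuli):

```python
import numpy as np

def rms_difference(img_a, img_b):
    """Root-mean-square difference between two images' digital values."""
    return np.sqrt(np.mean((img_a.astype(float) - img_b.astype(float)) ** 2))

bg = np.zeros((4, 4))
target = bg.copy()
target[1:3, 1:3] = 10.0    # 4 of 16 pixels differ by 10
print(rms_difference(bg, target))  # 5.0
```

Its weakness, reflected in the abstract's result, is that it ignores the spatial-frequency and masking properties of human vision that the Cortex Transform model captures.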
Cook, Sharon A; Rosser, Robert; Toone, Helen; James, M Ian; Salmon, Peter
2006-01-01
Elective cosmetic surgery is expanding in the UK in both the public and private sectors. Because resources are constrained, many cosmetic procedures are being excluded within the National Health Service. If guidelines on who can receive such surgery are to be evidence-based, information is needed about the level of dysfunction in patients referred for elective surgery and whether this is related to their degree of physical abnormality. Consecutive patients referred to a regional plastic surgery and burns unit for assessment for elective cosmetic surgery completed standardised measures of physical and psychosocial dysfunction, and indicated their perception of the degree of their abnormality and their preoccupation with it. We distinguished between patients referred for physical reasons or appearance reasons only, and compared levels of physical and psychosocial dysfunction in each with published values for community and clinical samples. Surgeons indicated patients' degree of objective abnormality, and we identified the relationship of dysfunction with perceived and objective abnormality and preoccupation. Whether patients sought surgery for physical or appearance reasons, physical function was normal. Those seeking surgery for appearance reasons only had moderate psychosocial dysfunction, but were not as impaired as clinical groups with psychological problems. Patients seeking the correction of minor skin lesions for purely appearance reasons reported excellent physical and psychosocial function. Level of function was related (negatively) to patients' preoccupation with abnormality rather than to their perceived or objective abnormality. In general, patients referred for elective cosmetic surgery did not present with significant levels of dysfunction. Moreover, levels of functioning were related to preoccupation rather than to objective abnormality. Therefore, for most patients, whether surgical treatment is generally appropriate is questionable. 
Future guidelines must seek to identify the small minority who do have a clinical need for surgery.
An improved 2D MoF method by using high order derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2017-11-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first-order derivative has been deduced and applied in previous research. In this paper, the high-order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and that the number of discontinuous points is twice the number of polygon edges. A rotation algorithm is proposed to successively calculate these discontinuous points, so that the target interval in which the optimal solution is located can be determined. Since the high-order derivatives of the objective function are continuous in the target interval, iteration schemes based on high-order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of the objective function and its derivatives can be updated directly without explicitly solving the volume conservation equation. The direct update further improves efficiency, especially as the number of edges of the polygon increases. Halley's method, which is based on the first three derivatives, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half of the previous method on the quadrilateral cell and about one sixth on the decagon cell.
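Halley's method, the cubically convergent iteration the authors adopt once the target interval is known, finds a root of a function from its value and first two derivatives (in the paper it is applied to the objective's first derivative, hence "first three derivatives"). A generic sketch on a toy equation:

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: cubically convergent root finding using f, f', f''."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        dx = 2 * fx * dfx / (2 * dfx ** 2 - fx * d2fx)
        x -= dx
        if abs(dx) < tol:
            break
    return x

# Root of x^3 - 2 (the cube root of 2), starting from x0 = 1.
root = halley(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, lambda x: 6 * x, 1.0)
print(round(root, 6))  # 1.259921
```

The cubic convergence is what buys the reported CPU-time reduction over lower-order iterations, provided the derivatives are cheap to update, which is exactly what the paper's direct-update scheme provides.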
Generating AN Optimum Treatment Plan for External Beam Radiation Therapy.
NASA Astrophysics Data System (ADS)
Kabus, Irwin
1990-01-01
The application of linear programming to the generation of an optimum external beam radiation treatment plan is investigated. MPSX, an IBM linear programming software package, was used. All data originated from the CAT scan of an actual patient who was treated for a pancreatic malignant tumor before this study began. An examination of several alternatives for representing the cross section of the patient showed that it was sufficient to use a set of strategically placed points in the vital organs and tumor and a grid of points spaced about one half inch apart for the healthy tissue. Optimum treatment plans were generated from objective functions representing various treatment philosophies. The optimum plans were based on allowing for 216 external radiation beams, which accounted for wedges of any size. A beam reduction scheme then reduced the number of beams in the optimum plan to a number small enough for implementation. Regardless of the objective function, the linear programming treatment plan preserved about 95% of the patient's right kidney vs. 59% for the plan the hospital actually administered to the patient. The clinician on the case found most of the linear programming treatment plans to be superior to the hospital plan. An investigation was made, using parametric linear programming, of any possible benefits derived from generating treatment plans based on objective functions made up of convex combinations of two objective functions; however, this proved to have only limited value. This study also found, through dual variable analysis, that there was no benefit gained from relaxing some of the constraints on the healthy regions of the anatomy. This conclusion was supported by the clinician. Finally, several schemes were found that, under certain conditions, can further reduce the number of beams in the final linear programming treatment plan.
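At toy scale, the beam-weight problem is a small linear program: minimize healthy-tissue dose over nonnegative beam weights subject to a minimum tumor dose. The two-beam example below (all dose coefficients invented) solves it by brute-force vertex enumeration rather than MPSX:

```python
import itertools
import numpy as np

# Toy treatment-planning LP: minimize healthy-tissue dose c @ w
# subject to A @ w >= b (nonnegativity plus a tumor-dose constraint).
c = np.array([1.0, 2.0])          # healthy-tissue dose per unit beam weight
A = np.array([[1.0, 0.0],         # w1 >= 0
              [0.0, 1.0],         # w2 >= 0
              [2.0, 3.0]])        # tumor dose: 2*w1 + 3*w2 >= 60
b = np.array([0.0, 0.0, 60.0])

# An LP optimum lies at a vertex: intersect constraint pairs, keep the
# feasible vertex with the lowest objective value.
best = None
for i, j in itertools.combinations(range(len(A)), 2):
    m = A[[i, j]]
    if abs(np.linalg.det(m)) < 1e-12:
        continue
    w = np.linalg.solve(m, b[[i, j]])
    if np.all(A @ w >= b - 1e-9) and (best is None or c @ w < c @ best):
        best = w
print(best, c @ best)
```

Vertex enumeration is only viable for a handful of variables; with the study's 216 candidate beams, a simplex solver such as MPSX is required, but the geometry of the solution is the same.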
NASA Astrophysics Data System (ADS)
Feizizadeh, Bakhtiar; Blaschke, Thomas; Tiede, Dirk; Moghaddam, Mohammad Hossein Rezaei
2017-09-01
This article presents a method of object-based image analysis (OBIA) for landslide delineation and landslide-related change detection from multi-temporal satellite images. It uses both spatial and spectral information on landslides, through spectral analysis, shape analysis, textural measurements using a gray-level co-occurrence matrix (GLCM), and fuzzy logic membership functionality. Following an initial segmentation step, particular combinations of various information layers were investigated to generate objects. This was achieved by applying multi-resolution segmentation to IRS-1D, SPOT-5, and ALOS satellite imagery in sequential steps of feature selection and object classification, and using slope and flow direction derivatives from a digital elevation model together with topographically-oriented gray level co-occurrence matrices. Fuzzy membership values were calculated for 11 different membership functions using 20 landslide objects from landslide training data. Six fuzzy operators were used for the final classification and the accuracies of the resulting landslide maps were compared. A Fuzzy Synthetic Evaluation (FSE) approach was adapted for validation of the results and for an accuracy assessment using the landslide inventory database. The FSE approach revealed that the AND operator performed best with an accuracy of 93.87% for 2005 and 94.74% for 2011, closely followed by the MEAN Arithmetic operator, while the OR and AND (*) operators yielded relatively low accuracies. An object-based change detection was then applied to monitor landslide-related changes that occurred in northern Iran between 2005 and 2011. Knowledge rules to detect possible landslide-related changes were developed by evaluating all possible landslide-related objects for both time steps.
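The fuzzy operators compared above combine an object's per-criterion membership values into a single classification value; four of them reduce to one-liners (the membership numbers below are hypothetical):

```python
import numpy as np

# Hypothetical membership values of one image object under three criteria.
memberships = np.array([0.9, 0.8, 0.95])

fuzzy_and = memberships.min()      # AND: most pessimistic criterion
fuzzy_or = memberships.max()       # OR: most optimistic criterion
fuzzy_mean = memberships.mean()    # MEAN arithmetic operator
fuzzy_prod = memberships.prod()    # AND(*): product operator

print(fuzzy_and, fuzzy_or, round(fuzzy_mean, 4), round(fuzzy_prod, 3))
```

The abstract's finding that min-style AND outperformed OR fits the intuition that a landslide object should satisfy all criteria at once, so the weakest criterion should govern the classification.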
Practical Strategies for Integrating Final Ecosystem Goods and ...
The concept of Final Ecosystem Goods and Services (FEGS) explicitly connects ecosystem services to the people that benefit from them. This report presents a number of practical strategies for incorporating FEGS, and more broadly ecosystem services, into the decision-making process. Whether a decision process is in early or late stages, or whether a process includes informal or formal decision analysis, there are multiple points where ecosystem services concepts can be integrated. This report uses Structured Decision Making (SDM) as an organizing framework to illustrate the role ecosystem services can play in a values-focused decision process, including:
• Clarifying the decision context: Ecosystem services can help clarify the potential impacts of an issue on natural resources together with their spatial and temporal extent based on supply and delivery of those services, and help identify beneficiaries for inclusion as stakeholders in the deliberative process.
• Defining objectives and performance measures: Ecosystem services may directly represent stakeholder objectives, or may be means toward achieving other objectives.
• Creating alternatives: Ecosystem services can bring to light creative alternatives for achieving other social, economic, health, or general well-being objectives.
• Estimating consequences: Ecosystem services assessments can implement ecological production functions (EPFs) and ecological benefits functions (EBFs) to link decision alt
Lu, Huancai; Wu, Sean F
2009-03-01
The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using the Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics, which are taken as the basis functions in the HELS formulation, yet the analytic solutions to the vibroacoustic responses of a baffled plate are readily available, so the accuracy of reconstruction can be rigorously checked. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts of various parameters, such as the number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and the ratio of measurement aperture size to the area of the source surface, on the resultant accuracy of reconstruction are examined.
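HELS expands the measured field in basis functions and solves for the expansion coefficients by least squares. The sketch below substitutes a toy polynomial basis for the spherical Hankel functions and spherical harmonics (and 1D points for microphone positions), but the coefficient-solve step has the same shape:

```python
import numpy as np

def basis(x, n_terms=4):
    """Toy polynomial basis standing in for the spherical wave functions."""
    return np.column_stack([x ** k for k in range(n_terms)])

rng = np.random.default_rng(2)
x_meas = np.linspace(0.0, 1.0, 40)            # "microphone" locations
coef_true = np.array([1.0, -2.0, 0.5, 3.0])   # hypothetical field
p_meas = basis(x_meas) @ coef_true + 1e-4 * rng.standard_normal(40)

# Least-squares expansion coefficients, then reconstruction at new points.
coef, *_ = np.linalg.lstsq(basis(x_meas), p_meas, rcond=None)
x_new = np.linspace(0.0, 1.0, 7)
p_rec = basis(x_new) @ coef
print(np.max(np.abs(coef - coef_true)) < 0.05)
```

The abstract's parameter study (number of measurement points, aperture size, etc.) amounts to probing how well conditioned this least-squares system is for a source the basis cannot represent exactly.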
Stochastic Optimization for Nuclear Facility Deployment Scenarios
NASA Astrophysics Data System (ADS)
Hays, Ross Daniel
Single-use, low-enriched uranium oxide fuel, consumed through several cycles in a light-water reactor (LWR) before being disposed, has become the dominant source of commercial-scale nuclear electric generation in the United States and throughout the world. However, it is not without its drawbacks and is not the only potential nuclear fuel cycle available. Numerous alternative fuel cycles have been proposed at various times which, through the use of different reactor and recycling technologies, offer to counteract many of the perceived shortcomings with regards to waste management, resource utilization, and proliferation resistance. However, due to the varying maturity levels of these technologies, the complicated material flow feedback interactions their use would require, and the large capital investments in the current technology, one should not deploy these advanced designs without first investigating the potential costs and benefits of so doing. As the interactions among these systems can be complicated, and the ways in which they may be deployed are many, the application of automated numerical optimization to the simulation of the fuel cycle could potentially be of great benefit to researchers and interested policy planners. To investigate the potential of these methods, a computational program has been developed that applies a parallel, multi-objective simulated annealing algorithm to a computational optimization problem defined by a library of relevant objective functions applied to the Verifiable Fuel Cycle Simulation Model (VISION, developed at the Idaho National Laboratory). The VISION model, when given a specified fuel cycle deployment scenario, computes the numbers and types of, and construction, operation, and utilization schedules for, the nuclear facilities required to meet a predetermined electric power demand function.
Additionally, it calculates the location and composition of the nuclear fuels within the fuel cycle, from initial mining through to eventual disposal. By varying the specifications of the deployment scenario, the simulated annealing algorithm will seek to either minimize the value of a single objective function, or enumerate the trade-off surface between multiple competing objective functions. The available objective functions represent key stakeholder values, minimizing such important factors as high-level waste disposal burden, required uranium ore supply, relative proliferation potential, and economic cost and uncertainty. The optimization program itself is designed to be modular, allowing for continued expansion and exploration as research needs and curiosity indicate. The utility and functionality of this optimization program are demonstrated through its application to one potential fuel cycle scenario of interest. In this scenario, an existing legacy LWR fleet is assumed at the year 2000. The electric power demand grows exponentially at a rate of 1.8% per year through the year 2100. Initially, new demand is met by the construction of 1-GW(e) LWRs. However, beginning in the year 2040, 600-MW(e) sodium-cooled, fast-spectrum reactors operating in a transuranic burning regime with full recycling of spent fuel become available to meet demand. By varying the fraction of new capacity allocated to each reactor type, the optimization program is able to explicitly show the relationships that exist between uranium utilization, long-term heat for geologic disposal, and cost-of-electricity objective functions. The trends associated with these trade-off surfaces tend to confirm many common expectations about the use of nuclear power, namely that while overall it is quite insensitive to variations in the cost of uranium ore, it is quite sensitive to changes in the capital costs of facilities. 
The optimization algorithm has shown itself to be robust and extensible, with possible extensions to many further fuel cycle optimization problems of interest.
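The Pareto-archive bookkeeping at the heart of such a multi-objective simulated annealing search can be sketched as follows. This is a minimal illustration with invented function names, not the actual VISION or optimizer code; the scalarized acceptance rule is one common choice among several.

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only non-dominated objective vectors in the archive."""
    if candidate in archive:
        return archive
    if any(dominates(kept, candidate) for kept in archive):
        return archive  # candidate is dominated; archive unchanged
    archive = [kept for kept in archive if not dominates(candidate, kept)]
    archive.append(candidate)
    return archive

def anneal(evaluate, neighbor, x0, steps=1000, t0=1.0, seed=0):
    """Toy multi-objective simulated annealing: always accept dominating
    moves, accept worse moves with Boltzmann probability on a scalarized
    energy difference, and archive the non-dominated front."""
    rng = random.Random(seed)
    x, fx = x0, evaluate(x0)
    archive = [fx]
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9  # linear cooling schedule
        y = neighbor(x, rng)
        fy = evaluate(y)
        delta = sum(fy) - sum(fx)  # scalarized change drives acceptance
        if dominates(fy, fx) or rng.random() < math.exp(-max(delta, 0.0) / t):
            x, fx = y, fy
            archive = update_archive(archive, fy)
    return archive
```

By construction the returned archive contains only mutually non-dominated points, i.e. an approximation of the trade-off surface between the competing objectives.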
A no-reference video quality assessment metric based on ROI
NASA Astrophysics Data System (ADS)
Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan
2015-01-01
A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e. blurring distortion and blocking distortion. A Gaussian kernel function was used to extract the human density maps of the H.264-coded videos from the subjective eye-tracking data. An objective bottom-up ROI extraction model was built from the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Only the objective saliency maps were then used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric achieves a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure all video frames, the metric proposed in this paper not only decreases the computational complexity but also improves the correlation between subjective mean opinion scores (MOS) and objective scores.
An objective measure of physical function of elderly outpatients. The Physical Performance Test.
Reuben, D B; Siu, A L
1990-10-01
Direct observation of physical function has the advantage of providing an objective, quantifiable measure of functional capabilities. We have developed the Physical Performance Test (PPT), which assesses multiple domains of physical function using observed performance of tasks that simulate activities of daily living of various degrees of difficulty. Two versions are presented: a nine-item scale that includes writing a sentence, simulated eating, turning 360 degrees, putting on and removing a jacket, lifting a book and putting it on a shelf, picking up a penny from the floor, a 50-foot walk test, and climbing stairs (scored as two items); and a seven-item scale that does not include stairs. The PPT can be completed in less than 10 minutes and requires only a few simple props. We then tested the validity of PPT using 183 subjects (mean age, 79 years) in six settings including four clinical practices (one of Parkinson's disease patients), a board-and-care home, and a senior citizens' apartment. The PPT was reliable (Cronbach's alpha = 0.87 and 0.79, interrater reliability = 0.99 and 0.93 for the nine-item and seven-item tests, respectively) and demonstrated concurrent validity with self-reported measures of physical function. Scores on the PPT for both scales were highly correlated (.50 to .80) with modified Rosow-Breslau, Instrumental and Basic Activities of Daily Living scales, and Tinetti gait score. Scores on the PPT were more moderately correlated with self-reported health status, cognitive status, and mental health (.24 to .47), and negatively with age (-.24 and -.18). Thus, the PPT also demonstrated construct validity. The PPT is a promising objective measurement of physical function, but its clinical and research value for screening, monitoring, and prediction will have to be determined.
Polymer tensiometer with ceramic cones: a case study for a Brazilian soil.
NASA Astrophysics Data System (ADS)
Durigon, A.; de Jong van Lier, Q.; van der Ploeg, M. J.; Gooren, H. P. A.; Metselaar, K.; de Rooij, G. H.
2009-04-01
Laboratory outflow experiments, in combination with inverse modeling techniques, make it possible to simultaneously determine retention and hydraulic conductivity functions. A numerical model solves the pressure-head-based form of the Richards equation for unsaturated flow in a rigid porous medium. With adequate boundary conditions applied, the cumulative outflow is calculated at prescribed times as a function of the set of optimized parameters. These parameters are evaluated by nonlinear least-squares fitting of predicted to observed cumulative outflow over time. An objective function quantifies the difference between calculated and observed cumulative outflow and between predicted and measured soil water retention data. Using outflow data only in the objective function, the multistep outflow method results in unique estimates of the retention and hydraulic conductivity functions. To obtain more reliable estimates of the hydraulic conductivity as a function of water content using the inverse method, the outflow data must be supplemented with soil retention data. To do so, tensiometers filled with a polymer solution instead of water were used. The measurement range of these tensiometers is larger than that of conventional tensiometers: they can measure the entire pressure head range over which crops take up water, down to values on the order of -1.6 MPa. The objective of this study was to physically characterize a Brazilian red-yellow oxisol using polymer tensiometer measurements in outflow experiments and processing these data with the inverse modeling technique, for use in the analysis of a field experiment and in modeling. The soil was collected at an experimental site located in Piracicaba, Brazil, 22°42′ S, 47°38′ W, 550 m above sea level.
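The composite objective function described above, combining outflow and retention misfits, can be sketched as follows. The weights and the `simulate_outflow`/`predict_retention` callables are hypothetical stand-ins for the numerical Richards-equation model, not the authors' code.

```python
def objective(params, simulate_outflow, observed_outflow,
              predict_retention, measured_retention,
              w_outflow=1.0, w_retention=1.0):
    """Weighted sum-of-squares misfit between simulated and observed
    cumulative outflow plus retention data (illustrative form only).

    simulate_outflow(params) -> cumulative outflow at prescribed times
    predict_retention(params) -> retention values at measured heads
    """
    q_sim = simulate_outflow(params)
    r_sim = predict_retention(params)
    sse_q = sum((s - o) ** 2 for s, o in zip(q_sim, observed_outflow))
    sse_r = sum((s - o) ** 2 for s, o in zip(r_sim, measured_retention))
    return w_outflow * sse_q + w_retention * sse_r
```

A nonlinear least-squares routine would then minimize this function over the retention and conductivity parameters; the second term is what supplements the outflow data with retention measurements.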
Carden, Tony; Goode, Natassia; Read, Gemma J M; Salmon, Paul M
2017-03-15
Like most work systems, the domain of adventure activities has seen a series of serious incidents and subsequent calls to improve regulation. Safety regulation systems aim to promote safety and reduce accidents. However, there is scant evidence that they have led to improved safety outcomes. In fact, there is some evidence that the poor integration of regulatory system components has led to adverse safety outcomes in some contexts. Despite this, there is an absence of methods for evaluating regulatory and compliance systems. This article argues that sociotechnical systems theory and methods provide a suitable framework for evaluating regulatory systems. This is demonstrated through an analysis of a recently introduced set of adventure activity regulations. Work Domain Analysis (WDA) was used to describe the regulatory system in terms of its functional purposes, values and priority measures, purpose-related functions, object-related processes, and cognitive objects. This allowed judgements to be made on the nature of the new regulatory system and on the constraints that may impact its efficacy following implementation. Importantly, the analysis suggests that the new system's functional purpose of ensuring safe activities is not fully supported in terms of the functions and objects available to fulfil it. Potential improvements to the design of the system are discussed, along with the implications for regulatory system design and evaluation across safety-critical domains generally. Copyright © 2017 Elsevier Ltd. All rights reserved.
Improving building performance using smart building concept: Benefit cost ratio comparison
NASA Astrophysics Data System (ADS)
Berawi, Mohammed Ali; Miraj, Perdana; Sayuti, Mustika Sari; Berawi, Abdur Rohim Boy
2017-11-01
The smart building concept is an implementation of technology developed in the construction industry throughout the world. However, the implementation of this concept is still below expectations due to various obstacles, such as a higher initial cost than a conventional concept and existing regulations siding with the lowest cost in the tender process. This research aims to develop an intelligent building concept using a value engineering approach to obtain added value regarding quality, efficiency, and innovation. The research combined quantitative and qualitative approaches, using a questionnaire survey and the value engineering method to achieve the research objectives. The research output shows additional functions regarding technology innovation that may increase the value of a building. This study shows that the smart building concept requires a higher initial cost but produces lower operational and maintenance costs. Furthermore, it also confirms that the benefit-cost ratio of the smart building was much higher than that of a conventional building: 1.99 versus 0.88.
Scalable Machine Learning for Massive Astronomical Datasets
NASA Astrophysics Data System (ADS)
Ball, Nicholas M.; Gray, A.
2014-04-01
We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the application of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability.
These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.
Scalable Machine Learning for Massive Astronomical Datasets
NASA Astrophysics Data System (ADS)
Ball, Nicholas M.; Astronomy Data Centre, Canadian
2014-01-01
We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the application of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors and the local outlier factor. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.
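The nearest-neighbor outlier scoring mentioned above can be illustrated in a few lines. This brute-force sketch is for intuition only; a production system such as Skytree Server relies on space-partitioning data structures to keep the scaling linear rather than quadratic.

```python
import math

def knn_outlier_scores(points, k=2):
    """Score each point by its distance to its k-th nearest neighbor;
    larger scores mark sparser, more outlying points."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])
    return scores
```

Ranking objects by this score and inspecting the top of the list is one simple way to surface candidate novel objects or data artifacts in a catalog.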
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
NASA Astrophysics Data System (ADS)
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those of neighboring basins. Given that physical similarity exists among neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure for the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
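A weighted closeness measure of the kind described for the regionalization step might look like the following sketch. The squared-distance form, the parameter vectors, and the weights are illustrative assumptions, not the authors' implementation.

```python
def closeness(candidate, neighbor_params, weights):
    """Weighted squared distance between a candidate Pareto parameter set
    and a neighboring basin's parameters; parameters believed to be
    similar across basins carry larger weights."""
    return sum(w * (c - n) ** 2
               for w, c, n in zip(weights, candidate, neighbor_params))

def pick_regional_set(pareto_sets, neighbor_params, weights):
    """Choose the Pareto member that minimizes the closeness measure."""
    return min(pareto_sets, key=lambda s: closeness(s, neighbor_params, weights))
```

With this rule, a Pareto-optimal parameter set that happens to agree with the neighboring basin on the heavily weighted (physically similar) parameters wins, even if it differs on lightly weighted ones.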
Prediction errors to emotional expressions: the roles of the amygdala in social referencing.
Meffert, Harma; Brislin, Sarah J; White, Stuart F; Blair, James R
2015-04-01
Social referencing paradigms in humans and observational learning paradigms in animals suggest that emotional expressions are important for communicating valence. It has been proposed that these expressions initiate stimulus-reinforcement learning. Relatively little is known about the role of emotional expressions in reinforcement learning, particularly in the context of social referencing. In this study, we examined object valence learning in the context of a social referencing paradigm. Participants viewed objects and faces that turned toward the objects and displayed a fearful, happy or neutral reaction to them, while judging the gender of these faces. Notably, amygdala activation was larger when the expressions following an object were less expected. Moreover, when asked, participants were both more likely to want to approach, and showed stronger amygdala responses to, objects associated with happy relative to objects associated with fearful expressions. This suggests that the amygdala plays two roles in social referencing: (i) initiating learning regarding the valence of an object as a function of prediction errors to expressions displayed toward this object and (ii) orchestrating an emotional response to the object when value judgments are being made regarding this object. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
Embedding Human Expert Cognition Into Autonomous UAS Trajectory Planning.
Narayan, Pritesh; Meyer, Patrick; Campbell, Duncan
2013-04-01
This paper presents a new approach for the inclusion of human expert cognition into autonomous trajectory planning for unmanned aerial systems (UASs) operating in low-altitude environments. During typical UAS operations, multiple objectives may exist; therefore, the use of multicriteria decision aid techniques can potentially allow for convergence to trajectory solutions which better reflect overall mission requirements. In that context, additive multiattribute value theory has been applied to optimize trajectories with respect to multiple objectives. A graphical user interface was developed to allow for knowledge capture from a human decision maker (HDM) through simulated decision scenarios. The expert decision data gathered are converted into value functions and corresponding criteria weightings using utility additive theory. The inclusion of preferences elicited from HDM data within an automated decision system allows for the generation of trajectories which more closely represent the candidate HDM decision preferences. This approach has been demonstrated in this paper through simulation using a fixed-wing UAS operating in low-altitude environments.
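The additive multiattribute value model underlying this approach can be sketched as follows; the criteria, per-criterion value functions, and weights below are invented examples, not the elicited HDM preferences.

```python
def additive_value(criteria, value_functions, weights):
    """Additive multiattribute value: a weighted sum of per-criterion
    value functions, each mapping a raw criterion level into [0, 1].
    Candidate trajectories are ranked by this aggregate value."""
    return sum(w * v(x) for w, v, x in zip(weights, value_functions, criteria))
```

A trajectory planner would evaluate this aggregate for each candidate trajectory (e.g. with criteria such as path length and terrain risk) and prefer the one with the highest value; utility additive (UTA) methods fit the value functions and weights from the expert's recorded choices.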
Algorithms for Learning Preferences for Sets of Objects
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; desJardins, Marie; Eaton, Eric
2010-01-01
A method is being developed that provides for an artificial-intelligence system to learn a user's preferences for sets of objects and to thereafter automatically select subsets of objects according to those preferences. The method was originally intended to enable automated selection, from among large sets of images acquired by instruments aboard spacecraft, of image subsets considered to be scientifically valuable enough to justify use of limited communication resources for transmission to Earth. The method is also applicable to other sets of objects: examples of sets of objects considered in the development of the method include food menus, radio-station music playlists, and assortments of colored blocks for creating mosaics. The method does not require the user to perform the often-difficult task of quantitatively specifying preferences; instead, the user provides examples of preferred sets of objects. This method goes beyond related prior artificial-intelligence methods for learning which individual items are preferred by the user: this method supports a concept of set-based preferences, which include not only preferences for individual items but also preferences regarding types and degrees of diversity of items in a set. Consideration of diversity in this method involves recognition that members of a set may interact with each other in the sense that when considered together, they may be regarded as being complementary, redundant, or incompatible to various degrees. The effects of such interactions are loosely summarized in the term portfolio effect. The learning method relies on a preference representation language, denoted DD-PREF, to express set-based preferences. In DD-PREF, a preference is represented by a tuple that includes quality (depth) functions to estimate how desired a specific value is, weights for each feature preference, the desired diversity of feature values, and the relative importance of diversity versus depth.
The system applies statistical concepts to estimate quantitative measures of the user's preferences from training examples (preferred subsets) specified by the user. Once preferences have been learned, the system uses them to select preferred subsets from new sets. The method was found to be viable when tested in computational experiments on menus, music playlists, and rover images. Contemplated future development efforts include further tests on more diverse sets and development of a sub-method for (a) estimating the parameter that represents the relative importance of diversity versus depth, and (b) incorporating background knowledge about the nature of quality functions, which are special functions that specify depth preferences for features.
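A toy score for a candidate set that blends per-item quality (depth) against a desired level of diversity, in the spirit of DD-PREF, can be sketched as follows. The single numeric feature, the variance-based diversity measure, and the blending form are simplifying assumptions, not the DD-PREF specification.

```python
def set_score(items, depth_fn, target_diversity, alpha):
    """Score a candidate set of scalar feature values as a blend of
    average per-item quality ('depth') and how closely the set's spread
    matches the desired diversity. alpha weights diversity vs. depth."""
    depth = sum(depth_fn(x) for x in items) / len(items)
    mean = sum(items) / len(items)
    spread = sum((x - mean) ** 2 for x in items) / len(items)  # variance
    diversity_match = 1.0 / (1.0 + abs(spread - target_diversity))
    return alpha * diversity_match + (1.0 - alpha) * depth
```

With alpha near 0 the scorer behaves like classic item-level preference learning (pick the best individual items); as alpha grows, it starts rewarding sets whose internal variety matches what the training subsets exhibited, which is the portfolio effect the abstract describes.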
Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im
2017-02-01
The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.
Calibration of HEC-RAS hydrodynamic model using gauged discharge data and flood inundation maps
NASA Astrophysics Data System (ADS)
Tong, Rui; Komma, Jürgen
2017-04-01
The estimation of floods is essential for disaster alleviation. Hydrodynamic models are implemented to predict the occurrence and variance of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters to represent the natural flow resistance. In recent years, the calibration of hydrodynamic models has become faster and more practical following advances in earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering Center's River Analysis System (HEC-RAS) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges, together with flood inundation maps, were selected to calibrate the HEC-RAS model. Manning roughness values and lateral inflow factors were automatically optimized as parameters with the Shuffled Complex with Principal Component Analysis (SP-UCI) algorithm, developed from the Shuffled Complex Evolution (SCE-UA) algorithm. Different objective functions (Nash-Sutcliffe model efficiency coefficient, timing of peak, peak value, and root-mean-square deviation) were used singly or in combination. The lateral inflow factor was found to be the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameters in the calibration of the HEC-RAS model using flood extent images. The results showed that calibration by means of gauged discharge data and flood inundation maps, together with the Nash-Sutcliffe model efficiency coefficient as objective function, was very robust, yielding more reliable flood simulations and capturing the peak value and the timing of the peak.
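The Nash-Sutcliffe model efficiency coefficient used as an objective function above is straightforward to compute from paired observed and simulated series:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit; 0 means the
    model is no better than predicting the observed mean; negative
    values mean it is worse than the mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den
```

A calibration routine would maximize this value (or minimize 1 minus it) over the Manning roughness and lateral inflow parameters; because it normalizes by the variance of the observations, it is comparable across flood events of different magnitudes.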
Multi-sensor image fusion algorithm based on multi-objective particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Xia-zhu; Xu, Ya-wei
2017-11-01
On the basis of dual-tree complex wavelet transform (DT-CWT) theory, an approach based on a multi-objective particle swarm optimization algorithm (MOPSO) is proposed to objectively choose the fusion weights of the low-frequency sub-bands. High- and low-frequency sub-bands are produced by the DT-CWT. The absolute value of the coefficients is adopted as the fusion rule for the high-frequency sub-bands. The fusion weights of the low-frequency sub-bands are used as particles in MOPSO, with spatial frequency and average gradient adopted as the two fitness functions. The experimental results show that the proposed approach performs better than average fusion and fusion methods based on local variance and local energy, respectively, in brightness, clarity, and quantitative evaluation, which includes entropy, spatial frequency, average gradient, and QAB/F.
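The two fitness functions, spatial frequency and average gradient, can be sketched for a 2-D grayscale image stored as a list of rows. This is a plain-Python illustration of the standard definitions, not the authors' implementation.

```python
import math

def spatial_frequency(img):
    """Spatial frequency: RMS of row-wise and column-wise first
    differences, a measure of overall image activity."""
    rows, cols = len(img), len(img[0])
    rf = sum((img[i][j] - img[i][j - 1]) ** 2
             for i in range(rows) for j in range(1, cols)) / (rows * (cols - 1))
    cf = sum((img[i][j] - img[i - 1][j]) ** 2
             for i in range(1, rows) for j in range(cols)) / ((rows - 1) * cols)
    return math.sqrt(rf + cf)

def average_gradient(img):
    """Mean local gradient magnitude, a common sharpness measure."""
    rows, cols = len(img), len(img[0])
    total = 0.0
    for i in range(rows - 1):
        for j in range(cols - 1):
            dx = img[i][j + 1] - img[i][j]
            dy = img[i + 1][j] - img[i][j]
            total += math.sqrt((dx * dx + dy * dy) / 2.0)
    return total / ((rows - 1) * (cols - 1))
```

In the MOPSO setting, each particle encodes a candidate set of low-frequency fusion weights; the fused image is formed, and these two functions are evaluated on it as the competing objectives to be maximized.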
Environment Modeling Using Runtime Values for JPF-Android
NASA Technical Reports Server (NTRS)
van der Merwe, Heila; Tkachuk, Oksana; Nel, Seal; van der Merwe, Brink; Visser, Willem
2015-01-01
Software applications are developed to be executed in a specific environment. This environment includes external native libraries that add functionality to the application and drivers that fire the application's execution. For testing and verification, the environment of an application is simplified and abstracted using models or stubs. Empty stubs, returning default values, are simple to generate automatically, but they do not perform well when the application expects specific return values. Symbolic execution is used to find input parameters for drivers and return values for library stubs, but it struggles to detect the values of complex objects. In this work-in-progress paper, we explore an approach to generate drivers and stubs based on values collected at runtime instead of default values. Entry points and methods that need to be modeled are instrumented to log their parameters and return values. The instrumented applications are then executed using a driver and instrumented libraries. The values collected at runtime are used to generate driver and stub values on-the-fly that improve coverage during verification by enabling the execution of code that previously crashed or was missed. We are implementing this approach to improve the environment model of JPF-Android, our model checking and analysis tool for Android applications.
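The runtime value-collection idea can be illustrated in a few lines of Python; JPF-Android itself instruments Java bytecode, so this is an analogy rather than the tool's mechanism, and the names are invented.

```python
import functools

RUNTIME_LOG = {}  # method name -> list of (args, kwargs, return value)

def record_values(fn):
    """Wrap a method so each call's arguments and return value are
    logged; the log later seeds generated driver and stub values."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        RUNTIME_LOG.setdefault(fn.__name__, []).append((args, kwargs, result))
        return result
    return wrapper

def stub_for(name, default=None):
    """Build a stub that replays the most recently logged return value,
    falling back to a default when the method was never observed."""
    calls = RUNTIME_LOG.get(name)
    replay = calls[-1][2] if calls else default
    return lambda *a, **k: replay
```

During verification, replaying observed values instead of defaults lets execution proceed past call sites that would otherwise crash or be skipped with an empty stub, which is the coverage gain the abstract describes.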
ERIC Educational Resources Information Center
Puig, J.; Echarri, F.
2018-01-01
A primary aim of environmental education is to promote environmental values. Significant life experiences (SLE) are a powerful, fast and long-lasting way to achieve this objective, but they have received little scholarly attention thus far. As examples to help us characterize SLE and understand their function, the cases of three well-known…
Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature but we were unable to find optimal designs for 3 or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a 3 or more multiple-objective design problem. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor made multiple-objective optimal designs. PMID:26565557
Hyun, Seung Won; Wong, Weng Kee
2015-11-01
We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature but we were unable to find optimal designs for 3 or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a 3 or more multiple-objective design problem. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor made multiple-objective optimal designs.
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's output data must be close to the observed measurements. To achieve this optimal simulation, input parameters are tuned until we have minimized the objective function, which is the error between the simulation model outputs and the observed measurements. We developed an auxiliary package, which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model, which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with one minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values.
Our initial tests indicate that the developed interface for the Dakota toolbox could be used to perform analysis and optimization on a `black box' scientific model more efficiently than using just Dakota.
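The calibration loop the abstract describes (tune parameters until the misfit between model output and observations is minimized) can be sketched as follows. This is a minimal illustration, not the actual DAKOTA interface; `simulate`, `objective` and `calibrate` are hypothetical stand-ins, and the toy "heat-flow" model is invented for the example.

```python
# Hedged sketch of model calibration by objective-function minimization.
def simulate(conductivity, depths):
    # Toy stand-in for a heat-flow model: temperature falls off with
    # depth at a rate controlled by the conductivity parameter.
    return [10.0 - d / conductivity for d in depths]

def objective(conductivity, depths, observed):
    # Sum-of-squares error between model output and observations.
    return sum((s - o) ** 2
               for s, o in zip(simulate(conductivity, depths), observed))

def calibrate(depths, observed, lo=0.5, hi=5.0, steps=1000):
    # Brute-force parameter sweep; a real toolbox would use gradient-based
    # or genetic methods depending on the shape of the objective.
    candidates = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return min(candidates, key=lambda k: objective(k, depths, observed))

depths = [0.5, 1.0, 2.0, 4.0]
observed = simulate(2.0, depths)   # synthetic "measurements", known answer
best = calibrate(depths, observed)
print(round(best, 2))              # prints 2.0 (recovers the known value)
```

A multi-modal objective would defeat this single-sweep approach, which is the situation where the abstract's more advanced methods come in.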
Strength characterization of knee flexor and extensor muscles in Prader-Willi and obese patients.
Capodaglio, Paolo; Vismara, Luca; Menegoni, Francesco; Baccalaro, Gabriele; Galli, Manuela; Grugni, Graziano
2009-05-06
Despite evidence of an obesity-related disability, there is a lack of objective muscle function data in overweight subjects. Only a few studies provide instrumental strength measurements in non-syndromal obesity, whereas no data on Prader-Willi syndrome (PWS) have been reported. The aim of our study was to characterize the lower limb muscle function of patients affected by PWS as compared to non-syndromal obese and normal-weight subjects. We enrolled 20 obese (O) females (age: 29.1 ± 6.5 years; BMI: 38.1 ± 3.1), 6 PWS females (age: 27.2 ± 4.9 years; BMI: 45.8 ± 4.4) and 14 healthy normal-weight (H) females (age: 30.1 ± 4.7 years; BMI: 21 ± 1.6). Isokinetic strength during knee flexion and extension in both lower limbs at the fixed angular velocities of 60°/s, 180°/s and 240°/s was measured with a Cybex Norm dynamometer. The H, O and PWS populations appear to be clearly stratified with regard to muscle strength: PWS showed the lowest absolute peak torque (PT) for knee flexor and extensor muscles as compared to O (-55%) and H (-47%) (P = 0.00001). O showed significantly higher strength values than H for knee extension only (P = 0.0014). When strength data were normalised by body weight, PWS showed a 50% and a 70% reduction in PT as compared to O and H, respectively. Knee flexor strength values were on average half of those reported for extension in all three populations. The novel aspect of our study is the determination of objective measures of muscle strength in PWS and the comparison with O and H patients. The objective characterization of muscle function performed in this study provides baseline and outcome measures that may quantify specific strength deficits amenable to tailored rehabilitation programs and monitor the effectiveness of treatments.
Effects of environmental ozone on the lung function of senior citizens
NASA Astrophysics Data System (ADS)
Höppe, Peter; Lindner, Jutta; Praml, Georg; Brönner, Norman
1995-09-01
Measurements of lung function parameters with a body plethysmograph, and reports of unusual complaints or irritations, were taken from 41 senior citizens in the settings where they usually spend their daytime hours. The subjects belonged to a group commonly assumed to be at risk from ozone. Each subject was examined on 8 days, both in the morning and in the afternoon. The objective was to obtain for every subject an equal distribution of measuring days between those with elevated ozone concentrations (maximum 0.5 h mean values between 1.00 and 4.00 p.m. of at least 0.050 ppm) and those with low ozone concentrations (maximum 0.5 h mean values between 1.00 and 4.00 p.m. of at most 0.040 ppm). The results showed no relevant ozone-related effects on the lung function parameters or on the subjective reports of irritations. Thus there was no indication that senior citizens represent a group at particular risk with respect to moderately elevated concentrations of environmental ozone, such as occur in central Europe.
Tumor necrosis factor alpha and pulmonary function in Saskatchewan grain handlers.
McDuffie, Helen H; Nakagawa, Kazuko; Pahwa, Punam; Shindo, Junichi; Hashimoto, Mirai; Nakada, Naoyuki; Ghosh, Sunita; Kirychuk, Shelley P; Hucl, Pierre
2006-05-01
The objective of this study was to estimate the contribution of lifestyle (cigarettes) and tumor necrosis factor (TNF) alpha polymorphisms at position 308 of the TNF alpha gene promoter (TNF-308*1/*2) to pulmonary function among grain handlers. Employed male grain handlers (157) provided occupational and respiratory symptom information, pulmonary function measurements, and DNA for genotyping. The genotypes of 101 workers were TNF-308*1/*1, 47 were *1/*2, and nine were *2/*2. Current smokers whose genotype was *2/*2 or *1/*2 had lower pulmonary function values compared with other combinations of genotype and smoking status. Among *1/*1 homozygotes, current smokers had better mean percent of predicted forced expiratory volume in 1 second (P = 0.04) than nonsmokers, and better percent of predicted forced vital capacity than ex-smokers (P = 0.017) or nonsmokers (P = 0.008). These results indicate the complexity of determining which workers will develop acute and chronic adverse pulmonary conditions in response to exposure to grain dust and to the toxins in cigarette smoke interacting with their genotype.
Inagawa, H.; Toratani, Y.; Motohashi, K.; Nakamura, I.; Matsushita, M.; Fujiyoshi, S.
2015-01-01
We have developed a cryogenic fluorescence microscope system, the core of which is a reflecting objective that consists of spherical and aspherical mirrors. The use of an aspherical mirror allows the reflecting objective to have a numerical aperture (NA) of up to 0.99, which is close to the maximum possible NA of 1.03 in superfluid helium. The performance of the system at a temperature of 1.7 K was tested by recording a three-dimensional fluorescence image of individual quantum dots using excitation wavelengths (λex) of 532 nm and 635 nm. At 1.7 K, the microscope worked with achromatic and nearly diffraction-limited performance. The 1/e² radius (Γ) of the point spread function of the reflecting objective in the lateral (xy) direction was 0.212 ± 0.008 μm at λex = 532 nm and was less than 1.2 times the simulated value for a perfectly polished objective. The radius Γ in the axial (z) direction was 0.91 ± 0.04 μm at λex = 532 nm and was less than 1.4 times the simulated value of Γ. The chromatic aberrations between the two wavelengths were one order of magnitude smaller than Γ in each direction. PMID:26239746
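For scale, the textbook Abbe diffraction limit λ/(2·NA) gives the order of magnitude of the lateral resolution for the quoted NA of 0.99 and the two excitation wavelengths. Note this is a different metric from the 1/e² PSF radius reported in the abstract; the sketch below is a generic back-of-the-envelope check, not the paper's simulation.

```python
# Abbe diffraction limit, lambda / (2 * NA), converted from nm to micrometers.
def abbe_limit_um(wavelength_nm, na):
    return wavelength_nm / (2.0 * na) / 1000.0

for lam in (532, 635):
    # ~0.269 um at 532 nm, ~0.321 um at 635 nm for NA = 0.99
    print(lam, round(abbe_limit_um(lam, 0.99), 3))
```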
Moser, Eileen M; Huang, Grace C; Packer, Clifford D; Glod, Susan; Smith, Cynthia D; Alguire, Patrick C; Fazio, Sara B
2016-03-01
Medical students must learn how to practice high-value, cost-conscious care. By modifying the traditional SOAP (Subjective-Objective-Assessment-Plan) presentation to include a discussion of value (SOAP-V), we developed a cognitive forcing function designed to promote discussion of high-value, cost-conscious care during the delivery of patient care. The SOAP-V model prompts the student to consider (1) the evidence that supports a test or treatment, (2) the patient's preferences and values, and (3) the financial cost of a test or treatment compared to alternatives. Students report their findings to their teams during patient care rounds. This tool has been successfully used at 3 medical schools. Preliminary results indicate that students who have been trained in SOAP-V feel more empowered to address the economic healthcare crisis, are more comfortable initiating discussions about value, and are more likely to consider potential costs to the healthcare system. © 2015 Society of Hospital Medicine.
Affective and contextual values modulate spatial frequency use in object recognition
Caplette, Laurent; West, Gregory; Gomot, Marie; Gosselin, Frédéric; Wicker, Bruno
2014-01-01
Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSF). The present study aimed to investigate the SF content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, SFs of each image were then randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system. PMID:24904514
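The abstract reports spatial frequencies in both cycles per object and cycles per degree; the two units are related by the visual angle the object subtends. The numbers given (14-24 cycles/object ≈ 2.3-4 cycles/degree) are consistent with stimuli of roughly 6 degrees, but that object size is an inference, not stated in the source; `cpo_to_cpd` is a hypothetical helper.

```python
# Converting spatial frequency units: cycles per object -> cycles per degree,
# given the visual angle (in degrees) subtended by the object.
def cpo_to_cpd(cycles_per_object, object_size_deg):
    return cycles_per_object / object_size_deg

# Assumed ~6-degree stimuli (inferred from the abstract's paired values).
print(round(cpo_to_cpd(14, 6.0), 2))  # ~2.33 cycles/degree
print(round(cpo_to_cpd(24, 6.0), 2))  # 4.0 cycles/degree
```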
Éléments de conception d'un système géothermique hybride par optimisation financière [Design elements of a hybrid geothermal system through financial optimization]
NASA Astrophysics Data System (ADS)
Henault, Benjamin
The choice of design parameters for a hybrid geothermal system is usually based on current practices or questionable assumptions. In fact, the main purpose of a hybrid geothermal system is to maximize the energy savings associated with heating and cooling requirements while minimizing the costs of operation and installation. This thesis presents a strategy to maximize the net present value of a hybrid geothermal system. This objective is expressed by a series of equations that lead to a global objective function. Iteratively, the algorithm converges to an optimal solution by using an optimization method: the conjugate gradient combined with a combinatorial method. The objective function presented here makes use of a simulation algorithm for predicting the fluid temperature of a hybrid geothermal system on an hourly basis. Thus, the optimization method iteratively selects six variables, of continuous and integer type, affecting project costs and energy savings. These variables are the limit temperature at the entry of the heat pump (geothermal side), the number of heat pumps, the number of geothermal wells and the distance in X and Y between the geothermal wells. Generally, these variables have a direct impact on the cost of the installation, the entering water temperature at the heat pumps, the cost of equipment, the thermal interference between boreholes, the total capacity of the geothermal system, system performance, etc. On the other hand, the arrangement of geothermal wells is variable and is often irregular depending on the number of boreholes selected by the algorithm. Removal or addition of one or more boreholes is guided by a predefined order dictated by the designer. This feature of irregular arrangement represents an innovation in the field and is necessary for the operation of this algorithm. Indeed, it ensures continuity between the numbers of boreholes, allowing the use of the conjugate gradient method.
The proposed method provides as outputs the net present value of the optimal solution, the position of the vertical boreholes, the number of installed heat pumps, the limits of entering water temperature at the heat pumps and the energy consumption of the hybrid geothermal system. To demonstrate the added value of this design method, two case studies are analyzed, one for a commercial building and one for a residential building. The two studies lead to the following conclusions: the net present value of hybrid geothermal systems can be significantly improved by the right choice of specifications; the economic value of a geothermal project is strongly influenced by the number of heat pumps, the number of geothermal wells and the temperature limit in heating mode; the choice of design parameters should always be driven by an objective function and not by the designer alone; and peak demand charges favor hybrid geothermal systems with a higher capacity. Then, in order to validate its operation, this new design method is compared to the standard sizing method that is commonly used. When the hybrid geothermal system is designed according to the standard sizing method to meet 70% of the peak heating load, the net present value over 20 years for the residential project is negative, at -61,500, while it is 43,700 for the commercial hybrid geothermal system. Using the new design method presented in this thesis, the net present values of the projects are 162,000 and 179,000, respectively. The use of this algorithm is beneficial because it significantly increases the net present value of projects. The research presented in this thesis makes it possible to optimize the financial performance of hybrid geothermal systems. The proposed method will allow industry stakeholders to increase the profitability of their projects associated with low-temperature geothermal energy.
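The net-present-value objective at the heart of the thesis can be illustrated with the standard discounted cash-flow formula: discount each year's energy savings and subtract the installation cost. The numbers below are illustrative, not taken from the thesis, and the full method also folds equipment counts and borehole placement into the objective.

```python
# Minimal net-present-value sketch: upfront installation cost against a
# stream of yearly energy savings discounted at a fixed rate.
def npv(install_cost, annual_saving, rate, years):
    discounted = sum(annual_saving / (1.0 + rate) ** t
                     for t in range(1, years + 1))
    return discounted - install_cost

# Illustrative project: 100k upfront, 12k/year savings, 5% rate, 20 years.
print(round(npv(100_000, 12_000, 0.05, 20), 2))
```

An optimizer like the one described would vary the design variables (number of wells, heat pumps, temperature limits) and keep the configuration with the largest NPV.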
Lumb, Ashok; Halliwell, Doug; Sharma, Tribeni
2006-02-01
All six ecosystem initiatives evolved from many years of federal, provincial, First Nation, local government and community attention to the stresses on sensitive habitats and species, air and water quality, and the consequent threats to community livability. This paper assesses the water quality aspect of the ecosystem initiatives and employs the newly developed Canadian Council of Ministers of the Environment Water Quality Index (CCME WQI), which provides a convenient means of summarizing complex water quality data that can be easily understood by the public, water distributors, planners, managers and policy makers. The CCME WQI incorporates three elements: Scope, the number of water quality parameters (variables) not meeting water quality objectives (F1); Frequency, the number of times the objectives are not met (F2); and Amplitude, the extent to which the objectives are not met (F3). The index produces a number between 0 (worst) and 100 (best) to reflect the water quality. This study evaluates the water quality of the Mackenzie-Great Bear sub-basin by employing two modes of objective functions (threshold values): one based on the CCME water quality guidelines and the other based on site-specific values determined by statistical analysis of the historical database. Results suggest that the water quality of the Mackenzie-Great Bear sub-basin is impacted by high turbidity and total (mostly particulate) trace metals due to high suspended sediment loads during the open water season. Comments are also provided on water quality and human health issues in the Mackenzie basin based on the findings, and on the usefulness of the CCME water quality guidelines and site-specific values.
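The three elements combine into a single 0-100 score. The sketch below follows the published CCME formulation as I recall it (F1 and F2 as failure percentages, F3 from normalized excursions, combined via a root-sum-square scaled by 1.732); consult the CCME technical report before using it for real assessments, and note the toy data and upper-limit-only objectives are assumptions of this example.

```python
import math

# Sketch of the CCME Water Quality Index from its three elements.
def ccme_wqi(tests):
    # tests: {variable: [(value, objective), ...]}; objectives = upper limits.
    n_vars = len(tests)
    failed_vars = sum(any(v > obj for v, obj in rows) for rows in tests.values())
    all_rows = [pair for rows in tests.values() for pair in rows]
    n_tests = len(all_rows)
    failed_tests = sum(v > obj for v, obj in all_rows)
    f1 = 100.0 * failed_vars / n_vars          # Scope
    f2 = 100.0 * failed_tests / n_tests        # Frequency
    excursions = sum(v / obj - 1.0 for v, obj in all_rows if v > obj)
    nse = excursions / n_tests                 # normalized sum of excursions
    f3 = nse / (0.01 * nse + 0.01)             # Amplitude
    return 100.0 - math.sqrt(f1**2 + f2**2 + f3**2) / 1.732

# Toy data (invented): one turbidity exceedance out of four tests.
data = {
    "turbidity": [(12.0, 5.0), (4.0, 5.0)],
    "lead":      [(0.001, 0.005), (0.002, 0.005)],
}
print(round(ccme_wqi(data), 1))
```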
FRAGMENTATION AND EVOLUTION OF MOLECULAR CLOUDS. II. THE EFFECT OF DUST HEATING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urban, Andrea; Evans, Neal J.; Martel, Hugo
2010-02-20
We investigate the effect of heating by luminosity sources in a simulation of clustered star formation. Our heating method involves a simplified continuum radiative transfer method that calculates the dust temperature; the gas temperature is set by the dust temperature. We present the results of four simulations: two simulations assume an isothermal equation of state and the other two include dust heating. We investigate two mass regimes, i.e., 84 M_sun and 671 M_sun, using these two different energetics algorithms. The mass functions for the isothermal simulations and the simulations that include dust heating are drastically different. In the isothermal simulations, we do not form any objects with masses above 1 M_sun. However, the simulation with dust heating, while missing some of the low-mass objects, forms high-mass objects (~20 M_sun) which have a distribution similar to the Salpeter initial mass function. The envelope density profiles around the stars formed in our simulation match observed values around isolated, low-mass star-forming cores. We find the accretion rates to be highly variable and, on average, increasing with final stellar mass. By including radiative feedback from stars in a cluster-scale simulation, we have determined that it is a very important effect which drastically affects the mass function and yields important insights into the formation of massive stars.
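The Salpeter initial mass function referenced above has the standard form dN/dM ∝ M^(-2.35). A minimal inverse-transform sampler for that power law is sketched below; it is a generic illustration of the distribution, not the simulation code from the paper, and the mass range is chosen arbitrarily.

```python
import random

# Inverse-transform sampling of the Salpeter IMF, dN/dM ~ M^(-2.35),
# on a mass interval [m_lo, m_hi] in solar masses.
ALPHA = 2.35

def sample_salpeter(m_lo, m_hi, n, rng):
    a = 1.0 - ALPHA                      # exponent of the power-law CDF
    lo, hi = m_lo ** a, m_hi ** a
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / a) for _ in range(n)]

rng = random.Random(0)
masses = sample_salpeter(1.0, 20.0, 10000, rng)
print(min(masses) >= 1.0 and max(masses) <= 20.0)   # all in range: True
print(sum(m < 2.0 for m in masses) > 5000)          # low masses dominate: True
```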
Scenario generation for stochastic optimization problems via the sparse grid method
Chen, Michael; Mehrotra, Sanjay; Papp, David
2015-04-19
We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function value as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
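The contrast the abstract draws can be seen in miniature by approximating an expectation E[f(X)] with random scenarios versus a deterministic grid of scenarios. The sketch below compares plain Monte Carlo with a full tensor midpoint rule on the unit square (sparse grids refine the tensor idea so its cost does not explode with dimension); it is an illustration of the general principle, not the paper's method, and the integrand is invented.

```python
import math, random

# E[f(X)] for X uniform on the unit square, with a known closed form.
f = lambda x, y: math.exp(x + y)
exact = (math.e - 1.0) ** 2

def mc_estimate(n, rng):
    # n random scenarios with equal weights.
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

def midpoint_estimate(k):
    # k x k tensor grid of midpoint scenarios (256 points for k = 16).
    pts = [(i + 0.5) / k for i in range(k)]
    return sum(f(x, y) for x in pts for y in pts) / (k * k)

rng = random.Random(1)
mc_err = abs(mc_estimate(256, rng) - exact)      # typically ~1e-1
grid_err = abs(midpoint_estimate(16) - exact)    # ~1e-3 for the same budget
print(round(grid_err, 5))
```

With the same budget of 256 scenarios, the deterministic rule is far more accurate on this smooth integrand, which mirrors the abstract's finding for sparse grids.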
Sturkenboom, Ingrid H W M; Graff, Maud J; Borm, George F; Adang, Eddy M M; Nijhuis-van der Sanden, Maria W G; Bloem, Bastiaan R; Munneke, Marten
2013-02-02
Occupational therapists may have an added value in the care of patients with Parkinson's disease whose daily functioning is compromised, as well as for their immediate caregivers. Evidence for this added value is inconclusive due to a lack of rigorous studies. The aim of this trial is to evaluate the (cost) effectiveness of occupational therapy in improving daily functioning of patients with Parkinson's disease. A multicenter, assessor-blinded, two-armed randomized controlled clinical trial will be conducted, with evaluations at three and six months. One hundred ninety-two home-dwelling patients with Parkinson's disease and with an occupational therapy indication will be assigned to the experimental group or to the control group (2:1). Patients and their caregivers in the experimental group will receive ten weeks of home-based occupational therapy according to recent Dutch guidelines. The intervention will be delivered by occupational therapists who have been specifically trained to treat patients according to these guidelines. Participants in the control group will not receive occupational therapy during the study period. The primary outcome for the patient is self-perceived daily functioning at three months, assessed with the Canadian Occupational Performance Measure. Secondary patient-related outcomes include: objective performance of daily activities, self-perceived satisfaction with performance in daily activities, participation, impact of fatigue, proactive coping skills, health-related quality of life, overall quality of life, health-related costs, and effectiveness at six months. All outcomes at the caregiver level will be secondary and will include self-perceived burden of care, objective burden of care, proactive coping skills, overall quality of life, and care-related costs. Effectiveness will be evaluated using a covariance analysis of the difference in outcome at three months. 
An economic evaluation from a societal perspective will be conducted, as well as a process evaluation. This is the first large-scale trial specifically evaluating occupational therapy in Parkinson's disease. It is expected to generate important new information about the possible added value of occupational therapy on daily functioning of patients with Parkinson's disease. Clinicaltrials.gov: NCT01336127.
Arana, F Sergio; Parkinson, John A; Hinton, Elanor; Holland, Anthony J; Owen, Adrian M; Roberts, Angela C
2003-10-22
Theories of incentive motivation attempt to capture the way in which objects and events in the world can acquire high motivational value and drive behavior, even in the absence of a clear biological need. In addition, for an individual to select the most appropriate goal, the incentive values of competing desirable objects need to be defined and compared. The present study examined the neural substrates by which appetitive incentive value influences prospective goal selection, using positron emission tomographic neuroimaging in humans. Sated subjects were shown a series of restaurant menus that varied in incentive value, specifically tailored for each individual, and in half the trials, were asked to make a selection from the menu. The amygdala was activated by high-incentive menus regardless of whether a choice was required. Indeed, activity in this region varied as a function of individual subjective ratings of incentive value. In contrast, distinct regions of the orbitofrontal cortex were recruited both during incentive judgments and goal selection. Activity in the medial orbital cortex showed a greater response to high-incentive menus and when making a choice, with the latter activity also correlating with subjective ratings of difficulty. Lateral orbitofrontal activity was observed selectively when participants had to suppress responses to alternative desirable items to select their most preferred. Taken together, these data highlight the differential contribution of the amygdala and regions within the orbitofrontal cortex in a neural system underlying the selection of goals based on the prospective incentive value of stimuli, over and above homeostatic influences.
Brain mechanisms of persuasion: how 'expert power' modulates memory and attitudes.
Klucharev, Vasily; Smidts, Ale; Fernández, Guillén
2008-12-01
Human behaviour is affected by various forms of persuasion. The general persuasive effect of high expertise of the communicator, often referred to as 'expert power', is well documented. We found that a single exposure to a combination of an expert and an object leads to a long-lasting positive effect on memory for and attitude towards the object. Using functional magnetic resonance imaging, we probed the neural processes predicting these behavioural effects. Expert context was associated with distributed left-lateralized brain activity in prefrontal and temporal cortices related to active semantic elaboration. Furthermore, experts enhanced subsequent memory effects in the medial temporal lobe (i.e. in hippocampus and parahippocampal gyrus) involved in memory formation. Experts also affected subsequent attitude effects in the caudate nucleus involved in trustful behaviour, reward processing and learning. These results may suggest that the persuasive effect of experts is mediated by modulation of caudate activity resulting in a re-evaluation of the object in terms of its perceived value. Results extend our view of the functional role of the dorsal striatum in social interaction and enable us to make the first steps toward a neuroscientific model of persuasion.
Image quality of a pixellated GaAs X-ray detector
NASA Astrophysics Data System (ADS)
Sun, G. C.; Makham, S.; Bourgoin, J. C.; Mauger, A.
2007-02-01
X-ray detection requires materials with large atomic numbers Z in order to absorb the radiation efficiently. In the case of X-ray imaging, fluorescence is a limiting factor for the spatial resolution and contrast at energies above the Kα threshold. Since both the energy and the yield of the fluorescence of a given material increase with the atomic number, there is an optimum value of Z. GaAs, which can now be epitaxially grown as self-supported thick layers that fulfil the requirements for imaging (good homogeneity of the electronic properties), corresponds to this optimum. Image performance obtained with this material is evaluated in terms of the line spread function and the modulation transfer function, and a comparison with CsI is made. We evaluate the image contrast obtained for a given object contrast with GaAs and CsI detectors, in the photon energy range of medical applications. Finally, we discuss the minimum object size that can be detected by these detectors under mammography conditions. This demonstrates that an object of a given size can be detected using a GaAs detector with a dose at least 100 times lower than using a CsI detector.
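The two figures of merit named above are linked by a standard relation: the modulation transfer function (MTF) is the normalized magnitude of the Fourier transform of the line spread function (LSF). For a Gaussian LSF of standard deviation σ, MTF(f) = exp(-2π²σ²f²). The sketch below checks that relation numerically with a plain DFT; the LSF width is invented, not a GaAs measurement.

```python
import math, cmath

# MTF as the normalized magnitude of the discrete Fourier transform
# of a sampled line spread function.
def mtf_from_lsf(lsf):
    n = len(lsf)
    out = []
    for k in range(n):
        s = sum(lsf[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        out.append(abs(s))
    return [v / out[0] for v in out]          # normalize so MTF(0) = 1

sigma, dx, n = 0.1, 0.01, 256                 # mm, mm per sample (assumed)
lsf = [math.exp(-((i - n // 2) * dx) ** 2 / (2 * sigma ** 2)) for i in range(n)]
mtf = mtf_from_lsf(lsf)

f1 = 1.0 / (n * dx)                           # first DFT frequency bin, cycles/mm
analytic = math.exp(-2 * math.pi ** 2 * sigma ** 2 * f1 ** 2)
print(abs(mtf[1] - analytic) < 1e-3)          # matches the Gaussian formula: True
```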
NASA Astrophysics Data System (ADS)
Usta, Metin; Tufan, Mustafa Çağatay
2017-11-01
The objective of this work is to present stopping power and range values for some human tissues at energies ranging from 1 MeV to 1 GeV and from 1 to 500 MeV, respectively. The considered human tissues are lung, intestine, skin, larynx, breast, bladder, prostate and ovary. In this work, the stopping power is calculated by considering the velocity-dependent effective charge number and effective mean excitation energies of the target material. We used the Hartree-Fock-Roothaan (HFR) atomic wave function to determine the charge density and the continuous slowing down approximation (CSDA) method for the calculation of the proton range. Electronic stopping power values for these tissues have been compared with the ICRU 44 and 46 reports and with SRIM, Janni and CasP data in terms of percent error. Range values for these tissues have been compared with SRIM, FLUKA and Geant4 data. For the electronic stopping power results, the ICRU, SRIM and Janni data showed the best fit with our values at 1-50 MeV, 50-250 MeV and 250 MeV-1 GeV, respectively. For the range results, the best agreement with the calculated values was found for the SRIM data, and the error level is less than 10% in proton therapy. However, errors greater than 30% were observed at energies of 250 MeV and above.
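The CSDA range mentioned above is obtained by integrating the reciprocal stopping power, R(E) = ∫₀^E dE'/S(E'). The sketch below does that integral numerically for a toy power-law stopping power standing in for tabulated tissue data (the coefficients are illustrative, not from the paper), and checks the result against the closed form available for a power law.

```python
# CSDA range by trapezoidal integration of 1/S(E') from 0 to E.
def csda_range(energy, stopping, steps=20000):
    h = energy / steps
    # Clamp the lowest node away from zero to avoid a singular S(0).
    g = [1.0 / stopping(max(i * h, 1e-9)) for i in range(steps + 1)]
    return h * (sum(g) - 0.5 * (g[0] + g[-1]))

# Toy stopping power S(E) = c * E^(-p) in arbitrary units (assumed values).
c, p = 100.0, 0.8
s = lambda e: c * e ** (-p)

r = csda_range(200.0, s)                      # e.g. a 200 MeV proton
analytic = 200.0 ** (1 + p) / (c * (1 + p))   # exact for the power law
print(abs(r - analytic) / analytic < 1e-3)    # numeric matches exact: True
```

With real data, `stopping` would interpolate a tabulated S(E) for the tissue of interest instead of a closed-form law.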
The Virtual Short Physical Performance Battery
Wrights, Abbie P.; Haakonssen, Eric H.; Dobrosielski, Meredith A.; Chmelo, Elizabeth A.; Barnard, Ryan T.; Pecorella, Anthony; Ip, Edward H.; Rejeski, W. Jack
2015-01-01
Background. Performance-based and self-report instruments of physical function are frequently used and provide complementary information. Identifying older adults with a mismatch between actual and perceived function has utility in clinical settings and in the design of interventions. Using novel, video-animated technology, the objective of this study was to develop a self-report measure that parallels the domains of objective physical function assessed by the Short Physical Performance Battery (SPPB)—the virtual SPPB (vSPPB). Methods. The SPPB, vSPPB, the self-report Pepper Assessment Tool for Disability, the Mobility Assessment Tool-short form, and a 400-m walk test were administered to 110 older adults (mean age = 80.6±5.2 years). One-week test–retest reliability of the vSPPB was examined in 30 participants. Results. The total SPPB (mean [±SD] = 7.7±2.8) and vSPPB (7.7±3.2) scores were virtually identical, yet moderately correlated (r = .601, p < .05). The component scores of the SPPB and vSPPB were also moderately correlated (all p values <.01). The vSPPB (intraclass correlation = .963, p < .05) was reliable; however, individuals with the lowest function overestimated their overall lower extremity function while participants of all functional levels overestimated their ability on chair stands, but accurately perceived their usual gait speed. Conclusion. In spite of the similarity between the SPPB and vSPPB, the moderate strength of the association between the two suggests that they offer unique perspectives on an older adult’s physical function. PMID:25829520
Correlation functions from a unified variational principle: Trial Lie groups
NASA Astrophysics Data System (ADS)
Balian, R.; Vénéroni, M.
2015-11-01
Time-dependent expectation values and correlation functions for many-body quantum systems are evaluated by means of a unified variational principle. It optimizes a generating functional depending on sources associated with the observables of interest. It is built by imposing through Lagrange multipliers constraints that account for the initial state (at equilibrium or off equilibrium) and for the backward Heisenberg evolution of the observables. The trial objects are respectively akin to a density operator and to an operator involving the observables of interest and the sources. We work out here the case where trial spaces constitute Lie groups. This choice reduces the original degrees of freedom to those of the underlying Lie algebra, consisting of simple observables; the resulting objects are labeled by the indices of a basis of this algebra. Explicit results are obtained by expanding in powers of the sources. Zeroth and first orders provide thermodynamic quantities and expectation values in the form of mean-field approximations, with dynamical equations having a classical Lie-Poisson structure. At second order, the variational expression for two-time correlation functions separates, as does its exact counterpart, the approximate dynamics of the observables from the approximate correlations in the initial state. Two building blocks are involved: (i) a commutation matrix which stems from the structure constants of the Lie algebra; and (ii) the second-derivative matrix of a free-energy function. The diagonalization of both matrices, required for practical calculations, is worked out in a way analogous to the standard RPA. The ensuing structure of the variational formulae is the same as for a system of non-interacting bosons (or of harmonic oscillators) plus, at non-zero temperature, classical Gaussian variables. This property is explained by mapping the original Lie algebra onto a simpler Lie algebra.
The results, valid for any trial Lie group, fulfill consistency properties and encompass several special cases: linear responses, static and time-dependent fluctuations, zero- and high-temperature limits, static and dynamic stability of small deviations.
Zheng, Lai; Ismail, Karim
2017-05-01
Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map a conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate the model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Do family physicians' electronic health records support meaningful use?
Peterson, Lars E; Blackburn, Brenna; Ivins, Douglas; Mitchell, Jason; Matson, Christine; Phillips, Robert L
2015-03-01
Spurred by government incentives, the use of electronic health records (EHRs) in the United States has increased; however, whether these EHRs have the functionality necessary to meet meaningful use (MU) criteria remains unknown. Our objective was to characterize family physician access to MU functionality when using a MU-certified EHR. Data were obtained from a convenience survey of family physicians accessing their American Board of Family Medicine online portfolio in 2011. A brief survey queried MU functionality. We used descriptive statistics to characterize the responses and bivariate statistics to test associations between MU and patient communication functions by presence of a MU-certified EHR. Out of 3855 respondents, 60% reported having an EHR that supports MU. Physicians with MU-certified EHRs were more likely than physicians without MU-certified EHRs to report patient registry activities (49.7% vs. 32.3%, p-value<0.01), tracking quality measures (74.1% vs. 56.4%, p-value<0.01), access to labs or consultation notes, and electronic prescribing; but electronic communication abilities were low regardless of EHR capabilities. Family physicians with MU-certified EHRs are more likely to report MU functionality; however, a sizeable minority does not report MU functions. Many family physicians with MU-certified EHRs may not successfully meet the successively stringent MU criteria and may face significant upgrade costs to do so. Cross sectional survey. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Mustak, S.
2013-09-01
The correction of atmospheric effects is essential because visible bands of shorter wavelength are strongly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of this paper are to find the haze values present in all spectral bands and to correct them for urban analysis. In this paper, the Improved Dark Object Subtraction method of P. Chavez (1988) is applied to correct atmospheric haze in a Resourcesat-1 LISS-4 multispectral satellite image. Dark Object Subtraction is a very simple image-based method of atmospheric haze correction which assumes that there are at least a few pixels within an image that should be black (0% reflectance); such dark objects are clear water bodies and shadows, whose DN values are zero or close to zero in the image. Simple Dark Object Subtraction is a first-order atmospheric correction, whereas the Improved Dark Object Subtraction method corrects the haze in terms of atmospheric scattering and path radiance based on a power law for the relative scattering effect of the atmosphere. The haze values extracted using the Simple Dark Object Subtraction method for the Green band (Band 2), Red band (Band 3) and NIR band (Band 4) are 40, 34 and 18, whereas the haze values extracted using the Improved Dark Object Subtraction method are 40, 18.02 and 11.80 for the aforesaid bands. It is concluded that the haze values extracted by the Improved Dark Object Subtraction method provide more realistic results than the Simple Dark Object Subtraction method.
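The arithmetic of the improved method can be sketched in a few lines of Python. This is a hedged illustration only: the centre wavelengths and the Rayleigh exponent below are assumed for the sketch, not taken from the paper.

```python
def improved_dos_haze(haze_ref, wl_ref, wavelengths, n=4.0):
    """Estimate per-band haze from the reference band's dark-object DN using
    a relative-scattering power law (haze proportional to wavelength**-n);
    n=4 corresponds to pure Rayleigh scattering, smaller n to hazier skies."""
    return [haze_ref * (wl / wl_ref) ** (-n) for wl in wavelengths]

def dos_correct(dn_values, haze):
    """First-order correction: subtract the haze (path radiance) estimate,
    clipping at zero so corrected DNs stay non-negative."""
    return [max(dn - haze, 0.0) for dn in dn_values]

# Illustrative centre wavelengths in micrometres (assumed, not from the paper)
wavelengths = [0.56, 0.65, 0.82]          # green, red, NIR
haze = improved_dos_haze(40.0, 0.56, wavelengths)
# Longer wavelengths receive smaller haze estimates, matching the paper's
# observation that the improved method lowers the red and NIR corrections.
```

The key design point is that only one dark-object value (here the green band's) is read from the image; the other bands' haze values follow from the scattering model instead of from possibly unreliable dark pixels.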
NASA Astrophysics Data System (ADS)
Ahmad, Naseer; Kamal, Shahid; Raza, Zulfiqar Ali; Hussain, Tanveer
2017-03-01
The present study investigated multi-response optimization of certain input parameters, viz. the concentrations of an oil- and water-repellent finish (Oleophobol CP-C®) and of a dimethylol dihydroxy ethylene urea based cross-linking agent (Knittex FEL), and the curing temperature, with respect to some mechanical (i.e. tear and tensile strengths), functional (i.e. water contact angle 'WCA' and oil contact angle 'OCA') and comfort (i.e. crease recovery angle 'CRA', air permeability 'AP', and stiffness) properties of an oleo-hydrophobic finished fabric, using response surface methodology and the desirability function. The results were examined using analysis of variance (ANOVA) and the desirability function to identify the optimum levels of the input variables. ANOVA was also employed to identify the percentage contribution of each process factor. Under the optimized conditions, which were obtained with a total desirability value of 0.7769, the experimental values for Oleophobol CP-C® (O-CPC), Knittex FEL (K-FEL) and curing temperature (C-Temp) agreed closely with the predicted values. The optimized process parameters for maximum WCA (135°), OCA (129°), AP (290 m s-1), CRA (214°), tear (1492 gf) and tensile (764 N) strengths and minimum stiffness (3.2928 cm) were found to be: concentration of O-CPC of 44.44 g l-1, concentration of cross-linker K-FEL of 32.07 g l-1, and C-Temp of 161.81 °C.
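The desirability-function aggregation used in this kind of multi-response optimization can be sketched as follows. This is a generic Derringer-Suich form; the bounds, weights and example values are placeholders, not the fitted models from the study.

```python
from math import prod

def desirability_max(y, low, high, weight=1.0):
    """Derringer-Suich desirability for a larger-is-better response:
    0 below `low`, 1 above `high`, a power ramp in between."""
    d = min(max((y - low) / (high - low), 0.0), 1.0)
    return d ** weight

def overall_desirability(ds):
    """Total desirability: the geometric mean of the individual
    desirabilities, so any single zero response rejects the setting."""
    return prod(ds) ** (1.0 / len(ds))

# Placeholder example: three responses scored against assumed target ranges
d_wca = desirability_max(135.0, low=100.0, high=140.0)
d_cra = desirability_max(214.0, low=150.0, high=220.0)
d_tear = desirability_max(1492.0, low=1000.0, high=1600.0)
total = overall_desirability([d_wca, d_cra, d_tear])
```

The geometric mean is the standard choice here precisely because it penalizes settings that sacrifice any one response entirely, which an arithmetic mean would not.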
Büyükvural Şen, Sıdıka; Özbudak Demir, Sibel; Ekiz, Timur; Özgirgin, Neşe
2015-01-01
Objective: To evaluate the effects of bilateral isokinetic strengthening training applied to the knee and ankle muscles on balance, functional parameters, gait, and the quality of life in stroke patients. Methods: Fifty patients (33 M, 17 F) with subacute-chronic stroke and 30 healthy subjects were included. Stroke patients were allocated into isokinetic and control groups. A conventional rehabilitation program was applied in all cases; additionally, maximal concentric isokinetic strengthening training was applied to the knee and ankle muscles bilaterally in the isokinetic group 5 days a week for 3 weeks. A Biodex System 3 Pro Multijoint System isokinetic dynamometer was used for the isokinetic evaluation. The groups were assessed by the Functional Independence Measure, Stroke Specific Quality of Life Scale, Timed 10-Meter Walk Test, Six-Minute Walk Test, Stair-Climbing Test, Timed Up & Go Test, Berg Balance Scale, and Rivermead Mobility Index. Results: Compared with baseline, the isokinetic PT values of the knee and ankle on both sides increased significantly in all cases. PT change values were significantly higher in the isokinetic group than in the control group (P<0.025). Furthermore, the quality of life, gait, balance and mobility index values improved significantly in both groups, and the increases were significantly higher in the isokinetic group (P<0.025, P<0.05). Conclusion: Bilateral isokinetic strengthening training in addition to a conventional rehabilitation program after stroke seems to be effective in strengthening muscles on both sides and improving functional parameters, gait, balance and quality of life. PMID:26629238
Sumiyoshi, Tatsuaki; Shima, Yasuo; Okabayashi, Takehiro; Kozuki, Akihito; Hata, Yasuhiro; Noda, Yoshihiro; Kouno, Michihiko; Miyagawa, Kazuyuki; Tokorodani, Ryotaro; Saisaka, Yuichi; Tokumaru, Teppei; Nakamura, Toshio; Morita, Sojiro
2016-07-01
The objective of this study was to determine the utility of Tc-99m-diethylenetriamine-penta-acetic acid-galactosyl human serum albumin ((99m)Tc-GSA) single-photon emission computed tomography (SPECT)/CT fusion imaging for posthepatectomy remnant liver function assessment in hilar bile duct cancer patients. Thirty hilar bile duct cancer patients who underwent major hepatectomy with extrahepatic bile duct resection were retrospectively analyzed. Indocyanine green plasma clearance rate (KICG) value and estimated KICG by (99m)Tc-GSA scintigraphy (KGSA) and volumetric and functional rates of future remnant liver by (99m)Tc-GSA SPECT/CT fusion imaging were used to evaluate preoperative whole liver function and posthepatectomy remnant liver function, respectively. Remnant (rem) KICG (= KICG × volumetric rate) and remKGSA (= KGSA × functional rate) were used to predict future remnant liver function; major hepatectomy was considered unsafe for values <0.05. The correlation of remKICG and remKGSA with posthepatectomy mortality and morbidity was determined. Although remKICG and remKGSA were not significantly different (median value: 0.071 vs 0.075), functional rates of future remnant liver were significantly higher than volumetric rates (median: 0.54 vs 0.46; P < .001). Hepatectomy was considered unsafe in 17% and 0% of patients using remKICG and remKGSA, respectively. Postoperative liver failure and mortality did not occur in the patients for whom hepatectomy was considered unsafe based on remKICG. remKGSA showed a stronger correlation with postoperative prothrombin time activity than remKICG. (99m)Tc-GSA SPECT/CT fusion imaging enables accurate assessment of future remnant liver function and suitability for hepatectomy in hilar bile duct cancer patients. Copyright © 2016 Elsevier Inc. All rights reserved.
[Incidence of refractive errors and subsequent selection of corrective aids].
Benes, P; Synek, S; Petrová, S; Sokolová, Sidlová J; Forýtková, L; Holoubková, Z
2012-02-01
This study follows the occurrence of refractive errors in the population and the possible selection of the appropriate type of corrective aid. Objective measurement, followed by determination of the subjective refraction of the eye, is an essential act in optometric practice. The sample of 615 patients (1230 eyes) is divided according to refractive error into myopia and hyperopia groups, with emmetropic clients listed as a control group. The results of the objective and subjective refraction values are compared and statistically processed. The study included 615 respondents. To determine the objective refraction, an autorefractokeratometer with Placido disc was used, and the values of the spherical and astigmatic correction components, including the axis, were recorded. These measurements were subsequently verified and tested subjectively using trial lenses and a projection optotype at the normal investigative distance of 5 meters. The appropriate corrective aids were then recommended. Group I consists of 123 men and 195 women with myopia (n = 635 eyes), with an average age of 39 +/- 18.9 years. Objective refraction: sphere -2.57 +/- 2.46 D, cylinder -1.10 +/- 1.01 D, axis 100 +/- 53.16 degrees. The subjective results are as follows: sphere -2.28 +/- 2.33 D, cylinder -0.63 +/- 0.80 D, axis 99.8 +/- 56.64 degrees. Group II comprises hyperopic clients and consists of 67 men and 107 women (n = 348). The average age is 58.84 +/- 16.73 years. Objective refraction: sphere +2.81 +/- 2.21 D, cylinder -1.00 +/- 0.94 D, axis 95 +/- 45.4 degrees. The subsequent determination of subjective refraction gave the following results: sphere +2.28 +/- 2.06 D, cylinder -0.49 +/- 0.85 D, axis 95.9 +/- 46.4 degrees. Group III consists of emmetropes whose final minimum visual acuity was Vmin = 1.0 (5/5) or better. Overall, this control group comprises 52 men and 71 women (n = 247).
The average age was 43 +/- 18.73 years. Objective refraction: sphere +0.32 +/- 0.45 D, cylinder -0.51 +/- 0.28 D, axis 94.7 +/- 57.5 degrees. In all examined groups, and in the individual components of the refractive errors, the values of objective refraction were higher than those obtained by the subsequent subjective examination that preceded the recommendation of the appropriate type of corrective aid. The results also confirmed the hypothesis that with-the-rule astigmatism predominates in the population; according to the literature, its axis distribution ranges within 90 +/- 10 degrees. From the observed values of refractive error correction, the most common prescription ranges and products for the correction of a given ametropia are also derived. In the selection and design of corrective aids we are often limited; to manufacture high-quality, functional and aesthetic corrective aids, it is necessary to combine knowledge from the fields of optics, optometry and ophthalmology. Faster visual rehabilitation simplifies clients' rapid return to everyday life.
Advanced Manufacturing and Value-added Products from US Agriculture
NASA Technical Reports Server (NTRS)
Villet, Ruxton H.; Child, Dennis R.; Acock, Basil
1992-01-01
An objective of the US Department of Agriculture (USDA) Agriculture Research Service (ARS) is to develop technology leading to a broad portfolio of value-added marketable products. Modern scientific disciplines such as chemical engineering are brought into play to develop processes for converting bulk commodities into high-margin products. To accomplish this, the extremely sophisticated processing devices which form the basis of modern biotechnology, namely, genes and enzymes, can be tailored to perform the required functions. The USDA/ARS is a leader in the development of intelligent processing equipment (IPE) for agriculture in the broadest sense. Applications of IPE are found in the production, processing, grading, and marketing aspects of agriculture. Various biotechnology applications of IPE are discussed.
Multiple utility constrained multi-objective programs using Bayesian theory
NASA Astrophysics Data System (ADS)
Abbasian, Pooneh; Mahdavi-Amiri, Nezam; Fazlollahtabar, Hamed
2018-03-01
A utility function is an important tool for representing a decision maker's (DM's) preferences. We adjoin utility functions to multi-objective optimization problems. In existing studies, usually one utility function is used for each objective function. However, situations may arise in which a goal has multiple utility functions. Here, we consider a constrained multi-objective problem in which each objective has multiple utility functions. We induce the probability of the utilities for each objective function using Bayesian theory. Illustrative examples considering dependence and independence of the variables are worked through to demonstrate the usefulness of the proposed model.
Schrank, Elisa S; Hitch, Lester; Wallace, Kevin; Moore, Richard; Stanhope, Steven J
2013-10-01
Passive-dynamic ankle-foot orthosis (PD-AFO) bending stiffness is a key functional characteristic for achieving enhanced gait function. However, current orthosis customization methods inhibit objective premanufacture tuning of the PD-AFO bending stiffness, making optimization of orthosis function challenging. We have developed a novel virtual functional prototyping (VFP) process, which harnesses the strengths of computer aided design (CAD) model parameterization and finite element analysis, to quantitatively tune and predict the functional characteristics of a PD-AFO, which is rapidly manufactured via fused deposition modeling (FDM). The purpose of this study was to assess the VFP process for PD-AFO bending stiffness. A PD-AFO CAD model was customized for a healthy subject and tuned to four bending stiffness values via VFP. Two sets of each tuned model were fabricated via FDM using medical-grade polycarbonate (PC-ISO). Dimensional accuracy of the fabricated orthoses was excellent (average 0.51 ± 0.39 mm). Manufacturing precision ranged from 0.0 to 0.74 Nm/deg (average 0.30 ± 0.36 Nm/deg). Bending stiffness prediction accuracy was within 1 Nm/deg using the manufacturer provided PC-ISO elastic modulus (average 0.48 ± 0.35 Nm/deg). Using an experimentally derived PC-ISO elastic modulus improved the optimized bending stiffness prediction accuracy (average 0.29 ± 0.57 Nm/deg). Robustness of the derived modulus was tested by carrying out the VFP process for a disparate subject, tuning the PD-AFO model to five bending stiffness values. For this disparate subject, bending stiffness prediction accuracy was strong (average 0.20 ± 0.14 Nm/deg). Overall, the VFP process had excellent dimensional accuracy, good manufacturing precision, and strong prediction accuracy with the derived modulus. 
Implementing VFP as part of our PD-AFO customization and manufacturing framework, which also includes fit customization, provides a novel and powerful method to predictably tune and precisely manufacture orthoses with objectively customized fit and functional characteristics.
Zavaglia, Melissa; Hilgetag, Claus C
2016-06-01
Spatial attention is a prime example for the distributed network functions of the brain. Lesion studies in animal models have been used to investigate intact attentional mechanisms as well as perspectives for rehabilitation in the injured brain. Here, we systematically analyzed behavioral data from cooling deactivation and permanent lesion experiments in the cat, where unilateral deactivation of the posterior parietal cortex (in the vicinity of the posterior middle suprasylvian cortex, pMS) or the superior colliculus (SC) cause a severe neglect in the contralateral hemifield. Counterintuitively, additional deactivation of structures in the opposite hemisphere reverses the deficit. Using such lesion data, we employed a game-theoretical approach, multi-perturbation Shapley value analysis (MSA), for inferring functional contributions and network interactions of bilateral pMS and SC from behavioral performance in visual attention studies. The approach provides an objective theoretical strategy for lesion inferences and allows a unique quantitative characterization of regional functional contributions and interactions on the basis of multi-perturbations. The quantitative analysis demonstrated that right posterior parietal cortex and superior colliculus made the strongest positive contributions to left-field orienting, while left brain regions had negative contributions, implying that their perturbation may reverse the effects of contralateral lesions or improve normal function. An analysis of functional modulations and interactions among the regions revealed redundant interactions (implying functional overlap) between regions within each hemisphere, and synergistic interactions between bilateral regions. 
To assess the reliability of the MSA method in the face of variable and incomplete input data, we performed a sensitivity analysis, investigating how much the contribution values of the four regions depended on the performance of specific configurations and on the prediction of unknown performances. The results suggest that the MSA approach is sensitive to categorical, but insensitive to gradual changes in the input data. Finally, we created a basic network model that was based on the known anatomical interactions among cortical-tectal regions and reproduced the experimentally observed behavior in visual orienting. We discuss the structural organization of the network model relative to the causal modulations identified by MSA, to aid a mechanistic understanding of the attention network of the brain.
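The core of MSA is the Shapley value from cooperative game theory, computed over all perturbation configurations. The sketch below is generic, assuming the full table of configuration performances is available, as in the four-region experiments described; the function and variable names are mine, not from the study.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, perf):
    """Shapley value of each region ("player") from a full multi-perturbation
    dataset. `perf` maps each frozenset of *intact* regions to the measured
    behavioral performance of that lesion configuration."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Probability weight of this coalition in a random ordering
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                # Marginal contribution of p when added to coalition s
                total += w * (perf[s | {p}] - perf[s])
        phi[p] = total
    return phi
```

By construction the contributions sum to the performance difference between the fully intact and fully lesioned configurations, which is what allows negative contributions (regions whose deactivation improves contralateral orienting) to be identified.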
Crichton, Georgina E; Elias, Merrill F; Dore, Gregory A; Torres, Rachael V; Robbins, Michael A
2014-11-01
The objective was to investigate the association between variability in blood pressure (BP) and cognitive function for sitting, standing, and reclining BP values and variability derived from all 15 measures. In previous studies, only sitting BP values have been examined, and only a few cognitive measures have been used. A secondary objective was to examine associations between BP variability and cognitive performance in hypertensive individuals stratified by treatment success. Cross-sectional analyses were performed on 972 participants of the Maine Syracuse Study for whom 15 serial BP clinic measures (5 sitting, 5 recumbent, and 5 standing) were obtained before testing of cognitive performance. Using all 15 measures, higher variability in systolic and diastolic BP was associated with poorer performance on multiple measures of cognitive performance, independent of demographic factors, cardiovascular risk factors, and pulse pressure. When sitting, reclining, and standing systolic BP values were compared, only variability in standing BP was related to measures of cognitive performance. However, for diastolic BP, variability in all 3 positions was related to cognitive performance. Mean BP values were weaker predictors of cognition. Furthermore, higher overall variability in both systolic and diastolic BP was associated with poorer cognitive performance in unsuccessfully treated hypertensive individuals (with BP ≥140/90 mm Hg), but these associations were not evident in those with controlled hypertension. © 2014 American Heart Association, Inc.
Feasibility study of parallel optical correlation-decoding analysis of lightning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Descour, M.R.; Sweatt, W.C.; Elliott, G.R.
The optical correlator described in this report is intended to serve as an attention-focusing processor. The objective is to narrowly bracket the range of a parameter value that characterizes the correlator input. The input is a waveform collected by a satellite-borne receiver. In the correlator, this waveform is simultaneously correlated with an ensemble of ionosphere impulse-response functions, each corresponding to a different total-electron-count (TEC) value. We have found that correlation is an effective method of bracketing the range of TEC values likely to be represented by the input waveform. High accuracy in a computational sense is not required of the correlator. Binarization of the impulse-response functions and the input waveforms prior to correlation results in a lower correlation-peak-to-background-fluctuation (signal-to-noise) ratio than is obtained when all waveforms retain their grayscale values. The results presented in this report were obtained by means of an acousto-optic correlator previously developed at SNL as well as by simulation. An optical-processor architecture optimized for 1D correlation of long waveforms characteristic of this application is described. Discussions of correlator components, such as optics, acousto-optic cells, digital micromirror devices, laser diodes, and VCSELs, are included.
Intellectual Ability in Young Adulthood as an Antecedent of Physical Functioning in Older Age
Poranen-Clark, Taina; von Bonsdorff, Mikaela B.; Törmäkangas, Timo; Lahti, Jari; Wasenius, Niko; Räikkönen, Katri; Osmond, Clive; Salonen, Minna K.; Rantanen, Taina; Kajantie, Eero; Eriksson, Johan G.
2016-01-01
Objectives: Low cognitive ability is associated with subsequent functional disability. Whether this association extends across adult life has been little studied. The aim of this study was to examine the association between intellectual ability in young adulthood and physical functioning during a 10-year follow-up in older age. Methods: 360 male members of the Helsinki Birth Cohort Study (HBCS), born between 1934 and 1944 and residing in Finland in 1971, took part in the Finnish Defence Forces Basic Intellectual Ability Test during the first two weeks of their military service training between 1952 and 1972. Their physical functioning was assessed twice using the Short Form 36 (SF-36) questionnaire, at average ages of 61 and 71 years. A longitudinal path model linking the Intellectual Ability Test score to the physical functioning assessments was used to explore the effect of intellectual ability in young adulthood on physical functioning in older age. Results: After adjustment for age at measurement, childhood socioeconomic status and adult BMI (kg/m2), better intellectual ability total and arithmetic and verbal reasoning subtest scores in young adulthood predicted better physical functioning at age 61 years (P-values < 0.021). Intellectual ability total and arithmetic and verbal reasoning subtest scores in young adulthood had indirect effects on physical functioning at age 71 years (P-values < 0.022) through better physical functioning at age 61 years. Adjustment for major chronic diseases did not change the results materially. Conclusion: Better early-life intellectual ability helps in maintaining better physical functioning in older age. PMID:27189726
A framework for quantifying and optimizing the value of seismic monitoring of infrastructure
NASA Astrophysics Data System (ADS)
Omenzetter, Piotr
2017-04-01
This paper outlines a framework for quantifying and optimizing the value of information from structural health monitoring (SHM) technology deployed on large infrastructure, which may sustain damage in a series of earthquakes (the main shock and the aftershocks). The evolution of the damage state of the infrastructure, without or with SHM, is represented as a time-dependent, stochastic, discrete-state, observable and controllable nonlinear dynamical system. Pre-posterior Bayesian analysis and a decision tree are used for quantifying and optimizing the value of SHM information. An optimization problem is then formulated to decide on the adoption of SHM and to optimally manage the usage and operations of the possibly damaged infrastructure and its repair schedule using the information from SHM. The objective function to minimize is the expected total cost or risk.
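The expected-cost comparison behind such a decision tree can be sketched in a few lines. All probabilities and costs below are illustrative placeholders, not values from the paper, and the tree is deliberately reduced to a single damage state and a single inspection outcome.

```python
def expected_cost(p_damage, c_failure, c_repair, c_shm=0.0,
                  p_detect=0.0, p_false_alarm=0.0):
    """Expected total cost of operating the structure.
    Without SHM (c_shm=0, p_detect=0): damage goes unnoticed and the full
    failure cost is risked. With SHM: detected damage triggers repair, but
    monitoring cost and false alarms are incurred."""
    cost_if_damaged = p_detect * c_repair + (1 - p_detect) * c_failure
    cost_if_intact = p_false_alarm * c_repair
    return (c_shm
            + p_damage * cost_if_damaged
            + (1 - p_damage) * cost_if_intact)

# Value of SHM information: expected-cost difference of the two branches
no_shm = expected_cost(0.1, c_failure=1e6, c_repair=5e4)
with_shm = expected_cost(0.1, c_failure=1e6, c_repair=5e4,
                         c_shm=2e4, p_detect=0.95, p_false_alarm=0.02)
value_of_information = no_shm - with_shm
```

Adopting SHM is worthwhile in this formulation exactly when the value of information is positive, i.e. when the monitoring and false-alarm costs are outweighed by the avoided failure risk.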
Dynamic cellular manufacturing system considering machine failure and workload balance
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad
2018-02-01
Machines are a key element in a production system, and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided, with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the assignment of operators to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is first validated with the GAMS software on small-sized problems and then solved by two well-known meta-heuristic methods, the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-scale problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
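The Pareto-dominance test at the core of both meta-heuristics (and of many of the comparison metrics) can be sketched as follows. This is a generic minimization formulation, not the paper's specific three-objective model.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b: a is no worse in every
    objective and strictly better in at least one (all minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_front(solutions):
    """Return the first (non-dominated) front of a set of objective
    vectors, as used by NSGA-II's non-dominated sorting step."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Example: objective vectors (cost, workload variance) for five solutions
front = non_dominated_front([(1, 5), (2, 2), (5, 1), (3, 3), (6, 6)])
```

Repeatedly removing the current front and recomputing it on the remainder yields the full front ranking that NSGA-II uses for selection.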
Modeling Acceleration of a System of Two Objects Using the Concept of Limits
NASA Astrophysics Data System (ADS)
Sokolowski, Andrzej
2018-01-01
Traditional school laboratory exercises on a system of moving objects connected by strings involve deriving expressions for the system acceleration, a = (∑F)/m, and sketching a graph of acceleration vs. force. Being rational functions, these expressions present great opportunities for broadening the scope of the analysis by using a more sophisticated mathematical apparatus: the concept of limits. Using the idea of limits allows for extending both predictions and explanations of this type of motion, which are, according to Redish, essential goals of teaching physics. This type of analysis, known in physics as limiting case analysis, allows for generalizing inferences by evaluating or estimating the values of algebraic functions at their extreme inputs. In practice, such a transition provides opportunities for deriving valid conclusions in cases where direct laboratory measurements are not possible. While using limits is common for scientists, the idea of applying limits in school practice is not widespread, and testing students' ability in this area is also rare.
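As an illustration of such limiting case analysis (this worked example is mine, not taken from the article): for a cart of mass m on a frictionless track pulled by a hanging mass M over an ideal pulley, Newton's second law for the system gives

```latex
a(M) = \frac{\sum F}{m_{\text{total}}} = \frac{Mg}{M+m},
\qquad
\lim_{M\to\infty} a(M) = \lim_{M\to\infty} \frac{g}{1+m/M} = g,
\qquad
\lim_{M\to 0^{+}} a(M) = 0.
```

The limits recover the physically expected extremes without any additional measurement: a very heavy hanging mass drives the system toward free fall (a → g), while a vanishing hanging mass produces no acceleration, and neither extreme is reachable on a real laboratory track.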
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of determining the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine the steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and of the aeroelastic stability constraints. For this, the derivatives of the steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, the design sensitivity analysis and the constrained optimization code CONMIN.
Satomura, Hironori; Adachi, Kohei
2013-07-01
To facilitate the interpretation of canonical correlation analysis (CCA) solutions, procedures have been proposed in which CCA solutions are orthogonally rotated to a simple structure. In this paper, we consider oblique rotation for CCA to provide solutions that are much easier to interpret, though only orthogonal rotation is allowed in the existing formulations of CCA. Our task is thus to reformulate CCA so that its solutions have the freedom of oblique rotation. Such a task can be achieved using Yanai's (Jpn. J. Behaviormetrics 1:46-54, 1974; J. Jpn. Stat. Soc. 11:43-53, 1981) generalized coefficient of determination for the objective function to be maximized in CCA. The resulting solutions are proved to include the existing orthogonal ones as special cases and to be rotated obliquely without affecting the objective function value, where ten Berge's (Psychometrika 48:519-523, 1983) theorems on suborthonormal matrices are used. A real data example demonstrates that the proposed oblique rotation can provide simple, easily interpreted CCA solutions.
The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.
Olivier, Brett G; Bergmann, Frank T
2015-09-04
Constraint-based modeling is a well-established modelling methodology used to analyze and study biological networks on both a medium and a genome scale. Due to their large size, genome-scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA), which, for example, requires a modelling description to include the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models that can be built upon by the community to meet future needs (e.g. by extending it to cover dynamic FBC models).
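FBA as described reduces to a linear program: maximize an objective flux subject to the steady-state constraint S v = 0 and flux bounds. A minimal sketch with SciPy on an invented 2-metabolite, 4-reaction network (a real model would be read from an SBML/FBC file, not hard-coded):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows = metabolites, columns = reactions).
S = np.array([[1, -1,  0, -1],   # metabolite A: made by R1, used by R2 and R4
              [0,  1, -1,  0]])  # metabolite B: made by R2, used by R3
lb = [0, 0, 0, 0]                # lower flux bounds
ub = [10, 10, 10, 5]             # upper flux bounds
c = [0, 0, -1, 0]                # maximize flux through R3 (linprog minimizes)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=list(zip(lb, ub)))
print(res.x)  # optimal flux distribution satisfying S v = 0
```

Here the optimum routes everything through R1 → R2 → R3, giving an objective flux of 10.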
SAIL: Summation-bAsed Incremental Learning for Information-Theoretic Text Clustering.
Cao, Jie; Wu, Zhiang; Wu, Junjie; Xiong, Hui
2013-04-01
Information-theoretic clustering aims to exploit information-theoretic measures as the clustering criteria. A common practice on this topic is the so-called Info-Kmeans, which performs K-means clustering with KL-divergence as the proximity function. While expert efforts on Info-Kmeans have shown promising results, a remaining challenge is to deal with high-dimensional sparse data such as text corpora. Indeed, it is possible that the centroids contain many zero-value features for high-dimensional text vectors, which leads to infinite KL-divergence values and creates a dilemma in assigning objects to centroids during the iteration process of Info-Kmeans. To meet this challenge, in this paper, we propose a Summation-bAsed Incremental Learning (SAIL) algorithm for Info-Kmeans clustering. Specifically, by using an equivalent objective function, SAIL replaces the computation of KL-divergence by the incremental computation of Shannon entropy. This can avoid the zero-feature dilemma caused by the use of KL-divergence. To improve the clustering quality, we further introduce the variable neighborhood search scheme and propose the V-SAIL algorithm, which is then accelerated by a multithreaded scheme in PV-SAIL. Our experimental results on various real-world text collections have shown that, with SAIL as a booster, the clustering performance of Info-Kmeans can be significantly improved. Also, V-SAIL and PV-SAIL indeed help improve the clustering quality at a lower cost of computation.
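The entropy-based reformulation at the heart of SAIL rests on the identity sum_i KL(x_i || c) = n·H(c) − sum_i H(x_i) for a cluster of probability vectors x_i with centroid c, which removes any division by centroid features and hence the infinite-KL dilemma. A small numerical check of that identity (random data, not the paper's text corpora):

```python
import numpy as np

def H(p):
    # Shannon entropy (natural log)
    return -np.sum(p * np.log(p))

def kl(p, q):
    # KL divergence for strictly positive vectors
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(5), size=4)   # 4 strictly positive prob. vectors
c = X.mean(axis=0)                      # cluster centroid

lhs = sum(kl(x, c) for x in X)                      # KL-based objective
rhs = len(X) * H(c) - sum(H(x) for x in X)          # entropy-only form
print(abs(lhs - rhs))  # ~0 up to floating-point error
```

The right-hand side involves only entropies, so zero-valued centroid features never enter a denominator, which is the computational point exploited by SAIL.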
Prepositioning emergency supplies under uncertainty: a parametric optimization method
NASA Astrophysics Data System (ADS)
Bai, Xuejie; Gao, Jinwu; Liu, Yankui
2018-07-01
Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event and are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance-constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic-scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.
Park, Jae-Hyeong; Choi, Jin-Oh; Park, Seung Woo; Cho, Goo-Yeong; Oh, Jin Kyung; Lee, Jae-Hwan; Seong, In-Whan
2018-02-01
Right ventricular (RV) strain values by 2-dimensional strain echocardiography (STE) can be used as objective markers of RV systolic function. However, there are few data about normal reference RV strain values according to age and gender. We measured normal RV strain values by STE. RV strain values were analyzed from the digitally stored echocardiographic images from the NORMAL (Normal echOcardiogRaphic diMensions and functions in KoreAn popuLation) study for the measurement of normal echocardiographic values, performed in 23 Korean university hospitals. We enrolled a total of 1003 healthy persons in the NORMAL study. Of them, we analyzed 2-dimensional RV strain values in the 493 subjects (261 females, mean age 47 ± 15 years) with echocardiographic images acquired on GE machines. Their LV systolic and diastolic functions were normal. RV fractional area change was 48 ± 6% and tricuspid annular plane systolic excursion was 23 ± 3 mm. Total RV global longitudinal peak systolic strain (RVGLS total ) was -21.5 ± 3.2%. Females had higher absolute RVGLS total (-22.3 ± 3.3 vs -20.7 ± 2.9%, p < 0.001) than males. Younger (<50 years old) females had higher absolute RVGLS total (-22.9 ± 3.2 vs -20.5 ± 2.8%, p < 0.001) than age-matched males. RVGLS total in females gradually increased with age (p for trend = 0.002) and became almost similar to that of males at age ≥50 years. However, this trend was not seen in males (p for trend = 0.287), and younger males had RVGLS total values similar to those of older males (age ≥50 years, -20.5 ± 2.8 vs -20.9 ± 3.1%, p = 0.224). We calculated normal RVGLS values in a normal population. Females have higher absolute strain values than males, especially in younger age groups (<50 years old).
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
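The Pareto-optimal filtering step common to all PMOGO methods can be sketched in a few lines; the candidate "models" and their two misfit values below are invented for illustration:

```python
# Keep only the non-dominated points among candidates scored on two
# objectives to be minimized (e.g. two data misfits).
def pareto_front(points):
    front = []
    for p in points:
        # p is dominated if some other point is at least as good on both
        # objectives (and is a different point)
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

models = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(models))  # (3.0, 4.0) is dominated by (2.0, 3.0)
```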
NASA Astrophysics Data System (ADS)
Abdeh-Kolahchi, A.; Satish, M.; Datta, B.
2004-05-01
A state-of-the-art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with Genetic Algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop three-dimensional groundwater flow and contamination transport simulations. The groundwater flow and contamination simulation results are introduced as input to the optimization model, which uses a Genetic Algorithm (GA) to identify the optimal groundwater monitoring network design from several candidate monitoring locations. The monitoring network design model uses a Genetic Algorithm with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the GA approach capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions will be discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using GA, appropriate GA parameter values must be specified.
The sensitivity analysis of genetic algorithm parameters such as the random number seed, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.
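A minimal sketch of such a binary-variable GA (toy fitness function and site weights, not the MODFLOW/MT3DMS-driven objective): select a fixed number of monitoring sites maximizing a coverage score, with the elitism, crossover and mutation operators discussed above:

```python
import random

random.seed(1)
WEIGHTS = [3, 1, 4, 1, 5, 9, 2, 6]     # hypothetical detection value per site
N, POP, GENS, K = len(WEIGHTS), 20, 60, 3

def fitness(bits):
    if sum(bits) != K:                  # enforce the well-count constraint
        return -1
    return sum(w for w, b in zip(WEIGHTS, bits) if b)

def crossover(a, b):
    cut = random.randrange(1, N)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, p=0.05):
    return [1 - b if random.random() < p else b for b in bits]

# initial population: random valid selections of K sites
pop = [random.sample([1] * K + [0] * (N - K), N) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:2]                     # elitism keeps the best designs
    children = [mutate(crossover(*random.sample(pop[:10], 2)))
                for _ in range(POP - 2)]
    pop = elite + children

best = max(pop, key=fitness)
print(fitness(best))
```

The feasible space here is tiny (56 subsets), but the same binary encoding scales to the 18.4 × 10^18 combinations mentioned in the abstract.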
Moore, H.J.; Boyce, J.M.; Hahn, D.A.
1980-01-01
Apparently, there are two types of size-frequency distributions of small lunar craters (∼1-100 m across): (1) crater production distributions, for which the cumulative frequency of craters is an inverse function of diameter to a power near 2.8, and (2) steady-state distributions, for which the cumulative frequency of craters is inversely proportional to the square of their diameters. According to theory, cumulative frequencies of craters in each morphologic category within the steady state should also be an inverse function of the square of their diameters. Some data on frequency distributions of craters by morphologic types are approximately consistent with theory, whereas other data are inconsistent with theory. A flux of crater-producing objects can be inferred from size-frequency distributions of small craters on the flanks and ejecta of craters of known age. Crater frequency distributions and data on the craters Tycho, North Ray, Cone, and South Ray, when compared with the flux of objects measured by the Apollo Passive Seismometer, suggest that the flux of objects has been relatively constant over the last 100 m.y. (within 1/3 to 3 times the flux estimated for Tycho). Steady-state frequency distributions for craters in several morphologic categories formed the basis for estimating the relative ages of craters and surfaces in a system used during the Apollo landing site mapping program of the U.S. Geological Survey. The relative ages in this system are converted to model absolute ages that have a rather broad range of values, between about 1/3 and 3 times the assigned model absolute age. © 1980 D. Reidel Publishing Co.
Production of biosolid fuels from municipal sewage sludge: Technical and economic optimisation.
Wzorek, Małgorzata; Tańczuk, Mariusz
2015-08-01
The article presents the technical and economic analysis of the production of fuels from municipal sewage sludge. The analysis involved the production of two types of fuel compositions: sewage sludge with sawdust (PBT fuel) and sewage sludge with meat and bone meal (PBM fuel). The technology of the production line of these sewage fuels was proposed and analysed. The main objective of the study is to find the optimal production capacity. The optimisation analysis was performed for the adopted technical and economic parameters under Polish conditions. The objective function was set as the maximum of the net present value index, and the optimisation procedure was carried out for fuel production line input capacities from 0.5 to 3 t h⁻¹, using a search step of 0.5 t h⁻¹. On the basis of the technical and economic assumptions, economic efficiency indexes of the investment were determined for the case of optimal line productivity. The results of the optimisation analysis show that under appropriate conditions, such as the prices of components and the prices of produced fuels, the production of fuels from sewage sludge can be profitable. In the case of PBT fuel, the calculated economic indexes show the best profitability for a plant capacity over 1.5 t h⁻¹ output, while the production of PBM fuel is beneficial for a plant with the maximum of the searched capacities: 3.0 t h⁻¹. Sensitivity analyses carried out during the investigation show that the influence of both the technical and the economic assumptions on the location of the maximum of the objective function (net present value) is significant. © The Author(s) 2015.
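The capacity search described above can be sketched as a simple grid search over the six candidate capacities; the NPV model below (capital cost, cash flow, discount rate) is invented for illustration and is not the authors' cost data:

```python
# Evaluate a toy net-present-value model for capacities 0.5..3.0 t/h in
# 0.5 t/h steps and pick the maximum.
def npv(capacity, years=10, rate=0.08):
    invest = 1.2e6 * capacity ** 0.7          # economy-of-scale capital cost
    cash = 0.35e6 * capacity - 0.1e6          # annual net cash flow
    return sum(cash / (1 + rate) ** t for t in range(1, years + 1)) - invest

capacities = [0.5 + 0.5 * i for i in range(6)]   # 0.5 ... 3.0
best = max(capacities, key=npv)
print(best)  # 3.0: with these assumptions NPV keeps rising with capacity
```

With these assumed prices the linear revenue outgrows the concave capital cost, so the maximum of the searched range wins, as the abstract reports for PBM fuel.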
Han, Yuliang; Wang, Kai; Jia, Jianjun; Wu, Weiping
2017-01-01
Object-location memory is particularly fragile and specifically impaired in Alzheimer's disease (AD) patients. Electroencephalography (EEG) was utilized to objectively measure memory impairment, since memory formation correlates with EEG oscillatory activities. We aimed to construct an object-location memory paradigm and explore EEG signs of it. Two groups of 20 probable mild AD patients and 19 healthy older adults were included in a cross-sectional analysis. All subjects took an object-location memory task. EEG recordings performed during the object-location memory tasks were compared between the two groups on two EEG parameters (spectral parameters and phase synchronization). The memory performance of AD patients was worse than that of the healthy elderly adults. The power of object-location memory in the AD group was significantly higher than in the NC group (healthy elderly adults) in the alpha band in the encoding session, and in the alpha and theta bands in the retrieval session. The channel-pairs' phase lag index value of object-location memory in the AD group was clearly higher than in the NC group in the delta, theta, and alpha bands in encoding sessions and in the delta and theta bands in retrieval sessions. The results provide support for the hypothesis that AD patients may use compensation mechanisms to remember the items and episode.
Pareto-front shape in multiobservable quantum control
NASA Astrophysics Data System (ADS)
Sun, Qiuyang; Wu, Re-Bing; Rabitz, Herschel
2017-03-01
Many scenarios in the sciences and engineering require simultaneous optimization of multiple objective functions, which are usually conflicting or competing. In such problems the Pareto front, where none of the individual objectives can be further improved without degrading some others, shows the tradeoff relations between the competing objectives. This paper analyzes the Pareto-front shape for the problem of quantum multiobservable control, i.e., optimizing the expectation values of multiple observables in the same quantum system. Analytic and numerical results demonstrate that with two commuting observables the Pareto front is a convex polygon consisting of flat segments only, while with noncommuting observables the Pareto front includes convexly curved segments. We also assess the capability of a weighted-sum method to continuously capture the points along the Pareto front. Illustrative examples with realistic physical conditions are presented, including NMR control experiments on a 1H-13C two-spin system with two commuting or noncommuting observables.
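The weighted-sum scan assessed in the paper can be illustrated on a toy candidate set: minimize w·f1 + (1−w)·f2 for a sweep of weights and collect the minimizers. The objective values below are invented; with this convex front every point is captured, whereas points on a concave (convexly curved the other way) stretch never would be:

```python
import numpy as np

# Four candidate solutions with two objective values each (to be minimized).
candidates = np.array([[0.0, 1.0], [0.2, 0.5], [0.5, 0.2], [1.0, 0.0]])

captured = set()
for w in np.linspace(0.0, 1.0, 101):
    scores = w * candidates[:, 0] + (1 - w) * candidates[:, 1]
    captured.add(int(np.argmin(scores)))   # minimizer for this weight
print(sorted(captured))  # all four convex-front points are found
```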
NASA Astrophysics Data System (ADS)
Cannella, Marco; Sciuto, Salvatore Andrea
2001-04-01
An evaluation of errors for a method for determination of trajectories and velocities of supersonic objects is conducted. The analytical study of a cluster, composed of three pressure transducers and generally used as an apparatus for cinematic determination of parameters of supersonic objects, is developed. Furthermore, detailed investigation into the accuracy of this cluster on determination of the slope of an incoming shock wave is carried out for optimization of the device. In particular, a specific non-dimensional parameter is proposed in order to evaluate accuracies for various values of parameters and reference graphs are provided in order to properly design the sensor cluster. Finally, on the basis of the error analysis conducted, a discussion on the best estimation of the relative distance for the sensor as a function of temporal resolution of the measuring system is presented.
Value-Focused Objectives Model for Community Resilience
2014-04-01
Value-Focused Objectives Model for Community Resilience: Final Report. Prepared by Jay Adamsson, CAE Integrated Enterprise Solutions, 24 March 2014.
The Neurobiology of Reference-Dependent Value Computation
De Martino, Benedetto; Kumaran, Dharshan; Holt, Beatrice; Dolan, Raymond J.
2009-01-01
A key focus of current research in neuroeconomics concerns how the human brain computes value. Although value has generally been viewed as an absolute measure (e.g., expected value, reward magnitude), much evidence suggests that value is more often computed with respect to a changing reference point, rather than in isolation. Here, we present the results of a study aimed at dissociating brain regions involved in reference-independent (i.e., “absolute”) value computations from those involved in value computations relative to a reference point. During functional magnetic resonance imaging, subjects acted as buyers and sellers during a market exchange of lottery tickets. At a behavioral level, we demonstrate that subjects systematically accorded a higher value to objects they owned relative to those they did not, an effect that results from a shift in reference point (i.e., status quo bias or endowment effect). Our results show that activity in orbitofrontal cortex and dorsal striatum tracks parameters such as the expected value of lottery tickets, indicating the computation of reference-independent value. In contrast, activity in ventral striatum indexed the degree to which stated prices, at a within-subjects and between-subjects level, were distorted with respect to a reference point. The findings speak to the neurobiological underpinnings of reference dependency during real market value computations. PMID:19321780
Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1
NASA Astrophysics Data System (ADS)
Langenbrunner, B.; Neelin, J. D.
2017-09-01
Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.
NASA Astrophysics Data System (ADS)
Bandte, Oliver
It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model, on the other hand, is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS.
The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
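The Probability of Success idea can be sketched as a Monte Carlo estimate of jointly satisfying several criteria. The criteria, distributions and thresholds below are hypothetical, and independence is assumed for simplicity (JPDM's Joint Probability Model also covers correlated criteria):

```python
import random

# Estimate POS = P(range >= 4800 and cost <= 200) by counting, i.e. the
# empirical-distribution-function approach described above.
random.seed(0)
N = 100_000
hits = 0
for _ in range(N):
    rng_nm = random.gauss(5000, 300)   # simulated range criterion (nm)
    cost_m = random.gauss(180, 15)     # simulated cost criterion ($M)
    if rng_nm >= 4800 and cost_m <= 200:
        hits += 1
pos = hits / N
print(round(pos, 2))  # roughly 0.68 under these invented assumptions
```

Because POS is a single scalar, any standard single-objective optimizer can drive the controllable design variables to maximize it, which is the simplification the abstract emphasizes.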
NASA Astrophysics Data System (ADS)
Bańka, Piotr; Badura, Henryk; Wesołowski, Marek
2017-11-01
One of the ways to protect objects exposed to the influences of mining exploitation is establishing protective pillars for them. A properly determined pillar provides effective protection of the object for which it was established. Determining the correct dimensions of a pillar requires taking into account contradictory requirements. Protection against the excessive influences of mining exploitation requires designing the largest possible pillars, whereas economic requirements suggest a maximum reduction of the size of the resources left in the pillar. This paper presents algorithms and programs developed for determining optimal dimensions of protective pillars for surface objects and shafts. The issue of designing a protective pillar was treated as a nonlinear programming task. The objective function is the amount of resources left in a pillar, while the nonlinear constraints are the deformation values evoked by the mining exploitation. Resources in the pillar may be weighted, e.g., by calorific value or by the inverse of output costs. The possibility of designing pillars of any polygonal shape was taken into account. Because of the applied exploitation technologies, the rectangular pillar shape should be considered more advantageous than the oval one, though it does not ensure the minimization of the resources left in a pillar. This article also presents a different approach to the design of protective pillars in which, instead of fixing the pillar boundaries in subsequent seams, the length of the longwall panels of the designed mining exploitation is limited in a way that ensures the effective protection of an object while maximizing the extraction ratio of the deposit.
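The nonlinear-programming formulation can be sketched with SciPy: minimize the resources locked in a pillar subject to a deformation limit. The single-variable "deformation model" below is invented purely for illustration, not taken from the paper:

```python
from scipy.optimize import minimize

E_MAX = 1.5                            # allowed deformation value (invented)

def resources(s):
    return s * s                       # resources locked in a square pillar of side s

def deformation(s):
    return 120.0 / s                   # toy model: a bigger pillar means less deformation

res = minimize(lambda x: resources(x[0]), x0=[150.0],
               constraints=[{'type': 'ineq',
                             'fun': lambda x: E_MAX - deformation(x[0])}],
               bounds=[(10.0, 300.0)])
print(round(res.x[0], 1))  # ≈ 80: the smallest pillar keeping deformation admissible
```

The optimum sits exactly where the deformation constraint becomes active (120/s = 1.5), which mirrors the trade-off the abstract describes: protection pushes the pillar larger, economics pushes it smaller.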
Industry/University/Government partnerships in metrology: A new paradigm for the future
NASA Astrophysics Data System (ADS)
Helms, C. R.
1998-11-01
A business process is described where Industry/University/Government interactions are optimized for highest productivity across these three sectors. This cross-functional approach provides for the rapid development of differentiated products for competitive advantage in industry, best of class scholarship and academically free university research, and the assurance of U.S. economic and military strength. The major focus of this paper will be R&D. However, the above objectives will only be met if effective transition from R&D into final product marketing, design, and manufacturing are included as an additional required concurrent, cross-functional activity. Metrology will be shown as an area that meets all the requirements for the development of a broad cross-functional partnership between industry, academia, and the Government that creates significant value for each sector.
NASA Astrophysics Data System (ADS)
Rishi, Rahul; Choudhary, Amit; Singh, Ravinder; Dhaka, Vijaypal Singh; Ahlawat, Savita; Rao, Mukta
2010-02-01
In this paper we propose a system for the classification problem of handwritten text. On a very broad level, the system is composed of a preprocessing module, a supervised learning module and a recognition module. The preprocessing module digitizes the documents and extracts features (tangent values) for each character. The radial basis function network is used in the learning and recognition modules. The objective is to analyze and improve the performance of the Multi Layer Perceptron (MLP) using RBF transfer functions over the logarithmic sigmoid function. The results of 35 experiments indicate that the feed-forward MLP performs accurately and consistently with RBF. With the change in the weight update mechanism and the feature-driven preprocessing module, the proposed system is competent, with good recognition performance.
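A tiny RBF network sketch (invented data, not the handwriting features above) shows why a Gaussian RBF hidden layer can separate patterns that a single logistic-sigmoid unit cannot, here XOR:

```python
import numpy as np

# Gaussian hidden units centred on the training points; linear output
# weights solved in closed form by least squares.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)      # XOR: not linearly separable

def rbf_features(X, centers, gamma=2.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Phi = rbf_features(X, X)                     # one hidden unit per sample
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output-layer weights
pred = (rbf_features(X, X) @ w > 0.5).astype(float)
print(pred)  # [0. 1. 1. 0.] — XOR is solved
```

The Gaussian kernel matrix on distinct points is invertible, so the linear output layer fits the targets exactly; a real recognizer would of course train on held-out data rather than interpolate.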
Nikolaidis, Lazaros; Memon, Nabeel; O'Murchu, Brian
2015-02-01
We describe the case of a 54-year-old man who presented with exertional dyspnea and fatigue that had worsened over the preceding 2 years, despite a normally functioning bioprosthetic aortic valve and stable, mild left ventricular dysfunction (left ventricular ejection fraction, 0.45). His symptoms could not be explained by physical examination, an extensive biochemical profile, or multiple cardiac and pulmonary investigations. However, abnormal cardiopulmonary exercise test results and a right heart catheterization-combined with the use of a symptom-limited, bedside bicycle ergometer-revealed that the patient's exercise-induced pulmonary artery hypertension was out of proportion to his compensated left heart disease. A trial of sildenafil therapy resulted in objective improvements in hemodynamic values and functional class.
Online geometric calibration of cone-beam computed tomography for arbitrary imaging objects.
Meng, Yuanzheng; Gong, Hui; Yang, Xiaoquan
2013-02-01
A novel online method based on the symmetry property of the sum of projections (SOP) is proposed to obtain the geometric parameters in cone-beam computed tomography (CBCT). This method requires no calibration phantom and can be used in circular-trajectory CBCT with arbitrary cone angles. An objective function is deduced to illustrate the dependence of the symmetry of the SOP on the geometric parameters; it converges to its minimum when the geometric parameters achieve their true values. Thus, by minimizing the objective function, we can obtain the geometric parameters for image reconstruction. To validate this method, numerical phantom studies with different noise levels are simulated. The results show that our method is insensitive to the noise and can determine the skew (in-plane rotation angle of the detector), the roll (rotation angle around the projection of the rotation axis on the detector), and the rotation axis with high accuracy, while the mid-plane and source-to-detector distance will be obtained with slightly lower accuracy. However, our simulation studies validate that the errors in the latter two parameters introduced by our method hardly degrade the quality of the reconstructed images. The small-animal studies show that our method is able to deal with arbitrary imaging objects. In addition, the results of the reconstructed images in different slices demonstrate that we have achieved image quality comparable to that of some offline methods.
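The idea of minimizing a symmetry objective can be illustrated with a 1-D toy problem: find the shift that makes a sampled SOP curve most symmetric about the detector center. This is only a schematic stand-in for the paper's objective function; the function names and the integer grid search are ours.

```python
import numpy as np

def symmetry_objective(sop, shift):
    """Sum of squared differences between the shifted SOP curve and its
    mirror image; small when the shifted curve is symmetric."""
    rolled = np.roll(sop, shift)
    return float(np.sum((rolled - rolled[::-1]) ** 2))

def estimate_skew_shift(sop, max_shift=10):
    """Grid-search the integer shift that makes the SOP curve most symmetric."""
    shifts = range(-max_shift, max_shift + 1)
    return min(shifts, key=lambda s: symmetry_objective(sop, s))

# Toy SOP: a Gaussian bump offset by 3 detector columns from the center.
u = np.arange(101)
sop = np.exp(-((u - 53.0) ** 2) / 50.0)
shift = estimate_skew_shift(sop)  # the shift that undoes the offset
```

In the paper the unknowns are the full set of geometric parameters and the objective is built from the symmetry of the SOP, but the principle is the same: the objective attains its minimum at the true values.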
Zhao, Yuanfang; Li, Jingguang; Liu, Xiqin; Song, Yiying; Wang, Ruosi; Yang, Zetian; Liu, Jia
2016-08-01
Individuals with developmental prosopagnosia (DP) exhibit severe difficulties in recognizing faces and, to a lesser extent, difficulties in recognizing non-face objects. We used fMRI to investigate whether these behavioral deficits could be accounted for by altered spontaneous neural activity. Two aspects of spontaneous neural activity were measured: the intensity of neural activity in a voxel, indexed by the fractional amplitude of spontaneous low-frequency fluctuations (fALFF), and the connectivity of a voxel to neighboring voxels, indexed by regional homogeneity (ReHo). Compared with normal adults, both the fALFF and ReHo values within the right occipital face area (rOFA) were significantly reduced in DP subjects. Follow-up studies on the normal adults revealed that these two measures indicated a further functional division of labor within the rOFA. The fALFF in the rOFA was positively correlated with behavioral performance in recognition of non-face objects, whereas ReHo in the rOFA was positively correlated with processing of faces. When considered together, the altered fALFF and ReHo within the same region (rOFA) may account for the comorbid deficits in both face and object recognition in DPs, whereas the functional division of labor in these two measures helps to explain the relative independence of deficits in face recognition and object recognition in DP. Copyright © 2016 Elsevier Ltd. All rights reserved.
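For reference, fALFF is conventionally computed as the ratio of spectral amplitude in the low-frequency band to the amplitude over the whole spectrum. A minimal sketch follows; the 0.01-0.08 Hz band is the common default in the resting-state literature, not a value taken from this study:

```python
import numpy as np

def falff(ts, fs, band=(0.01, 0.08)):
    """Fractional ALFF of one voxel's time series: amplitude summed over the
    low-frequency band divided by amplitude summed over the full spectrum
    (DC term excluded). fs is the sampling rate in Hz (1/TR)."""
    freqs = np.fft.rfftfreq(len(ts), d=1.0 / fs)
    amp = np.abs(np.fft.rfft(ts))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(amp[in_band].sum() / amp[1:].sum())
```

ReHo, by contrast, is computed as Kendall's coefficient of concordance over the time series of a voxel's immediate neighborhood, so the two measures capture signal intensity versus local synchrony, respectively.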
Birnbaum, Marvin L; Daily, Elaine K; O'Rourke, Ann P
2016-04-01
The principal goal of research relative to disasters is to decrease the risk that a hazard will result in a disaster. Disaster studies pursue two distinct directions: (1) epidemiological (non-interventional); and (2) interventional. Both interventional and non-interventional studies require data/information obtained from assessments of function. Non-interventional studies examine the epidemiology of disasters. Interventional studies evaluate specific interventions/responses in terms of their effectiveness in meeting their respective objectives, their contribution to the overarching goal, other effects created, their respective costs, and the efficiency with which they achieved their objectives. The results of interventional studies should contribute to evidence that will be used to inform the decisions used to define standards of care and best practices for a given setting based on these standards. Interventional studies are based on the Disaster Logic Model (DLM) and are used to change or maintain levels of function (LOFs). Relief and Recovery interventional studies seek to determine the effects, outcomes, impacts, costs, and value of the intervention provided after the onset of a damaging event. The Relief/Recovery Framework provides the structure needed to systematically study the processes involved in providing relief or recovery interventions that result in a new LOF for a given Societal System and/or its component functions. 
It consists of the following transformational processes (steps): (1) identification of the functional state prior to the onset of the event (pre-event); (2) assessments of the current functional state; (3) comparison of the current functional state with the pre-event state and with the results of the last assessment; (4) needs identification; (5) strategic planning, including establishing the overall strategic goal(s), objectives, and priorities for interventions; (6) identification of options for interventions; (7) selection of the most appropriate intervention(s); (8) operational planning; (9) implementation of the intervention(s); (10) assessments of the effects and changes in LOFs resulting from the intervention(s); (11) determination of the costs of providing the intervention; (12) determination of the current functional status; (13) synthesis of the findings with current evidence to define the benefits and value of the intervention to the affected population; and (14) codification of the findings into new evidence. Each of these steps in the Framework is a production function that facilitates evaluation, and the outputs of the transformation process establish the current state for the next step in the process. The evidence obtained is integrated into augmenting the respective Response Capacities of a community-at-risk. The ultimate impact of enhanced Response Capacity is determined by studying the epidemiology of the next event.
Madurga-Revilla, P; López-Pisón, J; Samper-Villagrasa, P; Garcés-Gómez, R; García-Íñiguez, J P; Domínguez-Cajal, M; Gil-Hernández, I; Viscor-Zárate, S
2017-11-01
Functional health, a reliable parameter of the impact of disease, should be used systematically to assess prognosis in paediatric intensive care units (PICU). Developing scales for the assessment of functional health is therefore essential. The Paediatric Overall and Cerebral Performance Category (POPC, PCPC) scales have traditionally been used in paediatric studies. The new Functional Status Scale (FSS) was designed to provide more objective results. This study aims to confirm the validity of the FSS compared to the classic POPC and PCPC scales, and to evaluate whether it may also be superior to the latter in assessing neurological function. We conducted a retrospective descriptive study of 266 children with neurological diseases admitted to intensive care between 2012 and 2014. Functional health at discharge and at one year after discharge was evaluated using the PCPC and POPC scales and the new FSS. Global FSS scores were found to be well correlated with all POPC scores (P<.001), except in category 5 (coma/vegetative state). Global FSS score dispersion increases with POPC category. The neurological versions of both scales show a similar correlation. Comparison with the classic POPC and PCPC categories suggests that the new FSS is a useful method for evaluating functional health in our setting. The dispersion of FSS values underlines the poor accuracy of POPC-PCPC compared to the new FSS, which is more disaggregated and objective. Copyright © 2017 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
Object detection with a multistatic array using singular value decomposition
Hallquist, Aaron T.; Chambers, David H.
2014-07-01
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across a surface and that travels down the surface. The detection system converts the return signals from a time domain to a frequency domain, resulting in frequency return signals. The detection system then performs a singular value decomposition for each frequency to identify singular values for each frequency. The detection system then detects the presence of a subsurface object based on a comparison of the identified singular values to expected singular values when no subsurface object is present.
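The pipeline in the claim (FFT of the returns, an SVD per frequency bin, then comparison against the no-object singular values) can be sketched as follows; the array layout, the thresholding rule, and the use of only the largest singular value per bin are our simplifications:

```python
import numpy as np

def detect_anomaly(time_signals, baseline_sv, threshold=3.0):
    """Flag a subsurface object from multistatic radar returns.

    time_signals: array (n_tx, n_rx, n_samples) of time-domain return
    signals for each transceiver antenna pair.
    baseline_sv: expected largest singular value per frequency bin (or a
    scalar) when no subsurface object is present.
    """
    freq = np.fft.rfft(time_signals, axis=-1)  # time domain -> frequency domain
    n_bins = freq.shape[-1]
    top_sv = np.empty(n_bins)
    for k in range(n_bins):
        # Singular values of the tx-by-rx transfer matrix at frequency bin k.
        top_sv[k] = np.linalg.svd(freq[:, :, k], compute_uv=False)[0]
    # Declare a detection when any bin deviates strongly from the baseline.
    return bool(np.any(top_sv > threshold * np.asarray(baseline_sv)))
```

A buried scatterer adds a coherent (roughly rank-one) component to the transfer matrix, which inflates the leading singular value relative to the no-object case; that is what the comparison exploits.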
NASA Astrophysics Data System (ADS)
Schramm, Stefan; Schikowski, Patrick; Lerm, Elena; Kaeding, André; Haueisen, Jens; Baumgarten, Daniel
2016-07-01
Objective measurement of straylight in the human eye with a Shack-Hartmann (SH) wavefront aberrometer is limited in imaging angle. We propose a measurement principle and a point spread function (PSF) reconstruction algorithm to overcome this limitation. In our optical setup, a variable stop replaces the stop conventionally used to suppress reflections and scatter in SH aberrometers. We record images with 21 diameters of the stop. From each SH image, the average intensity of the pupil is computed and normalized. The intensities represent integral values of the PSF. We reconstruct the PSF, which is the derivative of the intensities with respect to the visual angle. A modified Stiles-Holladay approximation is fitted to the reconstructed PSF, resulting in a straylight parameter. A proof-of-principle study was carried out on eight healthy young volunteers. Scatter filters were positioned in front of the volunteers' eyes to simulate straylight. The straylight parameter was compared to the C-Quant measurements and the filter values. The PSF parameter shows strong correlation with the density of the filters and a linear relation to the C-Quant straylight parameter. Our measurement and reconstruction techniques allow for objective straylight analysis of visual angles up to 4 deg.
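The reconstruction step (PSF as the derivative of the integral intensities with respect to visual angle) and the fit can be sketched numerically. Here we fit the classic Stiles-Holladay form s/theta^2 rather than the authors' modified version; the function names are ours:

```python
import numpy as np

def reconstruct_psf(angles_deg, intensities):
    """The PSF is the derivative of the normalized integral intensities
    with respect to the visual angle (numerical differentiation)."""
    return np.gradient(intensities, angles_deg)

def fit_straylight_parameter(angles_deg, psf):
    """Least-squares fit of the classic Stiles-Holladay form
    PSF(theta) ~ s / theta**2, returning the straylight parameter s."""
    model = 1.0 / angles_deg ** 2
    return float(np.dot(model, psf) / np.dot(model, model))
```

Because each stop diameter integrates the PSF out to a different angle, differentiating the intensity-versus-angle curve recovers the PSF itself, which is then summarized by the single straylight parameter.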
NASA Astrophysics Data System (ADS)
Pasam, Gopi Krishna; Manohar, T. Gowri
2016-09-01
Determination of available transfer capability (ATC) requires experience, intuition and exact judgment in order to meet several significant aspects of the deregulated environment. On this basis, this paper proposes two heuristic approaches to compute ATC. The first proposed heuristic algorithm integrates five methods, namely continuation repeated power flow, repeated optimal power flow, radial basis function neural network, back-propagation neural network and adaptive neuro-fuzzy inference system, to obtain ATC. The second proposed heuristic model is used to obtain multiple ATC values. Out of these, a specific ATC value is selected based on a number of social, economic and deregulated environmental constraints, and on specific applications such as optimization, on-line monitoring and ATC forecasting; this is known as multi-objective decision based optimal ATC. The validity of the results obtained through these proposed methods is rigorously verified on various buses of the IEEE 24-bus reliable test system. The results and conclusions presented in this paper are very useful for the planning, operation and maintenance of reliable power in any power system, and for its on-line monitoring in a deregulated power system. In this way, the proposed heuristic methods contribute the best possible approach to assessing multi-objective ATC using integrated methods.
System control of an autonomous planetary mobile spacecraft
NASA Technical Reports Server (NTRS)
Dias, William C.; Zimmerman, Barbara A.
1990-01-01
The goal is to suggest the scheduling and control functions necessary for accomplishing mission objectives of a fairly autonomous interplanetary mobile spacecraft, while maximizing reliability. Goals are to provide an extensible, reliable system conservative in its use of on-board resources, while getting full value from subsystem autonomy, and avoiding the lure of ground micromanagement. A functional layout consisting of four basic elements is proposed: GROUND and SYSTEM EXECUTIVE system functions and RESOURCE CONTROL and ACTIVITY MANAGER subsystem functions. The system executive includes six subfunctions: SYSTEM MANAGER, SYSTEM FAULT PROTECTION, PLANNER, SCHEDULE ADAPTER, EVENT MONITOR and RESOURCE MONITOR. The full configuration is needed for autonomous operation on the Moon or Mars, whereas a reduced version without the planning, schedule adaptation and event monitoring functions could be appropriate for lower-autonomy use on the Moon. An implementation concept is suggested which is conservative in its use of system resources and consists of modules combined with a network communications fabric. A language concept, termed a scheduling calculus, for rapidly performing essential on-board schedule adaptation functions is introduced.
The Michelson Stellar Interferometer Error Budget for Triple Triple-Satellite Configuration
NASA Technical Reports Server (NTRS)
Marathay, Arvind S.; Shiefman, Joe
1996-01-01
This report presents the results of a study of the instrumentation tolerances for a conventional style Michelson stellar interferometer (MSI). The method used to determine the tolerances was to determine the change, due to the instrument errors, in the measured fringe visibility and phase relative to the ideal values. The ideal values are those values of fringe visibility and phase that would be measured by a perfect MSI and are attributable solely to the object being detected. Once the functional relationship for changes in visibility and phase as a function of various instrument errors is understood, it is then possible to set limits on the instrument errors in order to ensure that the measured visibility and phase are different from the ideal values by no more than some specified amount. This was done as part of this study. The limits we obtained are based on a visibility error of no more than 1% and a phase error of no more than 0.063 radians (this comes from 1% of 2(pi) radians). The choice of these 1% limits is supported in the literature. The approach employed in the study involved the use of ASAP (Advanced System Analysis Program) software provided by Breault Research Organization, Inc., in conjunction with parallel analytical calculations. The interferometer accepts object radiation into two separate arms each consisting of an outer mirror, an inner mirror, a delay line (made up of two moveable mirrors and two static mirrors), and a 10:1 afocal reduction telescope. The radiation coming out of both arms is incident on a slit plane which is opaque with two openings (slits). One of the two slits is centered directly under one of the two arms of the interferometer and the other slit is centered directly under the other arm. The slit plane is followed immediately by an ideal combining lens which images the radiation in the fringe plane (also referred to subsequently as the detector plane).
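The acceptance criterion used to set the tolerances can be written as a small check. We read the 1% visibility limit as an absolute difference, and the helper names are ours:

```python
def fringe_visibility(i_max, i_min):
    """Michelson fringe visibility from the fringe-plane intensity extremes."""
    return (i_max - i_min) / (i_max + i_min)

def within_budget(v_meas, v_ideal, phi_meas, phi_ideal,
                  v_tol=0.01, phi_tol=0.063):
    """True when the measured visibility and phase stay within the study's
    error budget relative to the ideal (instrument-free) values:
    1% in visibility and 0.063 rad (1% of 2*pi) in phase."""
    return abs(v_meas - v_ideal) <= v_tol and abs(phi_meas - phi_ideal) <= phi_tol
```

Each instrument error (mirror misalignment, delay-line error, and so on) is then budgeted so that its contribution keeps the measured pair inside these bounds.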
Limited distortion in LSB steganography
NASA Astrophysics Data System (ADS)
Kim, Younhee; Duric, Zoran; Richards, Dana
2006-02-01
It is well known that all information hiding methods that modify the least significant bits introduce distortions into the cover objects. Those distortions have been utilized by steganalysis algorithms to detect that the objects had been modified. It has been proposed that only coefficients whose modification does not introduce large distortions should be used for embedding. In this paper we propose an efficient algorithm for information hiding in the LSBs of JPEG coefficients. Our algorithm uses parity coding to choose the coefficients whose modifications introduce minimal additional distortion. We derive the expected value of the additional distortion as a function of the message length and the probability distribution of the JPEG quantization errors of cover images. Our experiments show close agreement between the theoretical prediction and the actual additional distortion.
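Parity coding over a group of coefficients can be sketched as follows. The real algorithm selects which LSB to flip using the JPEG quantization errors so as to minimize the added distortion; flipping the first coefficient below is a hypothetical stand-in for that selection rule:

```python
def embed_bit_parity(coeffs, bit):
    """Parity-coded LSB embedding over a group of JPEG coefficients (sketch):
    the group carries one message bit as the XOR of its LSBs, so at most one
    LSB flip is needed per embedded bit. Flipping coeffs[0] stands in for
    the paper's minimal-distortion choice of which coefficient to modify."""
    out = list(coeffs)
    if sum(c & 1 for c in out) % 2 != bit:
        out[0] ^= 1
    return out

def extract_bit_parity(coeffs):
    """Recover the embedded bit as the XOR (parity) of the group's LSBs."""
    return sum(c & 1 for c in coeffs) % 2
```

With random covers, half of the groups already carry the right parity, so the expected number of changed coefficients is 0.5 per embedded bit; that observation is the starting point for the expected-distortion analysis as a function of message length.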
NASA Astrophysics Data System (ADS)
Kurz, Felix; Kampf, Thomas; Buschle, Lukas; Schlemmer, Heinz-Peter; Bendszus, Martin; Heiland, Sabine; Ziener, Christian
2016-12-01
In biological tissue, an accumulation of similarly shaped objects with a susceptibility difference to the surrounding tissue generates a local distortion of the external magnetic field in magnetic resonance imaging. It induces stochastic field fluctuations that characteristically influence proton spin diffusion in the vicinity of these magnetic perturbers. The magnetic field correlation that is associated with such local magnetic field inhomogeneities can be expressed in the form of a dynamic frequency autocorrelation function that is related to the time evolution of the measured magnetization. Here, an eigenfunction expansion for two simple magnetic perturber shapes, that of spheres and cylinders, is considered for restricted spin diffusion in a simple model geometry. Then, the concept of generalized moment analysis, an approximation technique that is applied in the study of (non-)reactive processes that involve Brownian motion, allows analytical expressions for the correlation function to be obtained for different exponential decay forms. Results for the biexponential decay for both spherical and cylindrical magnetized objects are derived and compared with the frequently used (less accurate) monoexponential decay forms. They are in asymptotic agreement with the numerically exact value of the correlation function for long and short times.
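Written generically (our notation, not necessarily the paper's), the two decay forms being compared are

```latex
K(t) \approx K(0)\, e^{-t/\tau} \quad \text{(monoexponential)}, \qquad
K(t) \approx K(0)\left[a_{1} e^{-t/\tau_{1}} + a_{2} e^{-t/\tau_{2}}\right],
\quad a_{1} + a_{2} = 1 \quad \text{(biexponential)},
```

with the amplitudes and decay times fixed by matching generalized moments of the exact correlation function; the extra degrees of freedom are why the biexponential form tracks the exact decay more closely at both short and long times.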
Self-paced model learning for robust visual tracking
NASA Astrophysics Data System (ADS)
Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin
2017-01-01
In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which contains many details that could affect the tracking results. Self-paced learning (SPL) has recently been attracting considerable interest in the fields of machine learning and computer vision. SPL is inspired by the learning principle underlying the cognitive process of humans, whose learning process is generally from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking, so reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we discover the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve the real-valued learning weight of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
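The coupling of sample selection and model learning in one objective can be illustrated with the standard hard self-paced regularizer, using a deliberately tiny "model" (a robust mean) in place of an appearance model; lam is the SPL age parameter and all names are ours:

```python
import numpy as np

def spl_weights(losses, lam):
    """Hard self-paced regularizer: a sample is selected (weight 1) only if
    its loss under the current model is below the age parameter lam."""
    return (losses < lam).astype(float)

def spl_fit_mean(samples, lam, n_rounds=5):
    """Alternately (1) select easy samples given the current model and
    (2) refit the model on the selected samples; minimizing the joint
    objective over sample weights and model parameters."""
    model = float(np.mean(samples))
    for _ in range(n_rounds):
        v = spl_weights((samples - model) ** 2, lam)
        if v.sum() == 0:
            break  # nothing is easy enough yet; a real SPL loop grows lam
        model = float(np.sum(v * samples) / v.sum())
    return model
```

The paper replaces the hard 0/1 weights with an error-tolerant, real-valued self-paced function suited to tracking, but the alternating minimization of a single objective is the same idea.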
What is art and how does it differ from aesthetics?
Kreuzbauer, Robert
2017-01-01
Art objects differ from other objects because they are intentionally created to embody a producer's (i.e., artist's) expression. Hence, art objects are social objects whose appeal and value are determined largely by the strategic interaction between the artist and the audience. I discuss several aspects of how strategic interaction can affect an art object's perceived value and aesthetic appeal.
Jing, Xueping; Zheng, Xiujuan; Song, Shaoli; Liu, Kai
2017-12-01
Glomerular filtration rate (GFR), which can be estimated by the Gates method with dynamic kidney single photon emission computed tomography (SPECT) imaging, is a key indicator of renal function. In this paper, an automatic computed tomography (CT)-assisted detection method for the kidney region of interest (ROI) is proposed to achieve objective and accurate GFR calculation. In this method, the CT coronal projection image and the enhanced SPECT synthetic image are first generated and registered together. Then, the kidney ROIs are delineated using a modified level set algorithm. Meanwhile, the background ROIs are also obtained based on the kidney ROIs. Finally, the value of GFR is calculated via the Gates method. The GFR values estimated by the proposed method were consistent with the clinical reports. This automatic method can improve the accuracy and stability of kidney ROI detection for GFR calculation, especially when kidney function has been severely damaged.
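A sketch of the Gates computation downstream of the ROI step: background-subtracted kidney counts, depth (attenuation) correction, percent uptake, then a linear regression to GFR. The attenuation coefficient default is the value commonly used for Tc-99m in soft tissue, and the regression slope and intercept are left as parameters rather than quoting Gates' published constants:

```python
import math

def gates_uptake(kidney_counts, bg_counts, depth_cm, injected_counts, mu=0.153):
    """Percent renal uptake: background-subtracted kidney ROI counts,
    corrected for soft-tissue attenuation at the kidney depth (mu in 1/cm;
    the default is the value commonly used for Tc-99m), relative to the
    injected-dose counts."""
    corrected = (kidney_counts - bg_counts) / math.exp(-mu * depth_cm)
    return 100.0 * corrected / injected_counts

def gates_gfr(total_uptake_percent, slope, intercept):
    """GFR from the total (left + right) percent uptake via a linear
    regression; Gates' published coefficients are deliberately not quoted."""
    return slope * total_uptake_percent - intercept
```

The accuracy of the whole chain hinges on the kidney and background ROIs, which is exactly the step the proposed CT-assisted delineation automates.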
Delay discounting moderates the effect of food reinforcement on energy intake among non-obese women☆
Rollins, Brandi Y.; Dearing, Kelly K.; Epstein, Leonard H.
2011-01-01
Recent theoretical approaches to food intake hypothesize that eating represents a balance between reward-driven motivation to eat and inhibitory executive-function processes; however, this hypothesis remains to be tested. The objective of the current study was to test the hypothesis that the motivation to eat, operationalized by the relative reinforcing value (RRV) of food, and inhibitory processes, assessed by delay discounting (DD), interact to influence energy intake in an ad libitum eating task. Female subjects (n = 24) completed a DD of money procedure, RRV task, and an ad libitum eating task in counterbalanced sessions. RRV of food predicted total energy intake; however, the effect of the RRV of food on energy intake was moderated by DD. Women higher in DD and RRV of food consumed greater total energy, whereas women higher in RRV of food but lower in DD consumed less total energy. Our findings support the hypothesis that reinforcing value and executive-function-mediated processes interactively influence food consumption. PMID:20678532
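DD-of-money procedures are typically scored by fitting a hyperbolic discounting curve to a participant's choices; a one-line sketch (the parameter naming is ours, and this is the conventional model rather than a detail taken from the study):

```python
def hyperbolic_value(amount, delay, k):
    """Hyperbolic discounting: the subjective value of a delayed reward is
    V = A / (1 + k * D). A larger discount rate k means steeper devaluation
    of delayed rewards, interpreted here as weaker inhibitory control."""
    return amount / (1.0 + k * delay)
```

The fitted k is the individual-difference measure that the study crosses with the RRV of food to predict energy intake.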
Fieselmann, Andreas; Dennerlein, Frank; Deuerling-Zheng, Yu; Boese, Jan; Fahrig, Rebecca; Hornegger, Joachim
2011-06-21
Filtered backprojection is the basis for many CT reconstruction tasks. It assumes constant attenuation values of the object during the acquisition of the projection data. Reconstruction artifacts can arise if this assumption is violated. For example, contrast flow in perfusion imaging with C-arm CT systems, which have acquisition times of several seconds per C-arm rotation, can cause this violation. In this paper, we derived and validated a novel spatio-temporal model to describe these kinds of artifacts. The model separates the temporal dynamics due to contrast flow from the scan and reconstruction parameters. We introduced derivative-weighted point spread functions to describe the spatial spread of the artifacts. The model allows prediction of reconstruction artifacts for given temporal dynamics of the attenuation values. Furthermore, it can be used to systematically investigate the influence of different reconstruction parameters on the artifacts. We have shown that with optimized redundancy weighting function parameters the spatial spread of the artifacts around a typical arterial vessel can be reduced by about 70%. Finally, an inversion of our model could be used as the basis for novel dynamic reconstruction algorithms that further minimize these artifacts.
Relations between information, time, and value of water
NASA Astrophysics Data System (ADS)
Weijs, S. V.; Galindo, L. C.
2015-12-01
This research uses stochastic dynamic programming (SDP) as a tool to reveal economic information about managed water resources. An application to the operation of an example hydropower reservoir is presented. SDP explicitly balances the marginal value of water for immediate use against the expected opportunity cost of not having more water available for future use. The result of an SDP analysis is a steady-state policy, which gives the optimal decision as a function of the state. A commonly applied form gives the optimal release as a function of the month, the current reservoir level and the current inflow to the reservoir. The steady-state policy can be complemented with a real-time management strategy that can depend on more real-time information. An information-theoretical perspective is given on how this information influences the value of water, and how to deal with that influence in hydropower reservoir optimization. This results in some conjectures about how the information gain from real-time operation could affect the optimal long-term policy. Another issue is the sharing of increased benefits that result from this information gain in a multi-objective setting. It is argued that this should be accounted for in negotiations about an operation policy.
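The Bellman balance described above (immediate benefit versus the discounted expected opportunity cost of future water) can be sketched with a toy value iteration over a discretized reservoir; every number here is illustrative, not from the study:

```python
import numpy as np

def solve_release_policy(n_levels=10, releases=(0, 1, 2), p_wet=0.5, n_iter=500):
    """Toy reservoir SDP: state = discrete storage level, inflow is 0 or 1
    unit with probabilities (1 - p_wet) and p_wet, the immediate benefit of
    releasing r units is sqrt(r) (concave hydropower revenue), and water
    above capacity spills. Value iteration returns the steady-state policy:
    optimal release as a function of storage level."""
    V = np.zeros(n_levels)
    policy = np.zeros(n_levels, dtype=int)
    for _ in range(n_iter):
        V_new = np.empty(n_levels)
        for s in range(n_levels):
            best_q, best_r = -np.inf, 0
            for r in releases:
                if r > s:
                    continue  # cannot release more water than is stored
                q = 0.0
                for inflow, p in ((0, 1.0 - p_wet), (1, p_wet)):
                    s_next = min(s - r + inflow, n_levels - 1)  # spill at capacity
                    # Immediate benefit plus discounted expected future value:
                    q += p * (np.sqrt(r) + 0.95 * V[s_next])
                if q > best_q:
                    best_q, best_r = q, r
            V_new[s] = best_q
            policy[s] = best_r
        V = V_new
    return policy
```

The returned array maps storage level to optimal release, i.e. the steady-state policy form described in the abstract; conditioning on month and current inflow would simply enlarge the state.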
Pulmonary function of children with acute leukemia in maintenance phase of chemotherapy☆
de Macêdo, Thalita Medeiros Fernandes; Campos, Tania Fernandes; Mendes, Raquel Emanuele de França; França, Danielle Corrêa; Chaves, Gabriela Suéllen da Silva; de Mendonça, Karla Morganna Pereira Pinto
2014-01-01
OBJECTIVE: The aim of this study was to assess the pulmonary function of children with acute leukemia. METHODS: Cross-sectional observational analytical study that enrolled 34 children divided into groups A (17 with acute leukemia in the maintenance phase of chemotherapy) and B (17 healthy children). The groups were matched for sex, age and height. Spirometry was measured using a spirometer Microloop Viasys(r) in accordance with American Thoracic Society and European Respiratory Society guidelines. Maximal respiratory pressures were measured with an MVD300 digital manometer (Globalmed(r)). Maximal inspiratory pressures and maximal expiratory pressures were measured from residual volume and total lung capacity, respectively. RESULTS: Group A showed a significant decrease in maximal inspiratory pressures when compared to group B. No significant difference was found between the spirometric values of the two groups, nor was there any difference between maximal inspiratory pressure and maximal expiratory pressure values in group A compared to the lower limit values proposed as reference. CONCLUSION: Children with acute leukemia, myeloid or lymphoid, during the maintenance phase of chemotherapy exhibited unchanged spirometric variables and maximal expiratory pressure; however, there was a decrease in inspiratory muscle strength. PMID:25510995
Sleep Enhances a Spatially Mediated Generalization of Learned Values
ERIC Educational Resources Information Center
Javadi, Amir-Homayoun; Tolat, Anisha; Spiers, Hugo J.
2015-01-01
Sleep is thought to play an important role in memory consolidation. Here we tested whether sleep alters the subjective value associated with objects located in spatial clusters that were navigated to in a large-scale virtual town. We found that sleep enhances a generalization of the value of high-value objects to the value of locally clustered…
Spatial attention determines the nature of nonverbal number representation.
Hyde, Daniel C; Wood, Justin N
2011-09-01
Coordinated studies of adults, infants, and nonhuman animals provide evidence for two systems of nonverbal number representation: a "parallel individuation" system that represents individual items and a "numerical magnitude" system that represents the approximate cardinal value of a group. However, there is considerable debate about the nature and functions of these systems, due largely to the fact that some studies show a dissociation between small (1-3) and large (>3) number representation, whereas others do not. Using event-related potentials, we show that it is possible to determine which system will represent the numerical value of a small number set (1-3 items) by manipulating spatial attention. Specifically, when attention can select individual objects, an early brain response (N1) scales with the cardinal value of the display, the signature of parallel individuation. In contrast, when attention cannot select individual objects or is occupied by another task, a later brain response (P2p) scales with ratio, the signature of the approximate numerical magnitude system. These results provide neural evidence that small numbers can be represented as approximate numerical magnitudes. Further, they empirically demonstrate the importance of early attentional processes to number representation by showing that the way in which attention disperses across a scene determines which numerical system will deploy in a given context.
Accuracy of mini peak flow meters in indicating changes in lung function in children with asthma.
Sly, P. D.; Cahill, P.; Willet, K.; Burton, P.
1994-01-01
OBJECTIVE--To assess whether mini flow meters used to measure peak expiratory flow can track changes in lung function and indicate clinically important changes. DESIGN--Comparison of measurements with a spirometer and different brands of mini flow meter; the meters were allocated to subjects haphazardly. SUBJECTS--12 boys with asthma aged 11 to 17 attending boarding school. MAIN OUTCOME MEASURES--Peak expiratory flow measured twice daily for three months with a spirometer and at least one of four brands of mini flow meter. RESULTS--The relation between changes in lung function measured with the spirometer and those measured with the mini flow meters was generally poor. In all, 26 episodes (range 1-3 in an individual child) of clinically important deterioration in lung function were detected from the records obtained with the spirometer. One mini flow meter detected six of 19 episodes, one detected six of 15, one detected six of 18, and one detected three of 21. CONCLUSIONS--Not only are the absolute values of peak expiratory flow obtained with mini flow meters inaccurate but the clinical message may also be incorrect. These findings do not imply that home monitoring of peak expiratory flow has no place in the management of childhood asthma but that the values obtained should be interpreted cautiously. PMID:8148680
2012-06-01
Reference values of maximum isometric muscle force obtained in 270 children aged 4-16 years by hand-held dynamometry (Neuromuscul Disord. 2001;11(5)...). ...evaluation of specific muscle groups responsible for fatigue-related changes. Since fiber type proportion is determined by its innervation, evaluating muscle fiber output provides downstream information about the integrity of the motor neuron. Objective: To determine the association between muscle
Building a global business continuity programme.
Lazcano, Michael
2014-01-01
Business continuity programmes provide an important function within organisations, especially when aligned with and supportive of the organisation's goals, objectives and organisational culture. Continuity programmes for large, complex international organisations, unlike those for compact national companies, are more difficult to design, build, implement and maintain. Programmes for international organisations require attention to structural design, support across organisational leadership and hierarchy, seamless integration with the organisation's culture, measured success and demonstrated value. This paper details practical, but sometimes overlooked considerations for building successful global business continuity programmes.
Astronomical activities of the Apollo orbital science photographic team
NASA Technical Reports Server (NTRS)
Mercer, R. D.
1974-01-01
A partial accounting of Apollo Orbital Science Photographic Team (APST) work is presented as reported by one of its members, who provided scientific recommendations for, guidance in, and reviews of photography in astronomy. Background on the formation of the team and its functions and management is discussed. It is concluded that the APST clearly fulfilled the overall objective for which it was established - to improve the scientific value of the Apollo lunar missions. Specific reasons for this success are given.
Combining Trust and Behavioral Analysis to Detect Security Threats in Open Environments
2010-11-01
behavioral feature values. This would provide a baseline notional object trust and is formally defined as follows: TO(1) ∈ [0, 1] = Σ_{0,n:νbt} w_tP(S) (8) and TO(2) ∈ [0, 1] = Σ w_tP(S) · identity(O, P) (9), respectively. The w_tP weight function determines the significance of a particular behavioral feature in the final trust calculation. Note that the weight
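The weighted trust formulation in Eqs. (8)-(9) can be sketched as a weight-normalized sum of per-feature scores, optionally modulated by an identity-match factor; all feature names, weights, and scores below are invented for illustration:

```python
# Minimal sketch of a weighted behavioral trust score in the spirit of
# Eqs. (8)-(9): trust is a weight-normalized sum of per-feature scores,
# optionally modulated by an identity-match factor. Names are illustrative.

def trust_score(feature_scores, weights, identity_factor=1.0):
    """feature_scores, weights: dicts keyed by behavioral feature name.
    Each score and the identity factor are assumed to lie in [0, 1]."""
    total_weight = sum(weights[f] for f in feature_scores)
    if total_weight == 0:
        return 0.0
    weighted = sum(weights[f] * feature_scores[f] for f in feature_scores)
    return (weighted / total_weight) * identity_factor  # stays in [0, 1]

scores = {"msg_rate": 0.9, "uptime": 0.8, "protocol_conformance": 1.0}
weights = {"msg_rate": 1.0, "uptime": 0.5, "protocol_conformance": 2.0}
baseline = trust_score(scores, weights)             # Eq. (8)-style trust
identity_aware = trust_score(scores, weights, 0.7)  # Eq. (9)-style trust
```

Normalizing by the total weight keeps the score in [0, 1] regardless of how many features are observed, which matches the bounded trust values in the snippet.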
Primacy of memory linkage in choice among valued objects.
Jones, Gregory V; Martin, Maryanne
2006-12-01
Three psychological levels at which an object may be processed have been characterized by Norman (2004) in terms of the object's appearance, its usability, and its capacity to elicit memories. A series of experiments was carried out to investigate participants' choices among valued objects recalled in accordance with these three criteria. It was found consistently that objects selected for their capacity to elicit memories--here termed mnemoactive objects--were valued significantly more than the other objects. Even the financial or social importance of an object was outweighed by the importance of its memory link; possible implications for the economic analysis of subjective well-being are briefly discussed. The same pattern of mnemoactive dominance was found across age and gender. Appropriate choice of objects may allow an individual to exert a degree of indirect voluntary control over the activation of involuntary autobiographical memories, providing a new perspective on Proust's approach to memory.
Cancho Gil, Ma J; Díz Rodríguez, R; Vírseda Chamorro, M; Alpuente Román, C; Cabrera Cabrera, J A; Paños Lozano, P
2005-04-01
Extracorporeal shock wave lithotripsy (ESWL) is fundamental in the treatment of lithiasis. However, there is evidence that it can produce renal damage. The objective of our study was to determine the degree of impairment of glomerular and tubular function after ESWL, and the influence of stone location on the type of renal damage. A prospective longitudinal study was carried out in 14 patients with normal renal function who underwent ESWL. We determined the baseline levels, and the levels at 24 hours and at the 4th and 10th days post-ESWL, of microalbuminuria (MA), which assesses glomerular function, and of N-acetyl glucosaminidase (NAG) and alanine aminopeptidase (AAP), which assess tubular function. The baseline levels of MA, NAG and AAP did not show significant differences in relation to stone localization. A significant increase in all three parameters was observed only at 24 hours post-ESWL. No significant differences were observed in the variation of MA, AAP and NAG levels after treatment in relation to stone localization. Glomerular and tubular damage occurs after ESWL. This damage is not related to the pelvic or caliceal location of the stones. In patients with previously normal renal function, the renal damage recovers by the 4th day post-ESWL.
Is LMWH Sufficient for Anticoagulant Prophylaxis in Bariatric Surgery? Prospective Study.
Moaad, Farraj; Zakhar, Bramnik; Anton, Kvasha; Moner, Merie; Wisam, Sbeit; Safy, Farraj; Igor, Waksman
2017-09-01
The objective of this study was to evaluate the coagulation profile, by thromboelastography, of morbidly obese patients undergoing bariatric surgery. Morbid obesity entails an increased risk of thromboembolic events, and there is no clear protocol for thromboembolic prophylaxis in bariatric surgery regarding the timing and duration of treatment. Thromboelastography provides data on the coagulation process from clot formation until fibrinolysis. Ninety-three morbidly obese patients were prospectively recruited over a 2-year period. The coagulation profile was measured by thromboelastography before surgery, in the immediate postoperative period (within 3 h of surgery), and in the late postoperative period (10-14 days after surgery). Venous thromboembolic prophylaxis was achieved with once-daily low molecular weight heparin (LMWH). Of the eligible patients, 67 underwent sleeve gastrectomy and 23 underwent Roux-en-Y gastric bypass. Normal values of coagulation factor function, clotting time, and fibrin function, as measured by R, K, and α (angle), were demonstrated, in addition to higher maximal amplitude (MA) values, reflecting increased platelet function. The average MA value before surgery was above normal and continued to rise consistently in the immediate as well as the early postoperative period. Morbidly obese patients have a strong tendency toward thrombosis, as demonstrated by pathologically elevated MA values. Altered coagulation profiles were demonstrated 2 weeks postoperatively; thus, prophylaxis continued for at least 2 weeks after bariatric surgery should be considered. Since LMWH alone is not sufficient as thromboembolic prophylaxis, we recommend adding antiplatelet therapy. Further evaluation of appropriate thromboprophylaxis is warranted.
Classification of subsurface objects using singular values derived from signal frames
Chambers, David H; Paglieroni, David W
2014-05-06
The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N.times.N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
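A sketch of the described pipeline (FFT of the real-valued returns, sub-band selection, SVD of the N x N transmitter-receiver matrix), using assumed array sizes, an arbitrary sub-band, and a mean over sub-band samples as one possible way to collapse them into a single matrix:

```python
import numpy as np

# Illustrative sketch of the feature pipeline: FFT the real-valued return
# signals of an N-transceiver multistatic array, pick a spectral sub-band,
# form the N x N complex matrix over transmitter-receiver pairs, and keep
# its singular values as the feature vector. Sizes and sub-band are examples.

rng = np.random.default_rng(0)
N, samples = 4, 256
returns = rng.standard_normal((N, N, samples))   # one trace per Tx-Rx pair

spectra = np.fft.rfft(returns, axis=-1)          # complex-valued spectra
band = slice(10, 20)                             # user-designated sub-band

# Collapse the sub-band samples of each Tx-Rx pair (here: mean) to get one
# N x N complex matrix, then take its singular values via SVD.
M = spectra[:, :, band].mean(axis=-1)
feature_vector = np.linalg.svd(M, compute_uv=False)  # length N, descending
```

The singular values are invariant to unitary mixing of transmit and receive channels, which is one reason they make robust features for the downstream classifier.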
NASA Astrophysics Data System (ADS)
Sutrisno, Widowati, Tjahjana, R. Heru
2017-12-01
Future costs in many industrial problems are obviously uncertain, so a mathematical analysis for problems with uncertain costs is needed. In this article, we deal with fuzzy expected value analysis to solve an integrated supplier selection problem with uncertain costs, where the cost uncertainty is modeled by fuzzy variables. We formulate the mathematical model of the problem as a fuzzy expected value based quadratic optimization with a total cost objective function and solve it using expected value based fuzzy programming. In the numerical examples performed by the authors, the supplier selection problem was solved: the optimal supplier was selected for each time period, the optimal volume of each product to be purchased from each supplier in each time period was determined, and the product stock level was controlled so that it followed the given reference level.
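In expected-value-based fuzzy programming of this kind, uncertain costs are commonly modeled as triangular fuzzy variables, whose credibility-theoretic expected value has the closed form E = (a + 2b + c)/4; this reduces the fuzzy cost objective to a crisp one. A minimal sketch (supplier names and cost figures are invented for illustration):

```python
def triangular_expected_value(a, b, c):
    """Expected value of a triangular fuzzy variable (a, b, c) under
    credibility theory: E = (a + 2b + c) / 4."""
    return (a + 2 * b + c) / 4.0

# Hypothetical per-unit costs of two suppliers as triangular fuzzy numbers
# (pessimistic, most likely, optimistic).
fuzzy_costs = {"supplier_A": (8.0, 10.0, 13.0), "supplier_B": (9.0, 9.5, 11.0)}

# Reduce the fuzzy costs to crisp expected costs, then pick the cheaper one.
expected = {s: triangular_expected_value(*tfn) for s, tfn in fuzzy_costs.items()}
best = min(expected, key=expected.get)
```

In the full model this crisp expected cost would enter the quadratic objective for each supplier and period rather than drive a simple minimum.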
1984-01-01
Recent investigations suggest that dispersion in aquifers is scale dependent and a function of the heterogeneity of aquifer materials. Theoretical stochastic studies indicate that determining hydraulic-conductivity variability in three dimensions is important in analyzing the dispersion process. Even though field methods are available to approximate hydraulic conductivity in three dimensions, the methods are not generally used because of high cost of field equipment and because measurement and analysis techniques are cumbersome and time consuming. The hypothesis of this study is that field-determined values of dispersivity are scale dependent and that they may be described as a function of hydraulic conductivity in three dimensions. The objectives of the study at the Bemidji research site are to (1) determine hydraulic conductivity of the porous media in three dimensions, (2) determine field values of dispersivity and its scale dependence on hydraulic conductivity, and (3) develop and apply a computerized data-collection, storage, and analysis system for field use in comprehensive determination of hydraulic conductivity and dispersivity. Plans for this investigation involve a variety of methods of analysis. Hydraulic conductivity will be determined separately in the horizontal and vertical planes of the hydraulic-conductivity ellipsoid. Field values of dispersivity will be determined by single-well and doublet-well injection or withdrawal tests with tracers. A computerized data-collection, storage, and analysis system to measure pressure, flow rate, tracer concentrations, and temperature will be designed for field testing. Real-time computer programs will be used to analyze field data. The initial methods of analysis will be utilized to meet the objectives of the study. Preliminary field data indicate the aquifer underlying the Bemidji site is vertically heterogeneous, cross-bedded outwash. 
Preliminary analysis of the flow field around a hypothetical doublet-well tracer test indicates that the location of the wells can affect the field value of dispersivity. Preliminary analysis also indicates that different values of dispersivity may result from anisotropic conditions in tests in which observation wells are located at equal radial distances from either the injection or withdrawal well.
NASA Astrophysics Data System (ADS)
Namysłowska-Wilczyńska, Barbara; Wynalek, Janusz
2017-12-01
Geostatistical methods make the analysis of measurement data possible. This article addresses the use of geostatistics in the spatial analysis of displacements based on geodetic monitoring. Using methods of applied (spatial) statistics, the research deals with current issues in space-time analysis and the modeling of displacements and deformations, as applied to any large-area objects on which geodetic monitoring is conducted (e.g., water dams, urban areas in the vicinity of deep excavations, areas at a macro-regional scale subject to anthropogenic influences caused by mining, etc.). These problems are crucial, especially for the safety assessment of important hydrotechnical constructions, as well as for modeling and estimating mining damage. Based on the geodetic monitoring data, a substantial body of empirical material was assembled, comprising many years of research results on the displacements of controlled points situated on the crown and foreland of an exemplary earth dam, used to assess the behaviour and safety of the object during its whole operating period. A macro-regional-scale research method was applied to investigate phenomena connected with the operation of the analysed large hydrotechnical construction. Applying the semivariogram function enabled the spatial variability analysis of displacements. Isotropic empirical semivariograms were calculated and then the parameters of theoretical analytical functions approximating the courses of this empirical variability measure were determined. Using ordinary (block) kriging at the grid nodes of an elementary spatial grid covering the analysed object, the estimated mean displacement values Z* were calculated, together with the accompanying uncertainty assessment, the standard deviation of estimation σk.
Raster maps of the distribution of the estimated means Z* and perspective raster maps of the estimation deviations σk were obtained for selected years (1995 and 2007), taking the ground height of 136 m a.s.l. into account. To calculate raster maps of the interpolated Z* values, quick interpolation methods were also used, such as the inverse distance squared technique, a linear kriging model, and spline kriging, which made it possible to recognize the general background of displacements without assessing the accuracy of the Z* estimates, i.e., the value of σk. These maps also relate to 1995 and 2007 and to the elevation. As a result of applying these techniques, clear boundaries of subsidence, uplift, and horizontal displacement on the examined hydrotechnical object were delineated, which can be interpreted as areas of local deformation of the object, important for the safety of the construction. The outcome of the geostatistical research conducted, including the structural analysis, semivariogram modeling, and estimation of the displacements of the hydrotechnical object, is a rich set of cartographic products (semivariograms, raster maps, block diagrams) that provide a spatial visualization of the various analyses of the monitored displacements. The prepared geostatistical model (3D) of displacement variability (analysed over the area of the dam, its operating period, and its height) will be useful not only for the correct assessment of displacements and deformations, but will also make it possible to forecast these phenomena, which is crucial for the operating safety of such constructions.
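The first step of the workflow described above, the isotropic empirical semivariogram, can be sketched as follows; the monitoring-point coordinates and displacement values are synthetic stand-ins:

```python
import numpy as np

# Minimal sketch of an isotropic empirical semivariogram:
# gamma(h) = mean of 0.5 * (z_i - z_j)^2 over point pairs whose separation
# distance falls in lag bin h. Coordinates and values below are synthetic.

def empirical_semivariogram(coords, values, lags):
    coords, values = np.asarray(coords, float), np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)          # all point pairs
    dists = np.linalg.norm(coords[i] - coords[j], axis=1)
    sqdiff = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.full(len(lags) - 1, np.nan)            # NaN for empty bins
    for k in range(len(lags) - 1):
        in_bin = (dists >= lags[k]) & (dists < lags[k + 1])
        if in_bin.any():
            gamma[k] = sqdiff[in_bin].mean()
    return gamma

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(50, 2))   # monitoring point positions
values = rng.normal(0, 5, size=50)           # e.g. vertical displacements
gamma = empirical_semivariogram(coords, values, lags=np.arange(0, 60, 10))
```

In the article's workflow, a theoretical model (e.g. spherical or exponential) would then be fitted to these binned values before kriging on the elementary grid.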
Sun, Yuxiao; Wang, Jianan; Heine, Lizette; Huang, Wangshan; Wang, Jing; Hu, Nantu; Hu, Xiaohua; Fang, Xiaohui; Huang, Supeng; Laureys, Steven; Di, Haibo
2018-04-12
Behavioral assessment serves as the gold standard for the diagnosis of patients with disorders of consciousness (DOC). The item "Functional Object Use" in the motor function sub-scale of the Coma Recovery Scale-Revised (CRS-R) is a key item in differentiating between the minimally conscious state (MCS) and emergence from MCS (EMCS). However, previous studies suggested that certain specific stimuli, especially self-relevant ones, can affect DOC patients' scores on behavioral assessment scales. We therefore attempted to find out whether personalized objects can improve the diagnosis of EMCS in the assessment of Functional Object Use by comparing the use of patients' favorite objects and other common objects in MCS patients. Twenty-one post-comatose patients diagnosed as MCS were prospectively included. The item "Functional Object Use" was assessed using personalized objects (e.g., a cigarette, paper) and non-personalized objects, which were presented in random order. The remaining assessments were performed following the standard protocol of the CRS-R. The differences between functional use of the two types of objects were analyzed with the McNemar test. The incidence of Functional Object Use was significantly higher with personalized objects than with non-personalized objects in the CRS-R. Five of the 21 studied MCS patients who were assessed with non-personalized objects were re-diagnosed as EMCS with personalized objects (χ2 = 5, df = 1, p < 0.05). The personalized objects employed here seem to be more effective in eliciting patients' responses than non-personalized objects during the assessment of Functional Object Use in DOC patients. ClinicalTrials.gov: NCT02988206; date of registration: 2016/12/12.
Acoustic energy relations in Mudejar-Gothic churches.
Zamarreño, Teófilo; Girón, Sara; Galindo, Miguel
2007-01-01
Extensive objective energy-based parameters have been measured in 12 Mudejar-Gothic churches in the south of Spain. Measurements took place in unoccupied churches according to the ISO 3382 standard. Monaural objective measures in the 125-4000 Hz frequency range and their spatial distributions were obtained. The acoustic parameters clarity C80, definition D50, sound strength G, and center time Ts were derived from impulse response analysis using a maximum length sequence measurement system in each church. These parameters, spectrally averaged according to the criteria most widely used for assessing acoustic quality in auditoria, were studied as a function of source-receiver distance. The experimental results were compared with predictions given by classical theory and by other existing models proposed for concert halls and churches. An analytical semi-empirical model based on the measured values of the C80 parameter is proposed in this work for these spaces. The good agreement between predicted values and experimental data for definition, sound strength, and center time in the churches analyzed shows that the model can be used for design predictions and other purposes with reasonable accuracy.
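The energy parameters named above have standard ISO 3382 definitions that can be computed directly from a measured impulse response: D50 is the early-to-total energy ratio over the first 50 ms, C80 the early-to-late energy ratio over 80 ms in dB, and Ts the temporal centroid of the energy. A sketch on a synthetic exponential decay (sampling rate and decay constant are arbitrary):

```python
import numpy as np

# ISO 3382 energy parameters computed from a room impulse response p(t):
# definition D50, clarity C80 (dB), and center time Ts. The exponentially
# decaying impulse response below is a synthetic illustration only.

def energy_parameters(p, fs):
    e = np.asarray(p, float) ** 2                 # instantaneous energy
    t = np.arange(len(e)) / fs
    n50, n80 = int(0.050 * fs), int(0.080 * fs)   # 50 ms / 80 ms in samples
    d50 = e[:n50].sum() / e.sum()                 # early-to-total ratio
    c80 = 10 * np.log10(e[:n80].sum() / e[n80:].sum())  # early/late, dB
    ts = (t * e).sum() / e.sum()                  # center time, seconds
    return d50, c80, ts

fs = 8000
t = np.arange(int(2.0 * fs)) / fs
impulse_response = np.exp(-t / 0.4)               # synthetic decay envelope

d50, c80, ts = energy_parameters(impulse_response, fs)
```

Real measurements would band-filter the impulse response (125-4000 Hz octaves, as in the study) and compute these parameters per band before spectral averaging.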
CONSISTENT SCALING LAWS IN ANELASTIC SPHERICAL SHELL DYNAMOS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadav, Rakesh K.; Gastine, Thomas; Christensen, Ulrich R.
2013-09-01
Numerical dynamo models always employ parameter values that differ by orders of magnitude from the values expected in natural objects. However, such models have been successful in qualitatively reproducing properties of planetary and stellar dynamos. This qualitative agreement fuels the idea that both numerical models and astrophysical objects may operate in the same asymptotic regime of dynamics. This can be tested by exploring the scaling behavior of the models. For convection-driven incompressible spherical shell dynamos with constant material properties, scaling laws had been established previously that relate flow velocity and magnetic field strength to the available power. Here we analyze 273 direct numerical simulations using the anelastic approximation, involving also cases with radius-dependent magnetic, thermal, and viscous diffusivities. These better represent conditions in gas giant planets and low-mass stars compared to Boussinesq models. Our study provides strong support for the hypothesis that both mean velocity and mean magnetic field strength scale as a function of the power generated by buoyancy forces in the same way for a wide range of conditions.
Prior knowledge guided active modules identification: an integrated multi-objective approach.
Chen, Weiqi; Liu, Jing; He, Shan
2017-03-14
An active module, defined as an area of a biological network that shows striking changes in molecular activity or phenotypic signatures, is important for revealing dynamic and process-specific information correlated with cellular or disease states. A prior-information-guided active module identification approach is proposed to detect modules that are both active and enriched in prior knowledge. We formulate active module identification as a multi-objective optimisation problem consisting of two conflicting objective functions: maximising the coverage of known biological pathways and maximising the activity of the module. The network is constructed from a protein-protein interaction database. A beta-uniform mixture model is used to estimate the distribution of p-values and to generate activity scores from microarray data. A multi-objective evolutionary algorithm is used to search for Pareto-optimal solutions. We also incorporate a novel constraint based on algebraic connectivity to ensure the connectedness of the identified active modules. Application of the proposed algorithm to a small yeast molecular network shows that it can identify modules with high activity and with more cross-talk nodes between related functional groups. The Pareto solutions generated by the algorithm provide solutions with different trade-offs between prior knowledge and novel information from the data. The approach is then applied to microarray data from diclofenac-treated yeast cells to build a network and identify modules that elucidate the molecular mechanisms of diclofenac toxicity and resistance. Gene ontology analysis is applied to the identified modules for biological interpretation. Integrating knowledge of functional groups into the identification of active modules is an effective method and provides flexible control of the balance between a purely data-driven method and prior-information guidance.
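The Pareto-optimal trade-off between the two objectives (pathway coverage and module activity, both maximized) amounts to keeping the non-dominated candidates. A minimal dominance filter, with invented objective scores:

```python
# Minimal Pareto-front filter over candidate modules. Each candidate is a
# (coverage, activity) pair, both to be maximized; the scores are invented.

def pareto_front(solutions):
    """Return the non-dominated subset, preserving input order."""
    def dominated(a, b):
        # b dominates a: at least as good in both objectives and not equal.
        return b[0] >= a[0] and b[1] >= a[1] and b != a
    return [s for s in solutions
            if not any(dominated(s, other) for other in solutions)]

candidates = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4), (0.3, 0.1)]
front = pareto_front(candidates)
```

An evolutionary algorithm such as the one used in the paper maintains and refines exactly this kind of front across generations instead of filtering a fixed list.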
Hagman, George
2002-06-01
This paper proposes an integrative psychoanalytic model of the sense of beauty. The following definition is used: beauty is an aspect of the experience of idealisation in which an object(s), sound(s) or concept(s) is believed to possess qualities of formal perfection. The psychoanalytic literature regarding beauty is explored in depth and fundamental similarities are stressed. The author goes on to discuss the following topics: (1) beauty as sublimation: beauty reconciles the polarisation of self and world; (2) idealisation and beauty: the love of beauty is an indication of the importance of idealisation during development; (3) beauty as an interactive process: the sense of beauty is interactive and intersubjective; (4) the aesthetic and non-aesthetic emotions: specific aesthetic emotions are experienced in response to the formal design of the beautiful object; (5) surrendering to beauty: beauty provides us with an occasion for transcendence and self-renewal; (6) beauty's restorative function: the preservation or restoration of the relationship to the good object is of utmost importance; (7) the self-integrative function of beauty: the sense of beauty can also reconcile and integrate self-states of fragmentation and depletion; (8) beauty as a defence: in psychopathology, beauty can function defensively for the expression of unconscious impulses and fantasies, or as protection against self-crisis; (9) beauty and mortality: the sense of beauty can alleviate anxiety regarding death and feelings of vulnerability. In closing the paper, the author offers a new understanding of Freud's emphasis on love of beauty as a defining trait of civilisation. For a people not to value beauty would mean that they cannot hope and cannot assert life over the inevitable and ubiquitous forces of entropy and death.
A component compensation method for magnetic interferential field
NASA Astrophysics Data System (ADS)
Zhang, Qi; Wan, Chengbiao; Pan, Mengchun; Liu, Zhongyan; Sun, Xiaoyong
2017-04-01
A new component searching with scalar restriction method (CSSRM) is proposed to compensate the magnetic interference field caused by the ferromagnetic material of a platform and to improve magnetometer measurement performance. In CSSRM, the objective function for parameter estimation minimizes the difference between the measured and reference values of the magnetic field (components and magnitude). Two scalar compensation methods are compared with CSSRM, and the simulation results indicate that CSSRM can estimate all interference parameters and the external magnetic field vector with high accuracy. The magnetic field magnitude and components compensated with CSSRM coincide with the true values very well. An experiment was carried out with a tri-axial fluxgate magnetometer mounted in a measurement system together with inertial sensors. After compensation, the error standard deviations of both the magnetic field components and the magnitude are reduced from thousands of nT to less than 20 nT. This suggests that CSSRM provides an effective way to improve the performance of magnetic interference field compensation.
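The paper's CSSRM details are not reproduced here, but the underlying idea of fitting interference parameters by minimizing the measured-minus-reference difference can be sketched with a generic linear interference model B_meas = A·B_true + b (soft-iron matrix A, hard-iron offset b), estimated by least squares; all numbers below are synthetic:

```python
import numpy as np

# Generic stand-in for component-based compensation (not the paper's exact
# CSSRM algorithm): assume B_meas = A @ B_true + b, fit A and b by least
# squares from samples where the reference field is known, then invert.

rng = np.random.default_rng(2)
A_true = np.eye(3) + 0.05 * rng.standard_normal((3, 3))  # soft-iron matrix
b_true = np.array([120.0, -80.0, 40.0])                  # hard-iron offset, nT

B_ref = rng.standard_normal((200, 3)) * 3.0e4   # reference field samples, nT
B_meas = B_ref @ A_true.T + b_true              # simulated measurements

# Solve B_meas = [B_ref, 1] @ [A.T; b] for all three axes in one lstsq call.
X = np.hstack([B_ref, np.ones((len(B_ref), 1))])
coef, *_ = np.linalg.lstsq(X, B_meas, rcond=None)
A_est, b_est = coef[:3].T, coef[3]

# Apply the inverse model to recover the compensated (true) field.
B_comp = (B_meas - b_est) @ np.linalg.inv(A_est).T
```

With noiseless synthetic data the fit is exact; with real fluxgate data the residual after this step is what the paper reports shrinking from thousands of nT to under 20 nT.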
The beauty of sensory ecology.
Otálora-Luna, Fernando; Aldana, Elis
2017-08-10
Sensory ecology is a discipline that focuses on how living creatures use information to survive, but not to live. By trans-defining the orthodox concept of sensory ecology, a serious heterodox question arises: how do organisms use their senses to live, i.e. to enjoy or suffer life? To respond to such a query the objective (time-independent) and emotional (non-rational) meaning of symbols must be revealed. Our program is distinct from both the neo-Darwinian and the classical ecological perspective because it does not focus on survival values of phenotypes and their functions, but asks for the aesthetic effect of biological structures and their symbolism. Our message recognizes that sensing, apart from having a survival value, also has a beauty value. Thus, we offer a provoking and inspiring new view on the sensory relations of 'living things' and their surroundings, where the innovating power of feelings has more weight than the privative power of reason.
Skin Physiology of the Neonate and Infant: Clinical Implications
Oranges, Teresa; Dini, Valentina; Romanelli, Marco
2015-01-01
Significance: The skin is a complex and dynamic organ that performs several vital functions. The maturation process of the skin starts at birth with the adaptation of the skin to an environment that is comparatively dry relative to the in utero milieu. This adaptive flexibility results in the unique properties of infant skin. To deliver appropriate care to infant skin, it is necessary to understand that it is evolving with unique characteristics. Recent Advances: The role of biophysical noninvasive techniques in the assessment of skin development underlines the importance of an objective evaluation of skin physiology parameters. Skin hydration, transepidermal water loss, and pH values are measurable with specific instruments that give us an accurate and reproducible assessment during infant skin maturation. The recording of these values, following standard measurement procedures, allows us to evaluate the integrity of the skin barrier and to monitor the functionality of the maturing skin over time. Critical Issues: During barrier development, impaired skin function makes the skin vulnerable to chemical damage, microbial infections, and skin diseases, possibly compromising the general health of the infant. Preterm newborns, during the first weeks of life, have an even less developed skin barrier and, therefore, are even more at risk. Thus, it is extremely important to evaluate the risk of infection, skin breakdown, topical agent absorption, and the risk of thermoregulation failure. Future Directions: Detailed and objective evaluations of infant skin maturation are necessary to improve infant skin care. The results of these evaluations should be formed into general protocols that will allow doctors and caregivers to give more personalized care to full-term newborns, preterm newborns, and infants. PMID:26487977
Hsu, Yen-Hsuan; Huang, Ching-Feng; Tu, Min-Chien; Hua, Mau-Sun
2014-01-01
Increasing numbers of studies suggest the importance of including prospective memory measures in the clinical evaluation of dementia, owing to their sensitivity and functional relevance. The Prospective and Retrospective Memory Questionnaire (PRMQ) is originally a self-rated memory inventory that offers a direct comparison between prospective and episodic memory. However, the informant's report has been recognized as a more valid source of cognitive complaints. We thus aimed to examine the validity of the informant-rated form of the PRMQ in assessing the memory function of patients and in detecting individuals with early dementia. The informants of 140 neurological outpatients with memory complaints completed the Taiwan version of the PRMQ. Tests of prospective memory, short-term memory, and general cognitive ability were also administered to non-demented participants and patients with early stages of Alzheimer's disease (AD). Results showed significant relationships between the PRMQ ratings and objective cognitive measures, and showed that higher ratings on the PRMQ were associated with increasing odds of greater dementia severity. Receiver operating characteristic (ROC) curves showed an adequate ability of the PRMQ to identify patients with dementia (93% sensitivity and 84% specificity). Hierarchical regression revealed that the PRMQ has additional explanatory power for dementia status after controlling for age, education and objective memory test results, and that the prospective memory subscale has predictive value for dementia beyond the retrospective memory subscale. The present study demonstrated the external validity and diagnostic value of informants' evaluation of their respective patients' prospective and retrospective memory functioning, and highlighted the important role of prospective memory in early dementia detection. 
The proxy-version of the PRMQ is a useful tool that captures prospective and episodic memory problems in patients with early AD, in combination with standardized cognitive testing. PMID:25383950
Fabrication of nanotweezers and their remote actuation by magnetic fields.
Iss, Cécile; Ortiz, Guillermo; Truong, Alain; Hou, Yanxia; Livache, Thierry; Calemczuk, Roberto; Sabon, Philippe; Gautier, Eric; Auffret, Stéphane; Buda-Prejbeanu, Liliana D; Strelkov, Nikita; Joisten, Hélène; Dieny, Bernard
2017-03-27
A new kind of nanodevice that acts like tweezers through remote actuation by an external magnetic field is designed. Such a device is meant to mechanically grab micrometric objects. The nanotweezers are built using a top-down approach and are made of two parallelepipedic microelements, at least one of them magnetic, bound by a flexible nanohinge. The presence of an external magnetic field induces a torque on the magnetic elements that competes with the elastic torque provided by the nanohinge. A model is established to evaluate the values of the balanced torques as a function of the tweezers' opening angles. The results of the calculations are compared with the expected values and validate the overall working principle of the magnetic nanotweezers.
A Functional Polymorphism of the MAOA Gene Modulates Spontaneous Brain Activity in Pons
Lei, Hui; Zhang, Xiaocui; Di, Xin; Rao, Hengyi; Ming, Qingsen; Zhang, Jibiao; Guo, Xiao; Jiang, Yali; Gao, Yidian; Yi, Jinyao; Zhu, Xiongzhao; Yao, Shuqiao
2014-01-01
Objective. To investigate the effects of a functional polymorphism of the monoamine oxidase A (MAOA) gene on spontaneous brain activity in healthy male adolescents. Methods. Thirty-one healthy male adolescents with the low-activity MAOA genotype (MAOA-L) and 25 healthy male adolescents with the high-activity MAOA genotype (MAOA-H) completed the 11-item Barratt Impulsiveness Scale (BIS-11) questionnaire and were subjected to resting-state functional magnetic resonance imaging (rs-fMRI) scans. The amplitude of low-frequency fluctuation (ALFF) of the blood oxygen level-dependent (BOLD) signal was calculated using REST software. ALFF data were related to BIS scores and compared between genotype groups. Results. Compared with the MAOA-H group, the MAOA-L group showed significantly lower ALFFs in the pons. There was a significant correlation between the BIS scores and the ALFF values in the pons for the MAOA-L group, but not for the MAOA-H group. Further regression analysis showed a significant genotype by ALFF values interaction effect on BIS scores. Conclusions. Lower spontaneous brain activity in the pons of MAOA-L male adolescents may provide a neural mechanism by which the MAOA-L genotype confers risk for impulsivity and aggression in boys. PMID:24971323
Kayataş, Semra; Özkaya, Enis; Api, Murat; Çıkman, Seyhan; Gürbüz, Ayşen; Eser, Ahmet
2017-01-01
Objective: The aim of the present study was to compare female sexual function between women who underwent conventional abdominal and laparoscopic hysterectomy. Materials and Methods: Seventy-seven women scheduled to undergo hysterectomy without oophorectomy for benign gynecologic conditions were included in the study. The women were assigned to laparoscopic or open abdominal hysterectomy according to the surgeon's preference. Women with endometriosis and symptomatic prolapse were excluded. Female sexual function scores were obtained from each participant before and six months after the operation using validated questionnaires. Results: Pre- and postoperative scores on three different questionnaires were comparable in the group that underwent laparoscopic hysterectomy (p>0.05). Scores were also comparable in the group that underwent laparotomic hysterectomy (p>0.05). Pre- and postoperative values compared between the two groups revealed similar results for all three scores (p>0.05). Conclusion: Our data showed comparable pre- and postoperative scores for the two hysterectomy techniques. The two groups were also found to have similar pre- and postoperative score values. PMID:28913149
NASA Astrophysics Data System (ADS)
Babier, Aaron; Boutilier, Justin J.; Sharpe, Michael B.; McNiven, Andrea L.; Chan, Timothy C. Y.
2018-05-01
We developed and evaluated a novel inverse optimization (IO) model to estimate objective function weights from clinical dose-volume histograms (DVHs). These weights were used to solve a treatment planning problem to generate ‘inverse plans’ that had similar DVHs to the original clinical DVHs. Our methodology was applied to 217 clinical head and neck cancer treatment plans that were previously delivered at Princess Margaret Cancer Centre in Canada. Inverse plan DVHs were compared to the clinical DVHs using objective function values, dose-volume differences, and frequency of clinical planning criteria satisfaction. Median differences between the clinical and inverse DVHs were within 1.1 Gy. For most structures, the difference in clinical planning criteria satisfaction between the clinical and inverse plans was at most 1.4%. For structures where the two plans differed by more than 1.4% in planning criteria satisfaction, the difference in average criterion violation was less than 0.5 Gy. Overall, the inverse plans were very similar to the clinical plans. Compared with a previous inverse optimization method from the literature, our new inverse plans typically satisfied the same or more clinical criteria, and had consistently lower fluence heterogeneity. Overall, this paper demonstrates that DVHs, which are essentially summary statistics, provide sufficient information to estimate objective function weights that result in high quality treatment plans. However, as with any summary statistic that compresses three-dimensional dose information, care must be taken to avoid generating plans with undesirable features such as hotspots; our computational results suggest that such undesirable spatial features were uncommon. Our IO-based approach can be integrated into the current clinical planning paradigm to better initialize the planning process and improve planning efficiency. 
It could also be embedded in a knowledge-based planning or adaptive radiation therapy framework to automatically generate a new plan given a predicted or updated target DVH, respectively.
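The core inverse-optimization idea above can be illustrated in miniature: given the objective values achieved by a clinical plan and by alternative plans, search for the weight vector under which the clinical plan looks most nearly optimal. This is a deliberately simplified sketch (a simplex grid search with invented function names), not the authors' IO model, which works with full dose-volume histograms:

```python
import itertools

def estimate_weights(clinical_obj, alternative_objs, grid=20):
    """Toy inverse optimization: scan a grid on the weight simplex and
    return the weights minimizing the clinical plan's weighted-objective
    gap to the best alternative plan (lower objectives are better)."""
    k = len(clinical_obj)
    best_w, best_gap = None, float("inf")
    for parts in itertools.product(range(grid + 1), repeat=k):
        if sum(parts) != grid:
            continue                      # keep only points on the simplex
        w = [p / grid for p in parts]
        clin = sum(wi * fi for wi, fi in zip(w, clinical_obj))
        best_alt = min(sum(wi * fi for wi, fi in zip(w, alt))
                       for alt in alternative_objs)
        gap = clin - best_alt             # suboptimality of the clinical plan
        if gap < best_gap:
            best_gap, best_w = gap, w
    return best_w, best_gap
```

For example, a clinical plan with objective values [1.0, 3.0] against alternatives [2.0, 1.0] and [3.0, 0.5] is best explained by putting all weight on the first objective, where it strictly dominates.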
Incorporating uncertainty and motion in Intensity Modulated Radiation Therapy treatment planning
NASA Astrophysics Data System (ADS)
Martin, Benjamin Charles
In radiation therapy, one seeks to destroy a tumor while minimizing the damage to surrounding healthy tissue. Intensity Modulated Radiation Therapy (IMRT) uses overlapping beams of x-rays that add up to a high dose within the target and a lower dose in the surrounding healthy tissue. IMRT relies on optimization techniques to create high-quality treatments. Unfortunately, the achievable conformality is limited by the need to ensure coverage even if there is organ movement or deformation. Currently, margins are added around the tumor to ensure coverage based on an assumed motion range. This approach does not guarantee high-quality treatments. In the standard IMRT optimization problem, an objective function measures the deviation of the dose from the clinical goals. The optimization then finds the beamlet intensities that minimize the objective function. When modeling uncertainty, the dose delivered from a given set of beamlet intensities is a random variable, and thus the objective function is also a random variable. In our stochastic formulation we minimize the expected value of this objective function. We developed a problem formulation that is both flexible and fast enough for use on real clinical cases. While working on accelerating the stochastic optimization, we developed a technique of voxel sampling: a randomized-algorithms approach to steepest descent that estimates the gradient by calculating the dose to only a fraction of the voxels within the patient. When combined with an automatic sampling-rate adaptation technique, voxel sampling produced an order-of-magnitude speed-up in IMRT optimization. We also developed extensions of our results to Intensity Modulated Proton Therapy (IMPT). Due to the physics of proton beams, the stochastic formulation yields visibly different and better plans than standard optimization.
The results of our research have been incorporated into a software package OPT4D, which is an IMRT and IMPT optimization tool that we developed.
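The voxel-sampling idea can be sketched as follows: estimate the gradient of a quadratic dose objective from a random subset of voxels, rescaled by the sampling fraction so the estimate stays unbiased in expectation. A hypothetical minimal version (the quadratic objective and all names are assumptions, not the OPT4D code):

```python
import random

def sampled_gradient(intensities, dose_matrix, prescribed,
                     sample_frac=0.1, rng=random):
    """Estimate the gradient of sum_v (dose_v - prescribed_v)^2 with
    respect to beamlet intensities, using only a random sample of voxels.
    dose_matrix[v][b] is the dose to voxel v per unit intensity of beamlet b."""
    n_voxels = len(dose_matrix)
    n_beamlets = len(intensities)
    sampled = [v for v in range(n_voxels) if rng.random() < sample_frac]
    grad = [0.0] * n_beamlets
    for v in sampled:
        dose_v = sum(dose_matrix[v][b] * intensities[b]
                     for b in range(n_beamlets))
        # rescale by 1/sample_frac so the estimator is unbiased
        resid = 2.0 * (dose_v - prescribed[v]) / sample_frac
        for b in range(n_beamlets):
            grad[b] += resid * dose_matrix[v][b]
    return grad
```

With `sample_frac=1.0` the estimate reduces to the exact gradient; smaller fractions trade accuracy per step for much cheaper steps, which is where the reported speed-up comes from.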
Ruhdorfer, Anja; Wirth, Wolfgang; Eckstein, Felix
2014-01-01
Objective To determine the relationship between thigh muscle strength and clinically relevant differences in self-assessed lower limb function. Methods Isometric knee extensor and flexor strength of 4553 Osteoarthritis Initiative participants (2651 women/1902 men) was related to Western Ontario and McMaster Universities (WOMAC) physical function scores by linear regression. Further, male and female participant strata differing by the minimal clinically important difference (MCID) in WOMAC function scores (6/68) were compared across the full range of observed values, and to participants without functional deficits (WOMAC=0). The effect of WOMAC knee pain and body mass index on the above relationships was explored using stepwise regression. Results Per the regression equations, a 3.7% reduction in extensor and a 4.0% reduction in flexor strength were associated with an MCID in WOMAC function in women, and a 3.6%/4.8% reduction in men. For strength divided by body weight, the reductions were 5.2%/6.7% in women and 5.8%/6.7% in men. Comparing MCID strata across the full observed range of WOMAC function confirmed the above estimates and did not suggest non-linear relationships across the spectrum of observed values. WOMAC pain correlated strongly with WOMAC function, but extensor (and flexor) muscle strength contributed significant independent information. Conclusion Reductions of approximately 4% in isometric muscle strength and of 6% in strength/weight were related to a clinically relevant difference in WOMAC functional disability. Longitudinal studies will need to confirm these relationships within persons. Muscle extensor (and flexor) strength (per body weight) provided significant independent information in addition to pain in explaining variability in lower limb function. PMID:25303012
A neural model of valuation and information virality
Baek, Elisa C.; O’Donnell, Matthew Brook; Kim, Hyun Suk; Cappella, Joseph N.
2017-01-01
Information sharing is an integral part of human interaction that serves to build social relationships and affects attitudes and behaviors in individuals and large groups. We present a unifying neurocognitive framework of mechanisms underlying information sharing at scale (virality). We argue that expectations regarding self-related and social consequences of sharing (e.g., in the form of potential for self-enhancement or social approval) are integrated into a domain-general value signal that encodes the value of sharing a piece of information. This value signal translates into population-level virality. In two studies (n = 41 and 39 participants), we tested these hypotheses using functional neuroimaging. Neural activity in response to 80 New York Times articles was observed in theory-driven regions of interest associated with value, self, and social cognitions. This activity then was linked to objectively logged population-level data encompassing n = 117,611 internet shares of the articles. In both studies, activity in neural regions associated with self-related and social cognition was indirectly related to population-level sharing through increased neural activation in the brain's value system. Neural activity further predicted population-level outcomes over and above the variance explained by article characteristics and commonly used self-report measures of sharing intentions. This parsimonious framework may help advance theory, improve predictive models, and inform new approaches to effective intervention. More broadly, these data shed light on the core functions of sharing—to express ourselves in positive ways and to strengthen our social bonds. PMID:28242678
Conceptual model of consumer’s willingness to eat functional foods
Babicz-Zielinska, Ewa; Jezewska-Zychowicz, Maria
Functional foods constitute an important segment of the food market. Among the factors that determine intentions to eat functional foods, psychological factors play a very important role; motives, attitudes and personality are key. The relationships between socio-demographic characteristics, attitudes and willingness to purchase functional foods have not been fully confirmed. Consumers' beliefs about the health benefits of the foods they eat seem to be a strong determinant of the choice of functional foods. The objective of this study was to determine the relations between familiarity, attitudes, and beliefs about the benefits and risks of functional foods, and to develop conceptual models of willingness to eat them. The sample of Polish consumers comprised 1002 subjects aged 15+. Foods enriched with vitamins or minerals, and cholesterol-lowering margarines or drinks, were considered. A questionnaire addressing familiarity with the foods, attitudes, and beliefs about the benefits and risks of their consumption was constructed. Pearson's correlations and linear regression equations were calculated. The strongest relations appeared between attitudes and both high health value and high benefits (r = 0.722 and 0.712 for enriched foods, and 0.664 and 0.693 for cholesterol-lowering foods), and between high health value and high benefits (0.814 for enriched foods and 0.758 for cholesterol-lowering foods). Conceptual models based on linear regressions of the relations between attitudes and all other variables, with and without familiarity with the foods, were developed. Positive attitudes and declared consumption are more important for enriched foods. Beliefs in high health value and high benefits play the most important role in purchase decisions. The interrelations between the different variables may be described by new linear regression models, with beliefs in high benefits, positive attitudes and familiarity being the most significant predictors.
Health expectations and trust in functional foods are the key factors in their choice.
Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou
2012-01-01
Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP faces two difficult obstacles: the inaccurate-energy-function problem and the searching problem. Even if the lowest energy is luckily found by the searching procedure, the correct protein structure is not guaranteed to be obtained. A general parallel metaheuristic approach is presented to tackle these two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of the heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligence of parallel ant colonies with Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. This parallel approach combines various sources of both searching intelligence and energy functions, and thus predicts protein conformations whose quality is jointly determined by all the parallel searching threads and energy functions. It provides a framework for combining the different searching intelligence embedded in heuristic algorithms. It also constructs a container for hybridizing different not-so-accurate objective functions, which are usually derived from domain expertise.
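The Monte Carlo Metropolis component of such a search can be sketched as below. The 1-D toy energies and all names are illustrative assumptions, not the paper's PSP energy functions; the point is that each "thread" minimizes its own, possibly inaccurate, energy:

```python
import math
import random

def metropolis_accept(delta_energy, temperature, rng=random):
    """Metropolis criterion: always accept a downhill move, otherwise
    accept with probability exp(-dE / T)."""
    if delta_energy <= 0:
        return True
    return rng.random() < math.exp(-delta_energy / temperature)

def anneal(energy, start, steps=2000, temperature=1.0, rng=random):
    """Minimize a 1-D energy function with Metropolis moves,
    tracking the best state visited."""
    x, best = start, start
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)
        if metropolis_accept(energy(cand) - energy(x), temperature, rng):
            x = cand
            if energy(x) < energy(best):
                best = x
    return best

# Two "threads", each guided by its own energy function (both happen to
# have their minimum at x = 2), echoing the multi-energy parallel scheme.
rng = random.Random(0)
energies = [lambda x: (x - 2) ** 2, lambda x: abs(x - 2) + 0.1 * x * x]
results = [anneal(e, start=0.0, rng=rng) for e in energies]
```

In the real method, the threads run concurrently with perturbed parameters and exchange information; here they simply run one after the other.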
Chun, Kwang-Soo; Lee, Yong-Taek; Park, Jong-Wan; Lee, Joon-Youn; Park, Chul-Hyun
2016-01-01
Objective To compare diffusion tensor tractography (DTT) and motor evoked potentials (MEPs) for estimation of clinical status in patients in the subacute stage of stroke. Methods Patients with hemiplegia due to stroke who were evaluated using both DTT and MEPs between May 2012 and April 2015 were recruited. Clinical assessments investigated upper extremity motor and functional status. Motor status was evaluated using Medical Research Council grading and the Fugl-Meyer Assessment of upper limb and hand (FMA-U and FMA-H). Functional status was measured using the Modified Barthel Index (MBI). Patients were classified into subgroups according to DTT findings, MEP presence, fractional anisotropy (FA) value, FA ratio (rFA), and central motor conduction time (CMCT). Correlations of clinical assessments with DTT parameters and MEPs were estimated. Results Fifty-five patients with hemiplegia were recruited. In motor assessments (FMA-U), MEPs had the highest sensitivity and negative predictive value (NPV) as well as the second highest specificity and positive predictive value (PPV). CMCT showed the highest specificity and PPV. Regarding functional status (MBI), FA showed the highest sensitivity and NPV, whereas CMCT had the highest specificity and PPV. Correlation analysis showed that the resting motor threshold (RMT) ratio was strongly associated with motor status of the upper limb, and MEP parameters were not associated with MBI. Conclusion DTT and MEPs could be suitable complementary modalities for analyzing the motor and functional status of patients in the subacute stage of stroke. The RMT ratio was strongly correlated with motor status. PMID:26949679
'Proactive' use of cue-context congruence for building reinforcement learning's reward function.
Zsuga, Judit; Biro, Klara; Tajti, Gabor; Szilasi, Magdolna Emma; Papp, Csaba; Juhasz, Bela; Gesztelyi, Rudolf
2016-10-28
Reinforcement learning is a fundamental form of learning that may be formalized using the Bellman equation. Accordingly, an agent determines the state value as the sum of the immediate reward and the discounted value of future states. Thus the value of a state is determined by agent-related attributes (action set, policy, discount factor) and by the agent's knowledge of the environment, embodied by the reward function and by hidden environmental factors given by the transition probability. The central objective of reinforcement learning is to solve for these two functions outside the agent's control, either with or without a model. In the present paper, using the proactive model of reinforcement learning we offer insight into how the brain creates simplified representations of the environment, and how these representations are organized to support the identification of relevant stimuli and actions. Furthermore, we identify neurobiological correlates of our model by suggesting that the reward and policy functions, attributes of the Bellman equation, are built by the orbitofrontal cortex (OFC) and the anterior cingulate cortex (ACC), respectively. Based on this we propose that the OFC assesses cue-context congruence to activate the most relevant context frame. Furthermore, given the bidirectional neuroanatomical link between the OFC and model-free structures, we suggest that model-based input is incorporated into the reward prediction error (RPE) signal, and conversely that the RPE signal may be used to update the reward-related information of context frames and the policy underlying action selection in the OFC and ACC, respectively. Finally, clinical implications for cognitive behavioral interventions are discussed.
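The Bellman equation referred to above can be solved numerically, for example by value iteration, which repeatedly applies V(s) = max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V(s')]. A minimal sketch on a two-state toy problem (the MDP is invented purely for illustration):

```python
def value_iteration(n_states, actions, reward, transition, gamma=0.9, sweeps=100):
    """Solve the Bellman optimality equation by repeated sweeps.
    transition(s, a) returns a list of (next_state, probability) pairs."""
    V = [0.0] * n_states
    for _ in range(sweeps):
        V = [max(reward(s, a) + gamma * sum(p * V[s2]
                                            for s2, p in transition(s, a))
                 for a in actions)
             for s in range(n_states)]
    return V

# Toy environment: state 1 pays reward 1; "go" moves any state to 1.
def reward(s, a):
    return 1.0 if s == 1 else 0.0

def transition(s, a):
    return [(1, 1.0)] if a == "go" else [(s, 1.0)]

V = value_iteration(2, ["stay", "go"], reward, transition)
```

With gamma = 0.9 the value of the rewarding state converges to 1/(1-0.9) = 10, and the value of the other state to 0.9 * 10 = 9, since one "go" step reaches the reward.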
Pulmonary function in adolescents with ataxia telangiectasia.
McGrath-Morrow, Sharon; Lefton-Greif, Maureen; Rosquist, Karen; Crawford, Thomas; Kelly, Amber; Zeitlin, Pamela; Carson, Kathryn A; Lederman, Howard M
2008-01-01
Pulmonary complications are common in adolescents with ataxia telangiectasia (A-T), however objective measurements of lung function may be difficult to obtain because of underlying bulbar weakness, tremors, and difficulty coordinating voluntary respiratory maneuvers. To increase the reliability of pulmonary testing, minor adjustments were made to stabilize the head and to minimize leaks in the system. Fifteen A-T adolescents completed lung volume measurements by helium dilution. To assess for reproducibility of spirometry testing, 10 A-T adolescents performed spirometry on three separate occasions. Total lung capacity (TLC) was normal or just mildly decreased in 12/15 adolescents tested. TLC correlated positively with functional residual capacity (FRC), a measurement independent of patient effort (R2=0.71). The majority of individuals had residual volumes (RV) greater than 120% predicted (10/15) and slow vital capacities (VC) less than 70% predicted (9/15). By spirometry, forced vital capacity (FVC) and forced expiratory volume in 1 sec (FEV1) values were reproducible in the 10 individuals who underwent testing on three separate occasions (R=0.97 and 0.96, respectively). Seven of the 10 adolescents had FEV1/FVC ratios>90%. Lung volume measurements from A-T adolescents revealed near normal TLC values with increased RV and decreased VC values. These findings indicate a decreased ability to expire to residual volume rather than a restrictive defect. Spirometry was also found to be reproducible in A-T adolescents, suggesting that spirometry testing may be useful for tracking changes in pulmonary function over time in this population. Copyright (c) 2007 Wiley-Liss, Inc.
The influence of contextual reward statistics on risk preference
Rigoli, Francesco; Rutledge, Robb B.; Dayan, Peter; Dolan, Raymond J.
2016-01-01
Decision theories mandate that organisms should adjust their behaviour in the light of the contextual reward statistics. We tested this notion using a gambling choice task involving distinct contexts with different reward distributions. The best fitting model of subjects' behaviour indicated that the subjective values of options depended on several factors, including a baseline gambling propensity, a gambling preference dependent on reward amount, and a contextual reward adaptation factor. Combining this behavioural model with simultaneous functional magnetic resonance imaging we probed neural responses in three key regions linked to reward and value, namely ventral tegmental area/substantia nigra (VTA/SN), ventromedial prefrontal cortex (vmPFC) and ventral striatum (VST). We show that activity in the VTA/SN reflected contextual reward statistics to the extent that context affected behaviour, activity in the vmPFC represented a value difference between chosen and unchosen options while VST responses reflected a non-linear mapping between the actual objective rewards and their subjective value. The findings highlight a multifaceted basis for choice behaviour with distinct mappings between components of this behaviour and value sensitive brain regions. PMID:26707890
Encoding of marginal utility across time in the human brain.
Pine, Alex; Seymour, Ben; Roiser, Jonathan P; Bossaerts, Peter; Friston, Karl J; Curran, H Valerie; Dolan, Raymond J
2009-07-29
Marginal utility theory prescribes the relationship between the objective property of the magnitude of rewards and their subjective value. Despite its pervasive influence, however, there is remarkably little direct empirical evidence for such a theory of value, let alone of its neurobiological basis. We show that human preferences in an intertemporal choice task are best described by a model that integrates marginally diminishing utility with temporal discounting. Using functional magnetic resonance imaging, we show that activity in the dorsal striatum encodes both the marginal utility of rewards, over and above that which can be described by their magnitude alone, and the discounting associated with increasing time. In addition, our data show that dorsal striatum may be involved in integrating subjective valuation systems inherent to time and magnitude, thereby providing an overall metric of value used to guide choice behavior. Furthermore, during choice, we show that anterior cingulate activity correlates with the degree of difficulty associated with dissonance between value and time. Our data support an integrative architecture for decision making, revealing the neural representation of distinct subcomponents of value that may contribute to impulsivity and decisiveness.
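The behavioural model described, marginally diminishing utility combined with temporal discounting, can be sketched with one common parameterization: an exponential-saturation utility and hyperbolic discounting. The exact functional forms and parameter values below are assumptions for illustration, not necessarily those fitted in the paper:

```python
import math

def utility(magnitude, r=0.05):
    """Concave utility with marginally diminishing returns:
    U(M) = (1 - exp(-r * M)) / r, approximately linear for small M."""
    return (1.0 - math.exp(-r * magnitude)) / r

def discounted_utility(magnitude, delay, r=0.05, k=0.1):
    """Subjective value of a delayed reward: utility of the amount,
    hyperbolically discounted, V = U(M) / (1 + k * D)."""
    return utility(magnitude, r) / (1.0 + k * delay)
```

The key qualitative predictions: doubling the reward magnitude less than doubles its utility, and any positive delay reduces subjective value.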
Reasoning about Function Objects
NASA Astrophysics Data System (ADS)
Nordio, Martin; Calcagno, Cristiano; Meyer, Bertrand; Müller, Peter; Tschannen, Julian
Modern object-oriented languages support higher-order implementations through function objects such as delegates in C#, agents in Eiffel, or closures in Scala. Function objects bring a new level of abstraction to the object-oriented programming model, and require a comparable extension to specification and verification techniques. We introduce a verification methodology that extends function objects with auxiliary side-effect free (pure) methods to model logical artifacts: preconditions, postconditions and modifies clauses. These pure methods can be used to specify client code abstractly, that is, independently from specific instantiations of the function objects. To demonstrate the feasibility of our approach, we have implemented an automatic prover, which verifies several non-trivial examples.
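The idea of equipping function objects with side-effect-free specification methods can be sketched in Python. The paper targets languages such as C#, Eiffel and Scala with static verification by an automatic prover; this runtime-checked version is only an analogy, and all names are invented:

```python
class ContractedFunction:
    """A function object carrying pure precondition and postcondition
    methods, so client code can be specified against the contract
    rather than against a specific instantiation of the function object."""

    def __init__(self, body, precondition, postcondition):
        self.body = body
        self.precondition = precondition      # pure: args -> bool
        self.postcondition = postcondition    # pure: (args, result) -> bool

    def __call__(self, *args):
        assert self.precondition(*args), "precondition violated"
        result = self.body(*args)
        assert self.postcondition(args, result), "postcondition violated"
        return result

# A square-root function object: requires x >= 0, ensures r*r == x.
safe_sqrt = ContractedFunction(
    body=lambda x: x ** 0.5,
    precondition=lambda x: x >= 0,
    postcondition=lambda args, r: abs(r * r - args[0]) < 1e-9,
)
```

A client that only relies on `precondition` and `postcondition` remains correct for any body satisfying the same contract, which is the abstraction the methodology exploits.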
Conflict between object structural and functional affordances in peripersonal space.
Kalénine, Solène; Wamain, Yannick; Decroix, Jérémy; Coello, Yann
2016-10-01
Recent studies indicate that competition between conflicting action representations slows down the planning of object-directed actions. The present study aims to assess whether similar conflict effects exist during manipulable-object perception. Twenty-six young adults performed reach-to-grasp and semantic judgments on conflictual objects (with competing structural and functional gestures) and non-conflictual objects (with similar structural and functional gestures) presented at different distances in a 3D virtual environment. Results highlight a space-dependent conflict between structural and functional affordances. Perceptual judgments on conflictual objects were slower than perceptual judgments on non-conflictual objects, but only when objects were presented within reach. Findings demonstrate that competition between structural and functional affordances during object perception induces a processing cost, and further show that object position in space can bias affordance competition. Copyright © 2016 Elsevier B.V. All rights reserved.
UCODE, a computer code for universal inverse modeling
Poeter, E.P.; Hill, M.C.
1999-01-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced. Simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. 
UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and in Fortran90, which efficiently performs numerical calculations.
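In miniature, the regression scheme described above, weighted least squares solved by Gauss-Newton with finite-difference sensitivities, looks like this for a single parameter. This is a sketch under simplifying assumptions (one parameter, forward differences, no damping), not UCODE itself:

```python
def gauss_newton_1p(simulate, observed, weights, p0, steps=25, fd=1e-6):
    """Minimize S(p) = sum_i w_i * (obs_i - sim_i(p))^2 for a scalar
    parameter p. Sensitivities dsim/dp are approximated by a forward
    difference, echoing UCODE's approach in one dimension."""
    p = p0
    for _ in range(steps):
        sim = simulate(p)
        simf = simulate(p + fd)
        sens = [(sf - s) / fd for sf, s in zip(simf, sim)]   # dsim_i/dp
        # Scalar normal equation: (X^T W X) dp = X^T W r
        num = sum(w * x * (o - s)
                  for w, x, o, s in zip(weights, sens, observed, sim))
        den = sum(w * x * x for w, x in zip(weights, sens))
        if den == 0:
            break
        p += num / den            # Gauss-Newton update
    return p

# A linear "application model" dose of observations generated with p = 2.5
estimate = gauss_newton_1p(lambda p: [p * t for t in (1.0, 2.0, 3.0)],
                           [2.5, 5.0, 7.5], [1.0, 1.0, 1.0], 0.0)
```

For a linear model the method converges in a single iteration; nonlinear models need the iteration, and real codes add damping and convergence criteria omitted here.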
A case study of assigning conservation value to dispersed habitat units for conservation planning
Rohweder, Jason J.; Sara C. Vacek,; Crimmins, Shawn M.; Thogmartin, Wayne E.
2015-01-01
Resource managers are increasingly tasked with developing habitat conservation plans in the face of numerous, sometimes competing, objectives. These plans must often be implemented across dispersed habitat conservation units that may contribute unequally to overall conservation objectives. Using U.S. Fish and Wildlife Service waterfowl production areas (WPA) in western Minnesota as our conservation landscape, we develop a landscape-scale approach for evaluating the conservation value of dispersed habitat conservation units with multiple conservation priorities. We evaluated conservation value based on a suite of variables directly applicable to conservation management practices, thus providing a direct link between conservation actions and outcomes. We developed spatial models specific to each of these conservation objectives and also developed two freely available prioritization tools to implement these analyses. We found that some WPAs provided high conservation value across a range of conservation objectives, suggesting that managing these specific areas would achieve multiple conservation goals. Conversely, other WPAs provided low conservation value for some objectives, suggesting they would be most effectively managed for a distinct set of specific conservation goals. Approaches such as ours provide a direct means of assessing the conservation value of dispersed habitat conservation units and could be useful in the development of habitat management plans, particularly when faced with multiple conservation objectives.
NASA Astrophysics Data System (ADS)
Fangyu, Fu; Yu, Cao
2017-05-01
This paper takes Caiyuan Village in Jingmen City, Hubei Province, as its research object and analyzes the production, living and ecological functions of rural buildings, together with their inherent “3F-in-1” mechanism, from a local perspective. Based on a conceptual analysis of placeality and “3F-in-1”, the paper clarifies the relationships among the values of the living, production and ecological functions so as to analyze the “3F-in-1” mode of rural architecture with placeality. On this basis, the thesis puts forward a strategy of sustainable spatial transformation: (1) preserve the traditional overall spatial structure of villages, (2) improve the adaptability and function of rural architecture, (3) extend rural social culture, and (4) pay attention to local perception, with a view to exploring an organic system design method for the expression of placeality and the sustainable development of a beautiful countryside.
NASA Astrophysics Data System (ADS)
Obracaj, Piotr; Fabianowski, Dariusz
2017-10-01
Adaptations of historic facilities for public utility purposes entail solving many complex, often conflicting expectations of future users. This mainly concerns the function, which encompasses construction, technology and aesthetic issues. The list of issues is completed by the proper protection of historic values, different in each case. The procedure leading to the expected solution is a multicriteria one, usually difficult to define accurately and requiring considerable design experience. An innovative approach was used for the analysis, namely the modified EA FAHP (Extent Analysis Fuzzy Analytic Hierarchy Process) method of Chang for the multicriteria assessment of complex functional and spatial issues. The selection of the optimal spatial form of an adapted historic building intended as a multi-functional public utility facility was analysed. The assumed functional flexibility covered education, conferences, and chamber performances such as drama and concerts, in different stage-audience layouts.
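Chang's extent-analysis FAHP reduces a matrix of triangular fuzzy pairwise comparisons to crisp criterion weights. A sketch of the standard (unmodified) method follows; the paper uses a modified variant, so this is only the baseline idea:

```python
def synthetic_extents(fuzzy_matrix):
    """Step 1 of Chang's method: S_i = (row sum) (x) (total sum)^-1,
    where each matrix entry is a triangular fuzzy number (l, m, u)."""
    rows = [tuple(sum(c[k] for c in row) for k in range(3))
            for row in fuzzy_matrix]
    total = tuple(sum(r[k] for r in rows) for k in range(3))
    # The inverse of a TFN (l, m, u) is (1/u, 1/m, 1/l).
    return [(r[0] / total[2], r[1] / total[1], r[2] / total[0]) for r in rows]

def possibility(m2, m1):
    """Degree of possibility V(M2 >= M1) for triangular fuzzy numbers."""
    l1, mid1, u1 = m1
    l2, mid2, u2 = m2
    if mid2 >= mid1:
        return 1.0
    if l1 >= u2:
        return 0.0
    return (l1 - u2) / ((mid2 - u2) - (mid1 - l1))

def crisp_weights(fuzzy_matrix):
    """Steps 2-3: d_i = min_j V(S_i >= S_j), normalized to sum to 1."""
    S = synthetic_extents(fuzzy_matrix)
    d = [min(possibility(S[i], S[j]) for j in range(len(S)) if j != i)
         for i in range(len(S))]
    total = sum(d)
    return [x / total for x in d]

# Two criteria, the first judged "weakly more important" (TFN (1, 2, 3)).
weights = crisp_weights([[(1, 1, 1), (1, 2, 3)],
                         [(1 / 3, 0.5, 1), (1, 1, 1)]])
```

The method's known quirk, that a criterion can receive weight exactly zero, is one motivation for modified variants such as the one the paper applies.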
Dissipation function and adaptive gradient reconstruction based smoke detection in video
NASA Astrophysics Data System (ADS)
Li, Bin; Zhang, Qiang; Shi, Chunlei
2017-11-01
A method for smoke detection in video is proposed. The camera monitoring the scene is assumed to be stationary. In the atmospheric scattering model, the dissipation function describes the transmissivity between the background objects in the scene and the camera. The dark channel prior and a fast bilateral filter are used to estimate the dissipation function, which depends only on the depth of field. Based on the dissipation function, the visual background extractor (ViBe) can detect smoke through its motion characteristics, as well as other moving targets. Since smoke has semi-transparent parts, the regions covered by these parts can be recovered adaptively by solving a Poisson equation. The similarity between the recovered regions and the original background regions at the same positions is calculated by Normalized Cross Correlation (NCC), with the original background values taken from the frame nearest to the current frame. The regions with high similarity are considered to be smoke.
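The NCC comparison between a recovered region and the stored background region can be sketched in a few lines. This is a minimal illustration only; the function name and patch handling are mine, not the paper's:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two same-sized image patches.

    Returns a value in [-1, 1]; values near 1 indicate the recovered
    region closely matches the background, suggesting semi-transparent smoke.
    """
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    if denom == 0:  # constant patch: correlation undefined, return 0
        return 0.0
    return float((a * b).sum() / denom)
```

A patch compared against itself yields 1.0, and against its negation yields -1.0, which is the usual sanity check for this measure.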
NASA Astrophysics Data System (ADS)
Enescu (Balaş), M. L.; Alexandru, C.
2016-08-01
The paper deals with the optimal design of the control system for a 6-DOF robot used in thin-layer deposition. The optimization is based on a parametric technique: the design objective is modelled as a numerical function, and the optimal values of the design variables are then established so as to minimize this objective function. The robotic system is a mechatronic product that integrates the mechanical device and the controlled operating device. The mechanical device of the robot was designed in the CAD (Computer Aided Design) software CATIA, and the 3D model was then transferred to the MBS (Multi-Body Systems) environment ADAMS/View. The control system was developed under the concurrent engineering concept, through integration with the MBS mechanical model, using the DFC (Design for Control) software solution EASY5. The angular motions required in the six joints of the robot to obtain the imposed trajectory of the end-effector were established by inverse kinematic analysis. The positioning error in each joint of the robot is used as the design objective, the optimization goal being to minimize the root mean square of the error during simulation, a measure of the magnitude of the varying positioning error.
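The root-mean-square objective described above is straightforward to state. A minimal sketch follows; the function name and sampling assumptions are illustrative and not tied to the ADAMS/EASY5 setup:

```python
import numpy as np

def rms_error(error_samples):
    """Root mean square of a sampled joint positioning-error signal.

    This is the scalar objective a parametric optimizer would minimize
    for each joint during simulation.
    """
    e = np.asarray(error_samples, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))
```

For example, an error trace of [1.0, -1.0] has an RMS of exactly 1.0, while [3.0, 4.0] gives sqrt(12.5).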
NASA Astrophysics Data System (ADS)
Zoratti, Paul K.; Gilbert, R. Kent; Majewski, Ronald; Ference, Jack
1995-12-01
Development of automotive collision warning systems has progressed rapidly over the past several years. A key enabling technology for these systems is millimeter-wave radar. This paper addresses a very critical millimeter-wave sensing issue for automotive radar, namely the scattering characteristics of common roadway objects such as vehicles, road signs, and bridge overpass structures. The data presented in this paper were collected on ERIM's Fine Resolution Radar Imaging Rotary Platform Facility and processed with ERIM's image processing tools. The value of this approach is that it provides system developers with a 2D radar image from which information about individual point scatterers within a single target can be extracted. This information on scattering characteristics will be utilized to refine threat assessment processing algorithms and automotive radar hardware configurations. (1) By evaluating the scattering characteristics identified in the radar image, radar signatures as a function of aspect angle can be established for common roadway objects. These signatures will aid in the refinement of threat assessment processing algorithms. (2) Utilizing ERIM's image manipulation tools, total RCS and RCS as a function of range and azimuth can be extracted from the radar image data. This RCS information will be essential in defining the operational envelope (e.g. dynamic range) within which any radar sensor hardware must be designed.
Mirror neurons encode the subjective value of an observed action.
Caggiano, Vittorio; Fogassi, Leonardo; Rizzolatti, Giacomo; Casile, Antonino; Giese, Martin A; Thier, Peter
2012-07-17
Objects grasped by an agent have a value not only for the acting agent, but also for an individual observing the grasping act. The value that the observer attributes to the grasped object can be pivotal for selecting a possible behavioral response. Mirror neurons in area F5 of the monkey premotor cortex have been suggested to play a crucial role in the understanding of action goals. However, whether these neurons are also involved in representing the value of the grasped object has not been addressed. Here we report that observation-related neuronal responses of F5 mirror neurons are indeed modulated by the value that the monkey associates with the grasped object. These findings suggest that during action observation F5 mirror neurons have access to key information needed to shape the behavioral responses of the observer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balian, R., E-mail: roger.balian@cea.fr; Vénéroni, M.
Time-dependent expectation values and correlation functions for many-body quantum systems are evaluated by means of a unified variational principle. It optimizes a generating functional depending on sources associated with the observables of interest. It is built by imposing, through Lagrange multipliers, constraints that account for the initial state (at equilibrium or off equilibrium) and for the backward Heisenberg evolution of the observables. The trial objects are respectively akin to a density operator and to an operator involving the observables of interest and the sources. We work out here the case where the trial spaces constitute Lie groups. This choice reduces the original degrees of freedom to those of the underlying Lie algebra, consisting of simple observables; the resulting objects are labeled by the indices of a basis of this algebra. Explicit results are obtained by expanding in powers of the sources. Zeroth and first orders provide thermodynamic quantities and expectation values in the form of mean-field approximations, with dynamical equations having a classical Lie–Poisson structure. At second order, the variational expression for two-time correlation functions separates, as does its exact counterpart, the approximate dynamics of the observables from the approximate correlations in the initial state. Two building blocks are involved: (i) a commutation matrix which stems from the structure constants of the Lie algebra; and (ii) the second-derivative matrix of a free-energy function. The diagonalization of both matrices, required for practical calculations, is worked out in a way analogous to the standard RPA. The ensuing structure of the variational formulae is the same as for a system of non-interacting bosons (or of harmonic oscillators) plus, at non-zero temperature, classical Gaussian variables. This property is explained by mapping the original Lie algebra onto a simpler Lie algebra.
The results, valid for any trial Lie group, fulfill consistency properties and encompass several special cases: linear responses, static and time-dependent fluctuations, zero- and high-temperature limits, static and dynamic stability of small deviations.
NASA Astrophysics Data System (ADS)
Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.
2015-12-01
We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. 
Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
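The randomized trace estimation used in the OED objective is commonly a Hutchinson-type estimator, which needs only matrix-vector products with the covariance operator (each product costing a pair of PDE solves in the paper's setting). A minimal sketch under that assumption, with names of my choosing:

```python
import numpy as np

def randomized_trace(apply_op, n, num_samples=100, seed=0):
    """Hutchinson estimator of tr(A) for a linear operator of size n.

    For Rademacher vectors z (entries +/-1), E[z^T A z] = tr(A), so the
    sample mean of z^T (A z) estimates the trace. `apply_op(z)` returns
    A @ z; A itself never needs to be formed explicitly.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += z @ apply_op(z)
    return total / num_samples
```

For a diagonal operator every probe gives the trace exactly, since z_i^2 = 1; for general operators the estimate converges as the number of samples grows.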
Quantifying Groundwater Model Uncertainty
NASA Astrophysics Data System (ADS)
Hill, M. C.; Poeter, E.; Foglia, L.
2007-12-01
Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. 
This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
Values Education as Perceived by Social Studies Teachers in Objective and Practice Dimensions
ERIC Educational Resources Information Center
Katilmis, Ahmet
2017-01-01
The purpose of this study was to reveal the objectives of values education in Turkey, values education-related activities performed in schools, and preferred approaches to values education according to the opinions of social studies teachers. This qualitative study used a phenomenological pattern. The participants of the study were selected from…
A New Glaucoma Severity Score Combining Structural and Functional Defects.
Wachtl, J; Töteberg-Harms, M; Frimmel, S; Kniestedt, C
2017-04-01
Background In order to assess glaucoma severity and to compare the success of surgical and medical therapy and study outcomes, an objective and independent staging tool is necessary. A combination of information from both structural and functional testing is probably the best approach to stage glaucomatous damage. There has been no universally accepted standard for glaucoma staging. The aim of this study was to develop a Glaucoma Severity Score (GSS) for objective assessment of a patient's glaucoma severity, combining both functional and structural information. Materials and methods The Glaucoma Severity Score includes the following 3 criteria: superior and inferior Retinal Nerve Fibre Layer (RNFL) thickness, perimetric mean defect (MD), and agreement of anatomical and perimetric defects, as assessed by two glaucoma specialists. The specialists defined a staging tool for each of the 3 criteria in a consensus process, assigning specific characteristics to a scale value between 0 and 2 or 0 and 3, respectively. The GSS ranges between 0 and 10 points. In a prospective observational study, the data of 112 glaucoma patients were assessed independently by the two specialists according to this staging tool. Results The GSS was applied to 112 eyes and patients (59.8 % female) with a mean age of 66.3 ± 13.1 years. Mean GSS was 4.73 points. Cohen's kappa coefficient was determined to measure inter-rater agreement between glaucoma specialists for the third criterion. With κ = 0.83, the agreement was very good. Thus, all 3 criteria of the GSS may be regarded as objective. Conclusions The Glaucoma Severity Score is an objective tool, combining both structural and functional characteristics, and permitting comparison of different patients, populations and studies. The Glaucoma Severity Score has proven effective in the objective assessment of 112 glaucoma patients and is relatively user-friendly in clinical practice. 
A comparative study of the GSS with the results of the FORUM® Glaucoma Workplace (Carl Zeiss Meditec AG, Jena, Germany) will be the next step. If outcomes match, the Glaucoma Severity Score can be accepted as a promising tool to stage glaucoma and monitor changes objectively in patients when comparing glaucoma progression in study analyses. Georg Thieme Verlag KG Stuttgart · New York.
Parmar, Sanjay; Gandhi, Dorcas BC; Rempel, Gina Ruth; Restall, Gayle; Sharma, Monika; Narayan, Amitesh; Pandian, Jeyaraj; Naik, Nilashri; Savadatti, Ravi R; Kamate, Mahesh Appasaheb
2017-01-01
Background It is difficult to engage young children with cerebral palsy (CP) in repetitive, tedious therapy. As such, there is a need for innovative approaches and tools to motivate these children. We developed the low-cost, computer game-based rehabilitation platform CGR that combines fine manipulation and gross movement exercises with attention and planning game activities appropriate for young children with CP. Objective The objective of this study is to provide evidence of the therapeutic value of CGR to improve upper extremity (UE) motor function for children with CP. Methods This randomized controlled, single-blind clinical trial with an active control arm will be conducted at 4 sites. Children diagnosed with CP between the ages of 4 and 10 years old with moderate UE impairments and fine motor control abnormalities will be recruited. Results We will test the difference between experimental and control groups using the Quality of Upper Extremity Skills Test (QUEST) and Peabody Developmental Motor Scales, Second Edition (PDMS-2) outcome measures. The experiences of the children's parents and of the therapists with the interventions and tools will be explored in semi-structured interviews using a qualitative description approach. Conclusions This research protocol, if effective, will provide evidence for the therapeutic value and feasibility of CGR in the pediatric rehabilitation of UE function. Trial Registration Clinicaltrials.gov NCT02728375; https://clinicaltrials.gov/ct2/show/NCT02728375 (Archived by WebCite at http://www.webcitation.org/6qDjvszvh) PMID:28526673
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating the foraging behavior of natural ants, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravity and magnetic data. On the basis of the continuous and multi-dimensional objective function for optimization inversion of potential field data, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes according to transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. This enables real-time analysis of the search results and improves the convergence rate and precision of the inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method by use of synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
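The two ingredients named above, a Gaussian mapping from objective value to pheromone deposit and pheromone-weighted node transition probabilities, can be sketched as follows. The parameter names and the exact functional forms are assumptions on my part, not taken from the paper:

```python
import numpy as np

def gaussian_pheromone(misfit, sigma=1.0, q_max=1.0):
    """Map an objective (data misfit) value to a pheromone deposit.

    Small misfit -> deposit near q_max; large misfit -> deposit near zero,
    so ants with better models reinforce their nodes more strongly.
    """
    return q_max * np.exp(-0.5 * (misfit / sigma) ** 2)

def transition_probabilities(tau, alpha=1.0):
    """Probability of an ant choosing each discrete node, proportional
    to the pheromone trail tau raised to the power alpha."""
    weights = np.asarray(tau, dtype=float) ** alpha
    return weights / weights.sum()
```

A zero-misfit model deposits the maximum pheromone, and nodes with stronger trails are proportionally more likely to be visited on the next tour.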
Kraaijenga, Sophie A C; van der Molen, Lisette; Jacobi, Irene; Hamming-Vrieze, Olga; Hilgers, Frans J M; van den Brekel, Michiel W M
2015-11-01
Concurrent chemoradiotherapy (CCRT) for advanced head and neck cancer (HNC) is associated with substantial early and late side effects, most notably regarding swallowing function, but also regarding voice quality and quality of life (QoL). Despite increased awareness of and knowledge about acute dysphagia in HNC survivors, long-term (i.e., beyond 5 years) prospectively collected data on objective and subjective treatment-induced functional outcomes (and their impact on QoL) are still scarce. The objective of this study was the assessment of long-term CCRT-induced effects on swallowing function and voice quality in advanced HNC patients. The study was conducted as a randomized controlled trial on preventive swallowing rehabilitation (2006-2008) in a tertiary comprehensive HNC center, with twenty-two disease-free and evaluable HNC patients as participants. Multidimensional assessment of functional sequelae was performed with videofluoroscopy, mouth opening measurements, the Functional Oral Intake Scale, acoustic voice parameters, and questionnaires (study specific, SWAL-QoL, and VHI). Outcome measures at 6 years post-treatment were compared with results at baseline and at 2 years post-treatment. At a mean follow-up of 6.1 years, most initial tumor- and treatment-related problems remained similarly low to those observed after 2 years of follow-up, except increased xerostomia (68%) and increased (mild) pain (32%). Acoustic voice analysis showed less voicedness, increased fundamental frequency, and more vocal effort for the tumors located below the hyoid bone (n = 12), without recovery to baseline values. Patients' subjective vocal function (VHI score) was good. Functional swallowing and voice problems at 6 years post-treatment are minimal in this patient cohort, owing to preventive and continued post-treatment rehabilitation programs.
Integrating economic parameters into genetic selection for Large White pigs.
Dube, Bekezela; Mulugeta, Sendros D; Dzama, Kennedy
2013-08-01
The objective of the study was to integrate economic parameters into genetic selection for sow productivity, growth performance, and carcass characteristics in South African Large White pigs. Simulations of sow productivity and terminal production systems were performed based on a hypothetical 100-sow herd, to derive economic values for the economically relevant traits. The traits included in the study were number born alive (NBA), 21-day litter size (D21LS), 21-day litter weight (D21LWT), average daily gain (ADG), feed conversion ratio (FCR), age at slaughter (AGES), dressing percentage (DRESS), lean content (LEAN) and backfat thickness (BFAT). Growth of a pig was described by the Gompertz growth function, while feed intake was derived from the nutrient requirements of pigs at the respective ages. Partial budgeting and partial differentiation of the profit function were used to derive economic values, defined as the change in profit per unit genetic change in a given trait. The respective economic values (ZAR) were: 61.26, 38.02, 210.15, 33.34, -21.81, -68.18, 5.78, 4.69 and -1.48. These economic values indicated the direction and emphases of selection, and were sensitive to changes in feed prices and in marketing prices for carcasses and maiden gilts. Economic values for NBA, D21LS, DRESS and LEAN decreased with increasing feed prices, suggesting a point at which genetic improvement would become unprofitable if feed prices continued to increase. The economic values for DRESS and LEAN increased as the marketing prices for carcasses increased, while the economic value for BFAT was not sensitive to changes in any price. Reductions in economic values can be counterbalanced by simultaneous increases in the marketing prices of carcasses and maiden gilts. Economic values facilitate genetic improvement by translating it into proportionate profitability.
Breeders should, however, continually recalculate economic values to place the most appropriate emphases on the respective traits during genetic selection.
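Since an economic value is defined as the change in profit per unit genetic change in a trait, it is the partial derivative of the profit function with respect to that trait. A minimal numeric sketch follows; the linear profit function and herd means below are purely illustrative stand-ins (the coefficients echo values quoted in the abstract), not the study's actual model:

```python
def economic_value(profit_fn, trait_means, trait, h=1e-6):
    """Approximate the economic value of `trait` as the partial derivative
    of profit with respect to that trait (central finite difference)."""
    up = dict(trait_means); up[trait] += h
    dn = dict(trait_means); dn[trait] -= h
    return (profit_fn(up) - profit_fn(dn)) / (2.0 * h)

# Hypothetical linear profit function (ZAR), for illustration only.
def profit(t):
    return 61.26 * t["NBA"] - 21.81 * t["FCR"] - 68.18 * t["AGES"]

herd_means = {"NBA": 11.0, "FCR": 2.5, "AGES": 160.0}
```

For a linear profit function the derivative simply recovers the coefficient, which is why the quoted economic values carry both a magnitude and a sign indicating the direction of selection.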
[Neuroimaging and Blood Biomarkers in Functional Prognosis after Stroke].
Branco, João Paulo; Costa, Joana Santos; Sargento-Freitas, João; Oliveira, Sandra; Mendes, Bruno; Laíns, Jorge; Pinheiro, João
2016-11-01
Stroke remains one of the leading causes of morbidity and mortality around the world, and it is associated with important long-term functional disability. Some neuroimaging resources and certain peripheral blood or cerebrospinal fluid proteins can give important information about etiology, therapeutic approach, follow-up, and functional prognosis in acute ischemic stroke patients. However, among the scientific community there is currently more interest in the vital prognosis of stroke than in its functional prognosis. Predicting the functional prognosis during the acute phase would allow more objective rehabilitation programs and better management of the available resources. The aim of this work is to review the potential role of acute-phase neuroimaging and blood biomarkers as functional recovery predictors after ischemic stroke. We reviewed the literature published between 2005 and 2015, in English, using the terms "ischemic stroke", "neuroimaging" and "blood biomarkers". We included nine studies, based on abstract reading. Computerized tomography, transcranial Doppler ultrasound and diffusion magnetic resonance imaging show potential predictive value, based on the study of blood flow and the evaluation of the stroke's volume and location, especially when combined with the National Institutes of Health Stroke Scale. Several biomarkers have been studied as diagnostic, risk stratification and prognostic tools, namely the S100 calcium binding protein B, C-reactive protein, matrix metalloproteinases and cerebral natriuretic peptide. Although some biomarkers and neuroimaging techniques have potential predictive value, none of the studies were able to support their use, alone or in combination, as a clinically useful predictive model of functionality. All the evaluated markers were considered insufficient to predict functional prognosis at three months when applied in the first hours after stroke.
Additional studies are necessary to identify reliable predictive markers for functional prognosis after ischemic stroke.
Single-pixel non-imaging object recognition by means of Fourier spectrum acquisition
NASA Astrophysics Data System (ADS)
Chen, Huichao; Shi, Jianhong; Liu, Xialin; Niu, Zhouzhou; Zeng, Guihua
2018-04-01
Single-pixel imaging has emerged over recent years as a novel imaging technique with significant application prospects. In this paper, we propose and experimentally demonstrate a scheme that can achieve single-pixel non-imaging object recognition by acquiring the Fourier spectrum. In the experiment, four-step phase-shifting sinusoidal illumination is used to irradiate the object image, the light intensity is measured with a single-pixel detection unit, and the Fourier coefficients of the object image are obtained by differential measurement. The Fourier coefficients are then cast into binary numbers to obtain the hash value. We propose a new perceptual hashing algorithm, combined with the discrete Fourier transform, to calculate the hash value. The hash distance is obtained by calculating the difference between the hash values of the object image and the contrast images. By setting an appropriate threshold, the object image can be quickly and accurately recognized. The proposed scheme realizes single-pixel non-imaging perceptual hashing object recognition using fewer measurements. Our result might open a new path toward object recognition without imaging.
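The four-step phase-shifting acquisition of a single Fourier coefficient can be sketched as follows. The single-pixel detector is modeled here as a plain sum over the illuminated scene, and the pattern normalization is my own choice, not necessarily the paper's exact setup:

```python
import numpy as np

def fourier_coefficient(img, fx, fy):
    """Recover one Fourier coefficient of `img` via four-step
    phase-shifting sinusoidal illumination.

    Four patterns with phases 0, pi/2, pi, 3pi/2 are projected; the
    'single-pixel' reading is the total reflected intensity. The
    differential combination (D1 - D3) + j(D2 - D4) cancels the DC term
    and yields the complex Fourier coefficient at (fx, fy).
    """
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    theta = 2.0 * np.pi * (fx * x / w + fy * y / h)
    readings = []
    for phi in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
        pattern = 0.5 + 0.5 * np.cos(theta + phi)  # non-negative illumination
        readings.append((img * pattern).sum())     # single-pixel measurement
    d1, d2, d3, d4 = readings
    return (d1 - d3) + 1j * (d2 - d4)
```

With this normalization the result equals the corresponding 2D DFT coefficient of the image, which makes the sketch easy to verify against a direct FFT.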
Pauli, Carla; de Oliveira Thais, Maria Emilia Rodrigues; Guarnieri, Ricardo; Schwarzbold, Marcelo Liborio; Diaz, Alexandre Paim; Ben, Juliana; Linhares, Marcelo Neves; Markowitsch, Hans Joachim; Wolf, Peter; Wiebe, Samuel; Lin, Katia; Walz, Roger
2017-10-01
The purpose of this study was to investigate the following: i) the objective impairment in neuropsychological tests associated with the subjective perception of cognitive function decline in Brazilian patients who underwent mesial temporal lobe epilepsy (MTLE) surgery, and ii) the predictive variables for those impaired objective neuropsychological tests. Forty-eight adults with MTLE (27 right HS, 23 male) were divided according to their perception of changes (Decline or No-decline) in the cognitive function domain of the QOLIE-31 questionnaire applied before and 1 year after anterior temporal lobectomy (ATL). The mean (SD) of the changes in the raw score difference of the neuropsychological tests before and after the ATL was compared between the Decline and No-decline groups. Receiver Operating Characteristic curves, sensitivity, specificity, and predictive values were used to assess the optimum cutoff points of neuropsychological test score changes for predicting patient-reported subjective cognitive decline. Six (12.5%) patients reported a perception of cognitive function decline after ATL. Among the 25 cognitive tests analyzed, only changes in the Boston Naming Test (BNT) were associated with subjective cognitive decline reported by patients. A reduction of ≥8 points in the raw BNT score after surgery had 91% sensitivity and 45% specificity for predicting the patient's subjective perception of cognitive function decline. Left-side surgery and age older than 40 years were most associated with an important BNT reduction, with an overall accuracy of 91.7%, 95% predictive ability for no impairment, and 75% for impairment of cognitive function. Impairment in word-finding seems to be the objective cognitive finding most relevant to Brazilian patients after mesial temporal lobe epilepsy surgery. Similar to American patients, the side of surgery and age are good predictors of no decline in the BNT, but show lower accuracy in predicting its decline.
If replicated in other populations, the results may have wider implications for the surgical management of patients with drug-resistant MTLE. Copyright © 2017 Elsevier Inc. All rights reserved.
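The cutoff analysis behind figures like "91% sensitivity and 45% specificity for a ≥8-point BNT drop" rests on standard contingency-table calculations. A minimal sketch, with variable names of my own choosing:

```python
def sens_spec(score_drops, declined, cutoff):
    """Sensitivity and specificity of the rule
    'raw score drop >= cutoff predicts subjective cognitive decline'.

    `score_drops` are per-patient test score reductions; `declined` are
    booleans marking patients who reported subjective decline.
    """
    tp = sum(1 for s, y in zip(score_drops, declined) if s >= cutoff and y)
    fn = sum(1 for s, y in zip(score_drops, declined) if s < cutoff and y)
    tn = sum(1 for s, y in zip(score_drops, declined) if s < cutoff and not y)
    fp = sum(1 for s, y in zip(score_drops, declined) if s >= cutoff and not y)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity
```

Sweeping the cutoff over the observed score drops and plotting sensitivity against 1 - specificity yields the ROC curve used to pick the optimum cutoff.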
Diabetes, peripheral neuropathy, and lower-extremity function.
Chiles, Nancy S; Phillips, Caroline L; Volpato, Stefano; Bandinelli, Stefania; Ferrucci, Luigi; Guralnik, Jack M; Patel, Kushang V
2014-01-01
Diabetes among older adults causes many complications, including decreased lower-extremity function and physical disability. Diabetes can cause peripheral nerve dysfunction, which might be one pathway through which diabetes leads to decreased physical function. The study aims were to determine the following: (1) whether diabetes and impaired fasting glucose are associated with objective measures of physical function in older adults, (2) which peripheral nerve function (PNF) tests are associated with diabetes, and (3) whether PNF mediates the diabetes-physical function relationship. This study included 983 participants, age 65 years and older from the InCHIANTI study. Diabetes was diagnosed by clinical guidelines. Physical performance was assessed using the Short Physical Performance Battery (SPPB), scored from 0 to 12 (higher values, better physical function) and usual walking speed (m/s). PNF was assessed via standard surface electroneurographic study of right peroneal nerve conduction velocity, vibration and touch sensitivity. Clinical cutpoints of PNF tests were used to create a neuropathy score from 0 to 5 (higher values, greater neuropathy). Multiple linear regression models were used to test associations. One hundred twenty-six (12.8%) participants had diabetes. Adjusting for age, sex, education, and other confounders, diabetic participants had decreased SPPB (β=-0.99; p<0.01), decreased walking speed (β=-0.1m/s; p<0.01), decreased nerve conduction velocity (β=-1.7m/s; p<0.01), and increased neuropathy (β=0.25; p<0.01) compared to non-diabetic participants. Adjusting for nerve conduction velocity and neuropathy score decreased the effect of diabetes on SPPB by 20%, suggesting partial mediation through decreased PNF. © 2014.