Sample records for management algorithm based

  1. Information-based management mode based on value network analysis for livestock enterprises

    NASA Astrophysics Data System (ADS)

    Liu, Haoqi; Lee, Changhoon; Han, Mingming; Su, Zhongbin; Padigala, Varshinee Anu; Shen, Weizheng

    2018-01-01

    With the development of computer and IT technologies, enterprise management has gradually become information-based. Moreover, owing to poor technical competence and non-uniform management, most breeding enterprises lack organisation in data collection and management, and low levels of efficiency drive up production costs. This paper adopts the Struts2 framework to construct an information-based management system for standardised and normalised management of the production process in beef cattle breeding enterprises. We present a radio-frequency identification system by studying multiple-tag anti-collision via a dynamic grouping ALOHA algorithm. Built on the existing ALOHA algorithm, it uses an improved dynamic grouping scheme characterised by a high throughput rate, reaching a throughput 42% higher than that of the general ALOHA algorithm. As the number of tags changes, the system throughput remains relatively stable.
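    The throughput behaviour described above can be sketched with a generic framed-slotted ALOHA simulation (an illustrative stand-in, not the paper's exact grouping scheme; the perfect-estimate assumption is mine):

```python
import random

def simulate_frame(num_tags, frame_size, rng):
    """One read cycle: each tag picks a slot; a slot with exactly one tag succeeds."""
    slots = [0] * frame_size
    for _ in range(num_tags):
        slots[rng.randrange(frame_size)] += 1
    return sum(1 for s in slots if s == 1)

def dynamic_aloha(num_tags, seed=0):
    """Identify all tags, resizing each frame to the number of unidentified
    tags (throughput peaks when frame size roughly equals tag count)."""
    rng = random.Random(seed)
    remaining, slots_used, identified = num_tags, 0, 0
    while remaining > 0:
        frame = max(1, remaining)   # idealised estimate of the tag count
        ok = simulate_frame(remaining, frame, rng)
        remaining -= ok
        identified += ok
        slots_used += frame
    return identified / slots_used  # system throughput

print(round(dynamic_aloha(200), 3))
```

    With frame size matched to the tag population, throughput stays near the theoretical slotted-ALOHA optimum of 1/e regardless of how many tags are present, which is the stability property the abstract highlights.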

  2. A pragmatic evidence-based clinical management algorithm for burning mouth syndrome.

    PubMed

    Kim, Yohanan; Yoo, Timothy; Han, Peter; Liu, Yuan; Inman, Jared C

    2018-04-01

    Burning mouth syndrome is a poorly understood disease process with no current standard of treatment. The goal of this article is to provide an evidence-based, practical, clinical algorithm as a guideline for the treatment of burning mouth syndrome. Using available evidence and clinical experience, a multi-step management algorithm was developed. A retrospective cohort study was then performed, following STROBE statement guidelines, comparing outcomes of patients who were managed using the algorithm and those who were managed without. Forty-seven patients were included in the study, with 21 (45%) managed using the algorithm and 26 (55%) managed without. The mean age overall was 60.4 ± 16.5 years, and most patients (39, 83%) were female. Cohorts showed no statistical difference in age, sex, overall follow-up time, dysgeusia, geographic tongue, or psychiatric disorder; xerostomia, however, was significantly different, skewed toward the algorithm group. Significantly more non-algorithm patients did not continue care (69% vs. 29%, p = 0.001). The odds ratio of not continuing care for the non-algorithm group compared to the algorithm group was 5.6 [1.6, 19.8]. Improvement in pain was significantly more likely in the algorithm group (p = 0.001), with an odds ratio of 27.5 [3.1, 242.0]. We present a basic clinical management algorithm for burning mouth syndrome which may increase the likelihood of pain improvement and patient follow-up. Key words: Burning mouth syndrome, burning tongue, glossodynia, oral pain, oral burning, therapy, treatment.
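    The reported odds ratio can be reproduced from a 2x2 table; the cell counts below are inferred from the abstract's percentages (18/26 non-algorithm vs. 6/21 algorithm patients not continuing care) and the confidence interval uses the standard Wald method:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a,b = exposed with/without outcome; c,d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Counts inferred from the abstract: 18 of 26 non-algorithm patients and
# 6 of 21 algorithm patients did not continue care.
or_, lo, hi = odds_ratio_ci(18, 8, 6, 15)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # → 5.6 1.6 19.8
```

    The result matches the 5.6 [1.6, 19.8] reported in the abstract, which supports the inferred table.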

  3. An evaluation and implementation of rule-based Home Energy Management System using the Rete algorithm.

    PubMed

    Kawakami, Tomoya; Fujita, Naotaka; Yoshihisa, Tomoki; Tsukamoto, Masahiko

    2014-01-01

    In recent years, sensors have become popular, and the Home Energy Management System (HEMS) plays an important role in saving energy without a decrease in QoL (Quality of Life). Many rule-based HEMSs have been proposed, and almost all of them assume "IF-THEN" rules. The Rete algorithm is a typical pattern-matching algorithm for IF-THEN rules. We have previously proposed a rule-based HEMS using the Rete algorithm. In the proposed system, rules for managing energy are processed by smart taps in the network, and the loads for processing rules and collecting data are distributed among the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing rules based on the Rete algorithm. In this paper, we evaluate the proposed system by simulation. In the simulation environment, rules are processed by the smart tap that relates to the action part of each rule. In addition, we implemented the proposed system as a HEMS using smart taps.
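    A minimal forward-chaining IF-THEN matcher can illustrate the key Rete idea of avoiding redundant re-evaluation (the rules and attribute names below are hypothetical; a real Rete network also shares partial matches across rules, which this sketch omits):

```python
# Rules: IF all conditions hold on working memory THEN assert the action.
rules = [
    {"if": {"occupancy": "empty", "light": "on"}, "then": ("light", "off")},
    {"if": {"temp_high": True, "window": "open"}, "then": ("aircon", "off")},
]

def index_rules(rules):
    """Rete-flavoured optimisation: index rules by the attributes they test,
    so a fact change only re-checks the rules that mention that attribute."""
    idx = {}
    for r in rules:
        for attr in r["if"]:
            idx.setdefault(attr, []).append(r)
    return idx

def assert_fact(memory, idx, attr, value):
    """Update working memory and fire only the affected rules."""
    memory[attr] = value
    fired = []
    for r in idx.get(attr, []):
        if all(memory.get(k) == v for k, v in r["if"].items()):
            act_attr, act_value = r["then"]
            memory[act_attr] = act_value
            fired.append(r["then"])
    return fired

mem, idx = {"light": "on"}, index_rules(rules)
print(assert_fact(mem, idx, "occupancy", "empty"))  # → [('light', 'off')]
```

    Distributing such indexed rules across smart taps, so each tap evaluates only the rules whose action part it owns, mirrors the load-distribution scheme the abstract describes.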

  4. A pragmatic evidence-based clinical management algorithm for burning mouth syndrome

    PubMed Central

    Yoo, Timothy; Han, Peter; Liu, Yuan; Inman, Jared C.

    2018-01-01

    Background Burning mouth syndrome is a poorly understood disease process with no current standard of treatment. The goal of this article is to provide an evidence-based, practical, clinical algorithm as a guideline for the treatment of burning mouth syndrome. Material and Methods Using available evidence and clinical experience, a multi-step management algorithm was developed. A retrospective cohort study was then performed, following STROBE statement guidelines, comparing outcomes of patients who were managed using the algorithm and those who were managed without. Results Forty-seven patients were included in the study, with 21 (45%) managed using the algorithm and 26 (55%) managed without. The mean age overall was 60.4 ± 16.5 years, and most patients (39, 83%) were female. Cohorts showed no statistical difference in age, sex, overall follow-up time, dysgeusia, geographic tongue, or psychiatric disorder; xerostomia, however, was significantly different, skewed toward the algorithm group. Significantly more non-algorithm patients did not continue care (69% vs. 29%, p=0.001). The odds ratio of not continuing care for the non-algorithm group compared to the algorithm group was 5.6 [1.6, 19.8]. Improvement in pain was significantly more likely in the algorithm group (p=0.001), with an odds ratio of 27.5 [3.1, 242.0]. Conclusions We present a basic clinical management algorithm for burning mouth syndrome which may increase the likelihood of pain improvement and patient follow-up. Key words: Burning mouth syndrome, burning tongue, glossodynia, oral pain, oral burning, therapy, treatment. PMID:29750091

  5. A novel symbiotic organisms search algorithm for congestion management in deregulated environment

    NASA Astrophysics Data System (ADS)

    Verma, Sumit; Saha, Subhodip; Mukherjee, V.

    2017-01-01

    In today's competitive electricity market, managing transmission congestion in deregulated power systems has created challenges for independent system operators to operate the transmission lines reliably within their limits. This paper proposes a new meta-heuristic, the symbiotic organisms search (SOS) algorithm, for the congestion management (CM) problem in a pool-based electricity market via real power rescheduling of generators. Inspired by the interactions among organisms in an ecosystem, SOS is a recent population-based algorithm which, unlike many other algorithms, requires no algorithm-specific control parameters. Various security constraints, such as load bus voltages and line loadings, are taken into account while dealing with the CM problem. The proposed SOS algorithm is applied to modified IEEE 30- and 57-bus test power systems, and the results are compared with those reported in the recent state-of-the-art literature, establishing the efficacy of the algorithm in obtaining higher-quality solutions.
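    The three SOS phases can be sketched on a toy minimisation problem (a generic textbook-style SOS, not the authors' CM formulation; the population size and iteration count are arbitrary choices, and note there are indeed no algorithm-specific tuning knobs beyond them):

```python
import random

def sos(f, dim=2, pop=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal symbiotic organisms search: mutualism, commensalism,
    and parasitism phases with greedy acceptance."""
    rng = random.Random(seed)
    clamp = lambda x: [min(hi, max(lo, v)) for v in x]
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    F = [f(x) for x in X]
    for _ in range(iters):
        best = X[F.index(min(F))][:]
        for i in range(pop):
            j = rng.choice([k for k in range(pop) if k != i])
            # Mutualism: i and j both benefit from moving toward the best
            mutual = [(a + b) / 2 for a, b in zip(X[i], X[j])]
            for k in (i, j):
                bf = rng.choice([1, 2])  # benefit factor
                cand = clamp([x + rng.random() * (b - bf * m)
                              for x, b, m in zip(X[k], best, mutual)])
                if f(cand) < F[k]:
                    X[k], F[k] = cand, f(cand)
            # Commensalism: i benefits from j; j is unaffected
            cand = clamp([x + rng.uniform(-1, 1) * (b - xj)
                          for x, b, xj in zip(X[i], best, X[j])])
            if f(cand) < F[i]:
                X[i], F[i] = cand, f(cand)
            # Parasitism: a mutated clone of i tries to displace j
            parasite = X[i][:]
            parasite[rng.randrange(dim)] = rng.uniform(lo, hi)
            if f(parasite) < F[j]:
                X[j], F[j] = parasite, f(parasite)
    return min(F)

print(sos(lambda x: sum(v * v for v in x)))
```

    For the CM problem, f would be the rescheduling cost with penalty terms for bus-voltage and line-loading violations, and each organism a vector of generator real-power adjustments.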

  7. A controllable sensor management algorithm capable of learning

    NASA Astrophysics Data System (ADS)

    Osadciw, Lisa A.; Veeramacheneni, Kalyan K.

    2005-03-01

    Sensor management technology is challenged by the geographic space it spans, the heterogeneity of the sensors, and the real-time timeframes within which plans controlling the assets are executed. This paper presents a new sensor management paradigm and demonstrates its application in a sensor management algorithm designed for a biometric access control system. The approach consists of an artificial intelligence (AI) algorithm focused on uncertainty measures, which makes the high-level decisions to reduce uncertainties and interfaces with the user, integrated cohesively with a bottom-up evolutionary algorithm, which optimizes the sensor network's operation as determined by the AI algorithm. The sensor management algorithm presented is composed of a Bayesian network (the AI component) and a swarm optimization algorithm (the evolutionary component). Thus, the algorithm can change its own performance goals in real time and will modify its own decisions based on observed measures within the sensor network. The definitions of the measures, as well as the Bayesian network, determine the robustness of the algorithm and its utility in reacting dynamically to changes in the global system.
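    The bottom-up layer can be sketched as a minimal particle swarm optimizer tuning continuous sensor parameters against a cost chosen by the higher-level decision layer (the cost function below, trading a detection-risk term against a power-draw term, is purely hypothetical):

```python
import random

def pso(f, dim=2, pop=15, iters=100, lo=0.0, hi=1.0, seed=2):
    """Minimal particle swarm optimiser: each particle tracks its personal
    best, and all particles are attracted to the global best."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    v = [[0.0] * dim for _ in range(pop)]
    pbest = [p[:] for p in x]
    pcost = [f(p) for p in x]
    g = pbest[pcost.index(min(pcost))][:]
    for _ in range(iters):
        for i in range(pop):
            for d in range(dim):
                v[i][d] = (0.7 * v[i][d]
                           + 1.5 * rng.random() * (pbest[i][d] - x[i][d])
                           + 1.5 * rng.random() * (g[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            c = f(x[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = x[i][:], c
                if c < f(g):
                    g = x[i][:]
    return g

# Hypothetical cost: trade false-accept risk against sensor power draw,
# with an (assumed) optimum at sensitivity 0.3 and duty cycle 0.8.
cost = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.8) ** 2
best = pso(cost)
print([round(b, 2) for b in best])  # near [0.3, 0.8]
```

    In the paper's architecture, the Bayesian network would periodically replace the cost function as its uncertainty estimates change, and the swarm re-optimizes against the new goal.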

  8. A smartphone-based pain management app for adolescents with cancer: establishing system requirements and a pain care algorithm based on literature review, interviews, and consensus.

    PubMed

    Jibb, Lindsay A; Stevens, Bonnie J; Nathan, Paul C; Seto, Emily; Cafazzo, Joseph A; Stinson, Jennifer N

    2014-03-19

    Pain that occurs both within and outside of the hospital setting is a common and distressing problem for adolescents with cancer. The use of smartphone technology may facilitate rapid, in-the-moment pain support for this population. To ensure the best possible pain management advice is given, evidence-based and expert-vetted care algorithms and system design features, which are designed using user-centered methods, are required. To develop the decision algorithm and system requirements that will inform the pain management advice provided by a real-time smartphone-based pain management app for adolescents with cancer. A systematic approach to algorithm development and system design was utilized. Initially, a comprehensive literature review was undertaken to understand the current body of knowledge pertaining to pediatric cancer pain management. A user-centered approach to development was used as the results of the review were disseminated to 15 international experts (clinicians, scientists, and a consumer) in pediatric pain, pediatric oncology and mHealth design, who participated in a 2-day consensus conference. This conference used nominal group technique to develop consensus on important pain inputs, pain management advice, and system design requirements. Using data generated at the conference, a prototype algorithm was developed. Iterative qualitative testing was conducted with adolescents with cancer, as well as pediatric oncology and pain health care providers to vet and refine the developed algorithm and system requirements for the real-time smartphone app. The systematic literature review established the current state of research related to nonpharmacological pediatric cancer pain management. 
The 2-day consensus conference established which clinically important pain inputs by adolescents would require action (pain management advice) from the app, the appropriate advice the app should provide to adolescents in pain, and the functional requirements of the app. These results were used to build a detailed prototype algorithm capable of providing adolescents with pain management support based on their individual pain. Analysis of qualitative interviews with 9 multidisciplinary health care professionals and 10 adolescents resulted in 4 themes that helped to adapt the algorithm and requirements to the needs of adolescents. Specifically, themes were overall endorsement of the system, the need for a clinical expert, the need to individualize the system, and changes to the algorithm to improve potential clinical effectiveness. This study used a phased and user-centered approach to develop a pain management algorithm for adolescents with cancer and the system requirements of an associated app. The smartphone software is currently being created and subsequent work will focus on the usability, feasibility, and effectiveness testing of the app for adolescents with cancer pain.
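    The kind of rule such a pain-care algorithm encodes can be sketched as a tiny decision function (the thresholds and advice strings below are illustrative inventions, not the expert-vetted ones developed at the consensus conference):

```python
def pain_advice(score_0_to_10, interferes_with_sleep=False):
    """Map an in-the-moment pain report to a tier of self-management
    advice or escalation (hypothetical thresholds for illustration)."""
    if score_0_to_10 >= 7 or interferes_with_sleep:
        return "contact care team"
    if score_0_to_10 >= 4:
        return "try guided relaxation and reassess in 30 min"
    return "continue monitoring"

print(pain_advice(8))  # → contact care team
```

    The study's actual algorithm takes many more clinically important inputs than a single score, which is precisely what the consensus process was designed to enumerate.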

  9. A Smartphone-Based Pain Management App for Adolescents With Cancer: Establishing System Requirements and a Pain Care Algorithm Based on Literature Review, Interviews, and Consensus

    PubMed Central

    Stevens, Bonnie J; Nathan, Paul C; Seto, Emily; Cafazzo, Joseph A; Stinson, Jennifer N

    2014-01-01

    Background Pain that occurs both within and outside of the hospital setting is a common and distressing problem for adolescents with cancer. The use of smartphone technology may facilitate rapid, in-the-moment pain support for this population. To ensure the best possible pain management advice is given, evidence-based and expert-vetted care algorithms and system design features, which are designed using user-centered methods, are required. Objective To develop the decision algorithm and system requirements that will inform the pain management advice provided by a real-time smartphone-based pain management app for adolescents with cancer. Methods A systematic approach to algorithm development and system design was utilized. Initially, a comprehensive literature review was undertaken to understand the current body of knowledge pertaining to pediatric cancer pain management. A user-centered approach to development was used as the results of the review were disseminated to 15 international experts (clinicians, scientists, and a consumer) in pediatric pain, pediatric oncology and mHealth design, who participated in a 2-day consensus conference. This conference used nominal group technique to develop consensus on important pain inputs, pain management advice, and system design requirements. Using data generated at the conference, a prototype algorithm was developed. Iterative qualitative testing was conducted with adolescents with cancer, as well as pediatric oncology and pain health care providers to vet and refine the developed algorithm and system requirements for the real-time smartphone app. Results The systematic literature review established the current state of research related to nonpharmacological pediatric cancer pain management. 
The 2-day consensus conference established which clinically important pain inputs by adolescents would require action (pain management advice) from the app, the appropriate advice the app should provide to adolescents in pain, and the functional requirements of the app. These results were used to build a detailed prototype algorithm capable of providing adolescents with pain management support based on their individual pain. Analysis of qualitative interviews with 9 multidisciplinary health care professionals and 10 adolescents resulted in 4 themes that helped to adapt the algorithm and requirements to the needs of adolescents. Specifically, themes were overall endorsement of the system, the need for a clinical expert, the need to individualize the system, and changes to the algorithm to improve potential clinical effectiveness. Conclusions This study used a phased and user-centered approach to develop a pain management algorithm for adolescents with cancer and the system requirements of an associated app. The smartphone software is currently being created and subsequent work will focus on the usability, feasibility, and effectiveness testing of the app for adolescents with cancer pain. PMID:24646454

  10. An international consensus algorithm for management of chronic postoperative inguinal pain.

    PubMed

    Lange, J F M; Kaufmann, R; Wijsmuller, A R; Pierie, J P E N; Ploeg, R J; Chen, D C; Amid, P K

    2015-02-01

    Tension-free mesh repair of inguinal hernia has led to uniformly low recurrence rates. Morbidity associated with this operation is mainly related to chronic pain. No consensus guidelines exist for the management of this condition. The goal of this study is to design an expert-based algorithm for diagnostic and therapeutic management of chronic inguinal postoperative pain (CPIP). A group of surgeons considered experts on inguinal hernia surgery was solicited to develop the algorithm. Consensus regarding each step of an algorithm proposed by the authors was sought by means of the Delphi method leading to a revised expert-based algorithm. With the input of 28 international experts, an algorithm for a stepwise approach for management of CPIP was created. 26 participants accepted the final algorithm as a consensus model. One participant could not agree with the final concept. One expert did not respond during the final phase. There is a need for guidelines with regard to management of CPIP. This algorithm can serve as a guide with regard to the diagnosis, management, and treatment of these patients and improve clinical outcomes. If an expectant (watchful waiting) phase of a few months passes without any amelioration of CPIP, a multidisciplinary approach is indicated and a pain management team should be consulted. Pharmacologic, behavioral, and interventional modalities including nerve blocks are essential. If conservative measures fail and surgery is considered, triple neurectomy, correction for recurrence with or without neurectomy, and meshoma removal if indicated should be performed. Surgeons less experienced with remedial operations for CPIP should not hesitate to refer their patients to dedicated hernia surgeons.

  11. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering the airplane, fuel-efficiently, to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B737 airplane to make an idle-thrust, clean-configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.
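    The basic geometry of such 4-D descent planning can be sketched with a rule-of-thumb calculation (a fixed descent gradient is my simplifying assumption; the flight-test algorithm instead used linear performance approximations with wind, weight, and temperature corrections):

```python
def descent_plan(cruise_alt_ft, meter_alt_ft, dist_to_fix_nm, time_to_fix_hr,
                 gradient_ft_per_nm=300.0):
    """Illustrative 4-D planning sketch: a fixed idle-descent gradient gives
    the top-of-descent distance; the metering-fix time then fixes the
    required average groundspeed."""
    tod_nm = (cruise_alt_ft - meter_alt_ft) / gradient_ft_per_nm
    gs_required_kt = dist_to_fix_nm / time_to_fix_hr
    return tod_nm, gs_required_kt

# Descend from FL350 to 10,000 ft, 120 NM from the fix, due in 18 minutes.
tod, gs = descent_plan(35000, 10000, 120, 0.3)
print(round(tod, 1), round(gs, 1))  # → 83.3 400.0
```

    The speed schedule that achieves the required groundspeed, at idle thrust and within airframe limits, is the part the real algorithm computes from the airplane performance model.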

  12. Software Management Environment (SME): Components and algorithms

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1994-01-01

    This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented experienced-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'

  13. An evidence-based algorithm for the management of common peroneal nerve injury associated with traumatic knee dislocation

    PubMed Central

    Samson, Deepak; Ng, Chye Yew; Power, Dominic

    2016-01-01

    Traumatic knee dislocation is a complex ligamentous injury that may be associated with simultaneous vascular and neurological injury. Although orthopaedic surgeons may consider CPN exploration at the time of ligament reconstruction, there is no standardised approach to the management of this complex and debilitating complication. This review focusses on published evidence of the outcomes of common peroneal nerve (CPN) injuries associated with knee dislocation, and proposes an algorithm for their management. Cite this article: Deepak Samson, Chye Yew Ng, Dominic Power. An evidence-based algorithm for the management of common peroneal nerve injury associated with traumatic knee dislocation. EFORT Open Rev 2016;1:362-367. DOI: 10.1302/2058-5241.160012. PMID:28461914

  14. VHBuild.com: A Web-Based System for Managing Knowledge in Projects.

    ERIC Educational Resources Information Center

    Li, Heng; Tang, Sandy; Man, K. F.; Love, Peter E. D.

    2002-01-01

    Describes an intelligent Web-based construction project management system called VHBuild.com which integrates project management, knowledge management, and artificial intelligence technologies. Highlights include an information flow model; time-cost optimization based on genetic algorithms; rule-based drawing interpretation; and a case-based…

  15. A duality theorem-based algorithm for inexact quadratic programming problems: Application to waste management under uncertainty

    NASA Astrophysics Data System (ADS)

    Kong, X. M.; Huang, G. H.; Fan, Y. R.; Li, Y. P.

    2016-04-01

    In this study, a duality theorem-based algorithm (DTA) for inexact quadratic programming (IQP) is developed for municipal solid waste (MSW) management under uncertainty. It improves upon the existing numerical solution method for IQP problems. The comparison between DTA and derivative algorithm (DAM) shows that the DTA method provides better solutions than DAM with lower computational complexity. It is not necessary to identify the uncertain relationship between the objective function and decision variables, which is required for the solution process of DAM. The developed method is applied to a case study of MSW management and planning. The results indicate that reasonable solutions have been generated for supporting long-term MSW management and planning. They could provide more information as well as enable managers to make better decisions to identify desired MSW management policies in association with minimized cost under uncertainty.

  16. GBA manager: an online tool for querying low-complexity regions in proteins.

    PubMed

    Bandyopadhyay, Nirmalya; Kahveci, Tamer

    2010-01-01

    We developed GBA Manager, an online tool that provides access to the Graph-Based Algorithm (GBA) we proposed in earlier work. GBA identifies the low-complexity regions (LCRs) of protein sequences. It exploits a similarity matrix, such as BLOSUM62, to compute the complexity of the subsequences of the input protein sequence, and uses a graph-based algorithm to accurately locate the regions that have low complexity. GBA Manager is a user-friendly web service that enables online querying of protein sequences using GBA. In addition to the querying capabilities of the existing GBA algorithm, GBA Manager computes the p-values of the LCRs identified; the p-value estimates the probability that a region appears by chance. GBA Manager presents the output in three different, readable formats and is freely accessible at http://bioinformatics.cise.ufl.edu/GBA/GBA.htm
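    The general idea of scoring subsequence complexity can be sketched with a sliding-window entropy measure (a common stand-in; GBA's actual score is built from BLOSUM62 similarities and a graph construction, and the window size and cutoff below are arbitrary):

```python
import math

def window_entropy(seq, size=12):
    """Shannon entropy per sliding window; low entropy marks candidate
    low-complexity regions."""
    scores = []
    for i in range(len(seq) - size + 1):
        win = seq[i:i + size]
        counts = {c: win.count(c) for c in set(win)}
        h = -sum(n / size * math.log2(n / size) for n in counts.values())
        scores.append(h)
    return scores

def low_complexity_regions(seq, size=12, cutoff=1.5):
    """Merge overlapping low-entropy windows into (start, end) regions."""
    regions = []
    for i, h in enumerate(window_entropy(seq, size)):
        if h < cutoff:
            if regions and i <= regions[-1][1]:
                regions[-1][1] = i + size
            else:
                regions.append([i, i + size])
    return [tuple(r) for r in regions]

# A toy sequence with a poly-Q run, a classic low-complexity region.
print(low_complexity_regions("MKVLAAGQQQQQQQQQQQQAGTWYERVK"))
```

    A p-value like GBA Manager's could then be estimated by scoring shuffled versions of the sequence and asking how often a region of equal or lower complexity arises by chance.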

  17. LAWS simulation: Sampling strategies and wind computation algorithms

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D. A.; Wood, S. A.; Houston, S. H.

    1989-01-01

    In general, work has continued on developing and evaluating algorithms designed to manage the Laser Atmospheric Wind Sounder (LAWS) lidar pulses and to compute the horizontal wind vectors from the line-of-sight (LOS) measurements. These efforts fall into three categories: Improvements to the shot management and multi-pair algorithms (SMA/MPA); observing system simulation experiments; and ground-based simulations of LAWS.

  18. Security clustering algorithm based on reputation in hierarchical peer-to-peer network

    NASA Astrophysics Data System (ADS)

    Chen, Mei; Luo, Xin; Wu, Guowen; Tan, Yang; Kita, Kenji

    2013-03-01

    To address the security problems of the hierarchical P2P network (HPN), this paper presents a security clustering algorithm based on reputation (CABR). In the algorithm, we adopt a reputation mechanism to ensure the security of transactions and use clusters to manage the reputation mechanism. In order to improve security, reduce the network cost brought by reputation management, and enhance cluster stability, we select reputation, the historical average online time, and the network bandwidth as the basic factors of a node's comprehensive performance. Simulation results showed that the proposed algorithm improved security, reduced the network overhead, and enhanced the stability of clusters.
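    Combining the three factors into a comprehensive performance score might look like the sketch below (the weights and peer data are hypothetical; the paper does not publish its exact weighting):

```python
# Hypothetical weights over the three factors named in the abstract.
WEIGHTS = {"reputation": 0.5, "online_time": 0.3, "bandwidth": 0.2}

def normalise(nodes, key):
    """Scale a factor to [0, 1] by the population maximum."""
    top = max(n[key] for n in nodes) or 1
    return {n["id"]: n[key] / top for n in nodes}

def comprehensive_score(nodes):
    """Weighted sum of normalised factors; the highest-scoring peer
    makes the most stable, trustworthy cluster head."""
    norms = {k: normalise(nodes, k) for k in WEIGHTS}
    return {n["id"]: sum(w * norms[k][n["id"]] for k, w in WEIGHTS.items())
            for n in nodes}

peers = [
    {"id": "A", "reputation": 0.9, "online_time": 120, "bandwidth": 10},
    {"id": "B", "reputation": 0.4, "online_time": 200, "bandwidth": 50},
    {"id": "C", "reputation": 0.8, "online_time": 150, "bandwidth": 30},
]
scores = comprehensive_score(peers)
print(max(scores, key=scores.get))  # → C
```

    Weighting reputation most heavily keeps a well-connected but poorly-reputed peer (B) from becoming cluster head, which is the security/stability trade-off the abstract describes.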

  19. Flow-rate control for managing communications in tracking and surveillance networks

    NASA Astrophysics Data System (ADS)

    Miller, Scott A.; Chong, Edwin K. P.

    2007-09-01

    This paper describes a primal-dual distributed algorithm for managing communications in a bandwidth-limited sensor network for tracking and surveillance. The algorithm possesses some scale-invariance properties and adaptive gains that make it more practical for applications such as tracking where the conditions change over time. A simulation study comparing this algorithm with a priority-queue-based approach in a network tracking scenario shows significant improvement in the resulting track quality when using flow control to manage communications.
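    The flavour of a primal-dual rate-control scheme can be shown with a classic Kelly-style sketch for one shared link (a generic network-utility-maximisation example, not the paper's algorithm, which adds scale invariance and adaptive gains):

```python
def primal_dual_rates(weights, capacity, iters=5000, step=0.01):
    """Each sensor n maximises w_n*log(x_n) minus the congestion price
    (primal ascent); the link raises its price while total demand exceeds
    capacity (dual ascent).  At equilibrium x_n = w_n/price, so rates split
    the capacity in proportion to the weights."""
    x = [1.0] * len(weights)
    price = 1.0
    for _ in range(iters):
        x = [max(1e-6, xi + step * (w / xi - price))          # primal step
             for xi, w in zip(x, weights)]
        price = max(0.0, price + step * (sum(x) - capacity))  # dual step
    return x

# Three tracks with priority weights 3:2:1 sharing a 6 Mb/s link.
rates = primal_dual_rates([3.0, 2.0, 1.0], capacity=6.0)
print([round(r, 2) for r in rates])  # ≈ [3.0, 2.0, 1.0]
```

    In a tracking network, the weights would be tied to track quality, so the flow controller automatically shifts bandwidth toward the tracks that need it most.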

  20. Management of Central Venous Access Device-Associated Skin Impairment: An Evidence-Based Algorithm.

    PubMed

    Broadhurst, Daphne; Moureau, Nancy; Ullman, Amanda J

    Patients relying on central venous access devices (CVADs) for treatment are frequently complex. Many have multiple comorbid conditions, including renal impairment, nutritional deficiencies, hematologic disorders, or cancer. These conditions can impair the skin surrounding the CVAD insertion site, resulting in an increased likelihood of skin damage when standard CVAD management practices are employed. Supported by the World Congress of Vascular Access (WoCoVA), we developed an evidence- and consensus-based algorithm to improve CVAD-associated skin impairment (CASI) identification and diagnosis, guide clinical decision-making, and improve clinician confidence in managing CASI. A scoping review of relevant literature surrounding CASI management was undertaken in March 2014, and results were distributed to an international advisory panel. A CASI algorithm was developed by an international advisory panel of clinicians with expertise in wounds, vascular access, pediatrics, geriatric care, home care, intensive care, infection control and acute care, using a 2-phase, modified Delphi technique. The algorithm focuses on identification and treatment of skin injury, exit site infection, noninfectious exudate, and skin irritation/contact dermatitis. It comprises 3 domains: assessment, skin protection, and patient comfort. External validation of the algorithm was achieved by prospective pre- and posttest design, using clinical scenarios and self-reported clinician confidence (Likert scale), and incorporating algorithm feasibility and face validity endpoints. The CASI algorithm was found to significantly increase participants' confidence in the assessment and management of skin injury (P = .002), skin irritation/contact dermatitis (P = .001), and noninfectious exudate (P < .01). A majority of participants reported the algorithm as easy to understand (24/25; 96%) and as containing all necessary information (24/25; 96%). 
Twenty-four of 25 (96%) stated that they would recommend the tool to guide management of CASI.

  1. Swarming Reconnaissance Using Unmanned Aerial Vehicles in a Parallel Discrete Event Simulation

    DTIC Science & Technology

    2004-03-01

    Indexed excerpts reference Data Distribution Management (DDM) and the Breathing Time Warp (BTW) algorithm with rollback; data proxies/distribution management is described as the vital portion of the SPEEDES implementation that allows objects to process events.

  2. Crisis management during anaesthesia: hypotension.

    PubMed

    Morris, R W; Watterson, L M; Westhorpe, R N; Webb, R K

    2005-06-01

    Hypotension is commonly encountered in association with anaesthesia and surgery. Uncorrected and sustained, it puts the brain, heart, kidneys, and the fetus in pregnancy at risk of permanent or even fatal damage. Its recognition and correction are time-critical, especially in patients with pre-existing disease that compromises organ perfusion. To examine the role of a previously described core algorithm "COVER ABCD-A SWIFT CHECK", supplemented by a specific sub-algorithm for hypotension, in the management of hypotension when it occurs in association with anaesthesia. Reports of hypotension during anaesthesia were extracted and studied from the first 4000 incidents reported to the Australian Incident Monitoring Study (AIMS). The potential performance of the COVER ABCD algorithm and the sub-algorithm for hypotension was compared with the actual management as reported by the anaesthetist involved. There were 438 reports that mentioned hypotension, cardiovascular collapse, or cardiac arrest. In 17% of reports more than one cause was attributed and 550 causative events were identified overall. The most common causes identified were drugs (26%), regional anaesthesia (14%), and hypovolaemia (9%). Concomitant changes were reported in heart rate or rhythm in 39% and oxygen saturation or ventilation in 21% of reports. Cardiac arrest was documented in 25% of reports. As hypotension was frequently associated with abnormalities of other vital signs, it could not always be adequately addressed by a single algorithm. The sub-algorithm for hypotension is adequate when hypotension occurs in association with sinus tachycardia. However, when it occurs in association with bradycardia, non-sinus tachycardia, desaturation or signs of anaphylaxis or other problems, the sub-algorithm for hypotension recommends cross referencing to other relevant sub-algorithms. 
It was considered that, correctly applied, the core algorithm COVER ABCD would have diagnosed 18% of cases and led to resolution in two thirds of these. It was further estimated that completion of this followed by the specific sub-algorithm for hypotension would have led to earlier recognition of the problem and/or better management in 6% of cases compared with actual management reported. Pattern recognition in most cases enables anaesthetists to determine the cause and manage hypotension. However, an algorithm based approach is likely to improve the management of a small proportion of atypical but potentially life threatening cases. While an algorithm based approach will facilitate crisis management, the frequency of co-existing abnormalities in other vital signs means that all cases of hypotension cannot be dealt with using a single algorithm. Diagnosis, in particular, may potentially be assisted by cross referencing to the specific sub-algorithms for these.
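    The cross-referencing described above amounts to a dispatch from co-existing signs to sub-algorithms, which can be sketched as follows (the sign names and sub-algorithm labels are illustrative, not the AIMS manual's actual contents):

```python
# Hypothetical routing table: a co-existing abnormality sends the
# anaesthetist from the hypotension sub-algorithm to the relevant one.
SUB_ALGORITHMS = {
    "sinus_tachycardia": "hypotension sub-algorithm",
    "bradycardia": "bradycardia sub-algorithm",
    "desaturation": "desaturation sub-algorithm",
    "anaphylaxis_signs": "anaphylaxis sub-algorithm",
}

def route(signs):
    """After the core COVER ABCD scan, list the sub-algorithms to consult
    for each co-existing abnormality; plain hypotension stays on its own
    sub-algorithm."""
    hits = [SUB_ALGORITHMS[s] for s in signs if s in SUB_ALGORITHMS]
    return hits or ["hypotension sub-algorithm"]

print(route(["bradycardia", "desaturation"]))
```

    Encoding the manual this way makes the point in the abstract concrete: no single linear algorithm covers hypotension, but a core scan plus sign-driven dispatch does.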

  3. Crisis management during anaesthesia: the development of an anaesthetic crisis management manual

    PubMed Central

    Runciman, W; Kluger, M; Morris, R; Paix, A; Watterson, L; Webb, R

    2005-01-01

    Background: All anaesthetists have to handle life threatening crises with little or no warning. However, some cognitive strategies and work practices that are appropriate for speed and efficiency under normal circumstances may become maladaptive in a crisis. It was judged in a previous study that the use of a structured "core" algorithm (based on the mnemonic COVER ABCD–A SWIFT CHECK) would diagnose and correct the problem in 60% of cases and provide a functional diagnosis in virtually all of the remaining 40%. It was recommended that specific sub-algorithms be developed for managing the problems underlying the remaining 40% of crises and assembled in an easy-to-use manual. Sub-algorithms were therefore developed for these problems so that they could be checked for applicability and validity against the first 4000 anaesthesia incidents reported to the Australian Incident Monitoring Study (AIMS). Methods: The need for 24 specific sub-algorithms was identified. Teams of practising anaesthetists were assembled and sets of incidents relevant to each sub-algorithm were identified from the first 4000 reported to AIMS. Based largely on successful strategies identified in these reports, a set of 24 specific sub-algorithms was developed for trial against the 4000 AIMS reports and assembled into an easy-to-use manual. A process was developed for applying each component of the core algorithm COVER at one of four levels (scan-check-alert/ready-emergency) according to the degree of perceived urgency, and incorporated into the manual. The manual was disseminated at a World Congress and feedback was obtained. Results: Each of the 24 specific crisis management sub-algorithms was tested against the relevant incidents among the first 4000 reported to AIMS and compared with the actual management by the anaesthetist at the time. 
It was judged that, if the core algorithm had been correctly applied and followed by the appropriate sub-algorithm, the problem would have been resolved better and/or faster in one in eight of all incidents, and that this would have been unlikely to cause harm to any patient. The descriptions of the validation of each of the 24 sub-algorithms constitute the remaining 24 papers in this set. Feedback from five meetings, each attended by 60–100 anaesthetists, was then collated and is included. Conclusion: The 24 sub-algorithms developed form the basis for developing a rational evidence-based approach to crisis management during anaesthesia. The COVER component has been found to be satisfactory in real life resuscitation situations and the sub-algorithms have been used successfully for several years. It would now be desirable for carefully designed simulator-based studies, using naive trainees at the start of their training, to systematically examine the merits and demerits of various aspects of the sub-algorithms. It would seem prudent that these sub-algorithms be regarded, for the moment, as decision aids to support and back up clinicians' natural responses to a crisis when all is not progressing as expected. PMID:15933282

  5. P2MP MPLS-Based Hierarchical Service Management System

    NASA Astrophysics Data System (ADS)

    Kumaki, Kenji; Nakagawa, Ikuo; Nagami, Kenichi; Ogishi, Tomohiko; Ano, Shigehiro

    This paper proposes a point-to-multipoint (P2MP) Multi-Protocol Label Switching (MPLS) based hierarchical service management system. Traditionally, general management systems deployed by service providers control MPLS Label Switched Paths (LSPs) (e.g., RSVP-TE and LDP) and services (e.g., L2VPN, L3VPN and IP) separately. In order for dedicated management systems for MPLS LSPs and services to cooperate with each other automatically, a hierarchical service management system has been proposed with the main focus on point-to-point (P2P) TE LSPs in MPLS path management. In the case where P2MP TE LSPs and services are deployed in MPLS networks, the dedicated management systems for P2MP TE LSPs and services must work together automatically. Therefore, this paper proposes a new algorithm that uses a correlation between P2MP TE LSPs and multicast VPN services, based on a P2MP MPLS-based hierarchical service management architecture. The capacity and performance of the proposed algorithm are evaluated by simulations based on real MPLS production networks and compared with those of the algorithm for P2P TE LSPs. The results show that the system is very scalable within real MPLS production networks and that, with its automatic correlation, it appears to be deployable in real MPLS production networks.

  6. Fuel management optimization using genetic algorithms and expert knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1996-09-01

    The CIGARO fuel management optimization code based on genetic algorithms is described and tested. The test problem optimized the core lifetime for a pressurized water reactor with a penalty function constraint on the peak normalized power. A bit-string genotype encoded the loading patterns, and genotype bias was reduced with additional bits. Expert knowledge about fuel management was incorporated into the genetic algorithm. Regional crossover exchanged physically adjacent fuel assemblies and improved the optimization slightly. Biasing the initial population toward a known priority table significantly improved the optimization.
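The abstract's ingredients, a permutation-style loading pattern, a penalty-function constraint on normalised peak power, and a "regional" move that exchanges physically adjacent fuel assemblies, can be sketched as a toy genetic search. The eight-position ring core, all reactivity and weight numbers, and the elitist tournament scheme below are invented for illustration; CIGARO's actual encoding and operators are more elaborate.

```python
import random

random.seed(1)

ADJ = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}  # ring of 8 core positions
POWER = [1.4, 1.3, 1.2, 1.1, 1.0, 0.9, 0.8, 0.7]        # assembly reactivities (invented)
WEIGHT = [1.3, 1.0, 0.8, 1.1, 1.3, 1.0, 0.8, 1.1]       # position importance (invented)

def local_power(pattern):
    """Power produced at each position by the assembly loaded there."""
    return [POWER[a] * WEIGHT[j] for j, a in enumerate(pattern)]

def fitness(pattern, peak_limit=1.25):
    """Core-lifetime proxy with a penalty-function constraint on the
    normalised peak power, mirroring the constraint in the abstract."""
    p = local_power(pattern)
    peak = max(p) * len(p) / sum(p)
    return sum(p) - 100.0 * max(0.0, peak - peak_limit)

def regional_mutation(pattern):
    """'Regional' move: swap an assembly with a physically adjacent position."""
    child = list(pattern)
    i = random.randrange(len(child))
    j = random.choice(ADJ[i])
    child[i], child[j] = child[j], child[i]
    return child

def optimise(generations=200, pop_size=20):
    pop = [random.sample(range(8), 8) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # tournament selection plus regional mutation, with elitism
        pop = [regional_mutation(max(random.sample(pop, 3), key=fitness))
               for _ in range(pop_size - 1)] + [best]
        best = max(pop, key=fitness)
    return best

best_pattern = optimise()
```

The penalty term plays the same role as in the abstract: infeasible patterns (peak power above the limit) are not forbidden outright but are scored down so the search drifts toward feasible, longer-lived loadings.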

  7. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    PubMed Central

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.

    2013-01-01

    Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. 
The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared-nothing parallel database architecture, which distributes data homogeneously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high-performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provides a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
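As a minimal illustration of two ideas the platform combines, grid-based spatial indexing and spatial comparison of algorithm results against human annotations, here is a pure-Python sketch using axis-aligned bounding boxes and area Jaccard similarity. The real system operates on full polygon boundaries inside a parallel SQL database; the cell size, threshold, and box representation here are illustrative assumptions.

```python
def grid_cells(box, cell=10):
    """Grid cells touched by an axis-aligned box (minx, miny, maxx, maxy)."""
    minx, miny, maxx, maxy = box
    return {(cx, cy)
            for cx in range(int(minx) // cell, int(maxx) // cell + 1)
            for cy in range(int(miny) // cell, int(maxy) // cell + 1)}

def build_index(boxes, cell=10):
    """Coarse stand-in for the platform's grid-based spatial indexing."""
    index = {}
    for i, box in enumerate(boxes):
        for c in grid_cells(box, cell):
            index.setdefault(c, []).append(i)
    return index

def jaccard(a, b):
    """Intersection area over union area for two axis-aligned boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def spatial_join(algo, human, cell=10, threshold=0.5):
    """Match algorithm results against human annotations: the index
    restricts candidate pairs, the similarity test validates matches."""
    index = build_index(human, cell)
    matches = []
    for i, a in enumerate(algo):
        candidates = set()
        for c in grid_cells(a, cell):
            candidates.update(index.get(c, []))
        for j in sorted(candidates):
            if jaccard(a, human[j]) >= threshold:
                matches.append((i, j))
    return matches
```

The index is what makes the join scale: only annotation boxes sharing a grid cell with an algorithm box are ever compared, which is the same pruning role the paper's grid-based spatial indexing plays inside its spatial join queries.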

  8. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    NASA Astrophysics Data System (ADS)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request requires, in particular, the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools for evaluating different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupation and that a tradeoff exists between cluster size and algorithm complexity.
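The tradeoff mentioned at the end, more sophisticated allocation achieving higher occupation at higher complexity, can be illustrated with two toy allocation policies over per-cluster capacity pools. The `Cluster` class, the capacities, and the demand figures are invented; the paper's strategies and evaluation metrics are richer than this sketch.

```python
class Cluster:
    """One cluster of the data center's computing resource pool."""
    def __init__(self, capacity):
        self.capacity = capacity   # abstract processing units
        self.load = 0.0

    def allocate(self, demand):
        if self.load + demand <= self.capacity:
            self.load += demand
            return True
        return False

def first_fit(clusters, demand):
    """Cheap policy: place the transceiver chain in the first cluster
    with spare capacity."""
    for k, cluster in enumerate(clusters):
        if cluster.allocate(demand):
            return k
    return None  # session request blocked

def best_fit(clusters, demand):
    """More sophisticated policy: pick the cluster left tightest after
    the allocation; it packs chains more densely at extra search cost."""
    fits = [(c.capacity - c.load - demand, k) for k, c in enumerate(clusters)
            if c.load + demand <= c.capacity]
    if not fits:
        return None
    k = min(fits)[1]
    clusters[k].allocate(demand)
    return k
```

For the same demand sequence the two policies can place the same sessions in different clusters, leaving different residual capacity for future requests; measuring blocking and occupation over realistic request traces is exactly what analysis tools like the paper's are for.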

  9. A Program Complexity Metric Based on Variable Usage for Algorithmic Thinking Education of Novice Learners

    ERIC Educational Resources Information Center

    Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.

    2015-01-01

    We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…

  10. A Clustering Algorithm for Ecological Stream Segment Identification from Spatially Extensive Digital Databases

    NASA Astrophysics Data System (ADS)

    Brenden, T. O.; Clark, R. D.; Wiley, M. J.; Seelbach, P. W.; Wang, L.

    2005-05-01

    Remote sensing and geographic information systems have made it possible to attribute variables for streams at increasingly detailed resolutions (e.g., individual river reaches). Nevertheless, management decisions still must be made at large scales because land and stream managers typically lack sufficient resources to manage on an individual reach basis. Managers thus require a method for identifying stream management units that are ecologically similar and that can be expected to respond similarly to management decisions. We have developed a spatially-constrained clustering algorithm that can merge neighboring river reaches with similar ecological characteristics into larger management units. The clustering algorithm is based on the Cluster Affinity Search Technique (CAST), which was developed for clustering gene expression data. Inputs to the clustering algorithm are the neighbor relationships of the reaches that comprise the digital river network, the ecological attributes of the reaches, and an affinity value, which identifies the minimum similarity for merging river reaches. In this presentation, we describe the clustering algorithm in greater detail and contrast its use with other methods (expert opinion, classification approach, regular clustering) for identifying management units using several Michigan watersheds as a backdrop.
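A minimal sketch of the idea, assuming a one-dimensional reach attribute and a hypothetical inverse-distance affinity: grow each cluster from a seed reach, admitting only reaches that are network neighbours of the cluster and whose mean affinity to it stays above the threshold. The actual CAST-based algorithm differs in its affinity bookkeeping and add/remove phases.

```python
def affinity(a, b):
    """Hypothetical affinity between attribute vectors: inverse distance."""
    d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + d)

def cluster_reaches(attrs, neighbors, threshold=0.5):
    """Greedy spatially constrained clustering in the spirit of CAST:
    only neighbouring reaches whose mean affinity to the current cluster
    meets the threshold may be merged in."""
    unassigned = set(attrs)
    clusters = []
    while unassigned:
        seed = min(unassigned)              # deterministic seed choice
        cluster = {seed}
        unassigned.remove(seed)
        grew = True
        while grew:
            grew = False
            frontier = {n for r in cluster for n in neighbors[r]} & unassigned
            for n in sorted(frontier):
                mean_aff = (sum(affinity(attrs[n], attrs[r]) for r in cluster)
                            / len(cluster))
                if mean_aff >= threshold:
                    cluster.add(n)
                    unassigned.remove(n)
                    grew = True
        clusters.append(sorted(cluster))
    return clusters

# five reaches in a chain; reaches 0, 1, 2 are ecologically alike, 3 and 4 alike
ATTRS = {0: (1.0,), 1: (1.1,), 2: (0.9,), 3: (5.0,), 4: (5.2,)}
NEIGHBORS = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

The spatial constraint is the key difference from ordinary clustering: reach 4 can never join a cluster it is not connected to through the river network, no matter how similar its attributes are.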

  11. State-Based Implicit Coordination and Applications

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony J.; Munoz, Cesar A.

    2011-01-01

    In air traffic management, pairwise coordination is the ability to achieve separation requirements when conflicting aircraft simultaneously maneuver to solve a conflict. Resolution algorithms are implicitly coordinated if they provide coordinated resolution maneuvers to conflicting aircraft when only surveillance data, e.g., position and velocity vectors, is periodically broadcast by the aircraft. This paper proposes an abstract framework for reasoning about state-based implicit coordination. The framework consists of a formalized mathematical development that enables and simplifies the design and verification of implicitly coordinated state-based resolution algorithms. The use of the framework is illustrated with several examples of algorithms and formal proofs of their coordination properties. The work presented here supports the safety case for a distributed self-separation air traffic management concept where different aircraft may use different conflict resolution algorithms and be assured that separation will be maintained.
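The core idea, that each aircraft independently evaluates the same deterministic rule on the same broadcast surveillance data so the resulting maneuvers mesh without any negotiation, can be shown with a deliberately simple vertical-resolution rule. This is a toy stand-in for the geometric criteria the paper formalises; the state encoding and identifiers are invented.

```python
def vertical_resolution(own, traffic):
    """Each aircraft runs this same rule on broadcast (id, altitude) pairs.
    Because both evaluate the same total order on the same surveillance
    data, one always climbs while the other descends: implicitly
    coordinated, with no data exchange beyond periodic state broadcasts."""
    own_id, own_alt = own
    traffic_id, traffic_alt = traffic
    # total order on aircraft: altitude first, identifier as a tie-break
    return 'climb' if (own_alt, own_id) > (traffic_alt, traffic_id) else 'descend'

a = ('N100AB', 31000)   # hypothetical aircraft states
b = ('N200CD', 31000)
# run independently on each flight deck, the two choices always complement:
print(vertical_resolution(a, b), vertical_resolution(b, a))
```

The coordination proof obligation in the paper's framework corresponds to the property tested below: for every pair of states, the two independently computed maneuvers are complementary.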

  12. EEG-based "serious" games and monitoring tools for pain management.

    PubMed

    Sourina, Olga; Wang, Qiang; Nguyen, Minh Khoa

    2011-01-01

    EEG-based "serious games" for medical applications have recently attracted more attention from the research community and industry, as wireless EEG reading devices have become easily available on the market. EEG-based technology has been applied in anaesthesiology, psychology, etc. In this paper, we propose and develop EEG-based "serious" games and doctor's monitoring tools that could be used for pain management. As the EEG signal is considered to have a fractal nature, we propose and develop a novel spatio-temporal fractal-based algorithm for brain state quantification. The algorithm is implemented with blobby visualization tools for patient monitoring and in EEG-based "serious" games. Such games could be used by patients, even in the convenience of their own homes, for pain management as an alternative to traditional drug treatment.
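The paper's spatio-temporal algorithm itself is not spelled out in the abstract. As a conventional stand-in, Higuchi's method is a standard estimator of the fractal dimension of a 1-D time series such as a single EEG channel, and a quantity of this kind could drive a game parameter or a monitoring visualization.

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension estimator for a 1-D time series
    (a standard choice; not the paper's own spatio-temporal algorithm)."""
    n = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, kmax + 1):
        curve = []
        for m in range(k):                      # k down-sampled sub-series
            steps = (n - 1 - m) // k
            if steps < 1:
                continue
            length = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                         for i in range(1, steps + 1))
            curve.append(length * (n - 1) / (steps * k) / k)
        log_inv_k.append(math.log(1.0 / k))
        log_len.append(math.log(sum(curve) / len(curve)))
    # fractal dimension = least-squares slope of log L(k) vs log(1/k)
    mx = sum(log_inv_k) / len(log_inv_k)
    my = sum(log_len) / len(log_len)
    return (sum((u - mx) * (v - my) for u, v in zip(log_inv_k, log_len))
            / sum((u - mx) ** 2 for u in log_inv_k))
```

A smooth signal scores a dimension near 1, while rougher, more irregular signals score higher (up to 2), which is the sense in which such an index quantifies "brain state" from EEG.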

  13. Non-Algorithmic Issues in Automated Computational Mechanics

    DTIC Science & Technology

    1991-04-30

    Tworzydlo, Senior Research Engineer and Manager of Advanced Projects Group... Professor J. T. Oden, President and Senior Scientist of COMCO, was project... practical applications of the systems reported so far is due to the extremely arduous and complex development and management of a realistic knowledge base... software, designed to effectively implement deep, algorithmic knowledge, and "intelligent" software, designed to manage shallow, heuristic

  14. Traffic Flow Management Using Aggregate Flow Models and the Development of Disaggregation Methods

    NASA Technical Reports Server (NTRS)

    Sun, Dengfeng; Sridhar, Banavar; Grabbe, Shon

    2010-01-01

    A linear time-varying aggregate traffic flow model can be used to develop Traffic Flow Management (tfm) strategies based on optimization algorithms. However, there are no methods available in the literature to translate these aggregate solutions into actions involving individual aircraft. This paper describes and implements a computationally efficient disaggregation algorithm, which converts an aggregate (flow-based) solution to a flight-specific control action. Numerical results generated by the optimization method and the disaggregation algorithm are presented and illustrated by applying them to generate TFM schedules for a typical day in the U.S. National Airspace System. The results show that the disaggregation algorithm generates control actions for individual flights while keeping the air traffic behavior very close to the optimal solution.
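A greedy first-come-first-served sketch conveys what "disaggregation" means here: the aggregate solution fixes how many departures each period admits, and individual flights are slotted into the earliest period with remaining capacity. The data shapes and flight identifiers are invented, and the paper's disaggregation algorithm is more sophisticated than this sketch.

```python
def disaggregate(requested, capacity):
    """requested: {flight_id: requested departure period};
    capacity[t]: the aggregate flow solution's maximum departures in period t.
    Returns {flight_id: assigned period}; delay = assigned - requested."""
    free = list(capacity)
    assigned = {}
    # take flights in order of requested time (ties broken by id) and
    # push each one to the first period that still has an open slot
    for flight, t in sorted(requested.items(), key=lambda kv: (kv[1], kv[0])):
        while t < len(free) and free[t] == 0:
            t += 1
        if t == len(free):
            free.append(1)  # spill into an extra period past the horizon
        free[t] -= 1
        assigned[flight] = t
    return assigned
```

The point of a good disaggregation method, as the abstract notes, is that the per-flight actions it produces keep the realized traffic close to the optimal aggregate flow, rather than merely respecting its capacities as this greedy version does.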

  15. Consensus Guidelines on Evaluation and Management of the Febrile Child Presenting to the Emergency Department in India.

    PubMed

    Mahajan, Prashant; Batra, Prerna; Thakur, Neha; Patel, Reena; Rai, Narendra; Trivedi, Nitin; Fassl, Bernhard; Shah, Binita; Lozon, Marie; Oteng, Rockerfeller A; Saha, Abhijeet; Shah, Dheeraj; Galwankar, Sagar

    2017-08-15

    India, home to almost 1.5 billion people, is in need of a country-specific, evidence-based, consensus approach for the emergency department (ED) evaluation and management of the febrile child. We held two consensus meetings, performed an exhaustive literature review, and held ongoing web-based discussions to arrive at a formal consensus on the proposed evaluation and management algorithm. The first meeting was held in Delhi in October 2015, under the auspices of the Pediatric Emergency Medicine (PEM) Section of the Academic College of Emergency Experts in India (ACEE-INDIA), and the second meeting was conducted at Pune during Emergency Medical Pediatrics and Recent Trends (EMPART 2016) in March 2016. The second meeting was followed by further e-mail-based discussions to arrive at a formal consensus on the proposed algorithm. Our objective was to develop an algorithmic approach for the evaluation and management of the febrile child that can be easily applied in the context of emergency care and modified based on local epidemiology and practice standards. We created an algorithm that can assist the clinician in the evaluation and management of the febrile child presenting to the ED, contextualized to health care in India. This guideline includes the following key components: triage and timely assessment, evaluation, and patient disposition from the ED. We urge the development and creation of a robust data repository of minimal standard data elements. This would provide a systematic measurement of care processes and patient outcomes, and a better understanding of the various etiologies of febrile illnesses in India, both of which can be used to further modify the proposed approach and algorithm.

  16. Simulation of Automatic Incidents Detection Algorithm on the Transport Network

    ERIC Educational Resources Information Center

    Nikolaev, Andrey B.; Sapego, Yuliya S.; Jakubovich, Anatolij N.; Berner, Leonid I.; Ivakhnenko, Andrey M.

    2016-01-01

    Management of traffic incident is a functional part of the whole approach to solving traffic problems in the framework of intelligent transport systems. Development of an effective process of traffic incident management is an important part of the transport system. In this research, it's suggested algorithm based on fuzzy logic to detect traffic…

  17. Risk management algorithm for rear-side collision avoidance using a combined steering torque overlay and differential braking

    NASA Astrophysics Data System (ADS)

    Lee, Junyung; Yi, Kyongsu; Yoo, Hyunjae; Chong, Hyokjin; Ko, Bongchul

    2015-06-01

    This paper describes a risk management algorithm for rear-side collision avoidance. The proposed risk management algorithm consists of a supervisor and a coordinator. The supervisor is designed to monitor collision risks between the subject vehicle and an approaching vehicle in the adjacent lane. An appropriate criterion of intervention, which achieves high driver acceptance by taking realistic traffic conditions into account, has been determined based on an analysis of the kinematics of the vehicles in the longitudinal and lateral directions. In order to actively assist the driver and increase driver safety, a coordinator is designed to combine lateral control, using a steering torque overlay by motor-driven power steering, with differential braking by vehicle stability control. In order to prevent the collision while limiting the actuator control inputs and vehicle dynamics to safe values that assure the driver's comfort, Lyapunov theory and linear matrix inequality based optimisation methods have been used. The proposed risk management algorithm has been evaluated via simulation using CarSim and MATLAB/Simulink.
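The supervisor's role can be sketched as a time-to-collision test combined with a lane-departure prediction. All threshold values and the intervention criterion below are illustrative guesses, not the calibrated criterion derived in the paper.

```python
def time_to_collision(rear_gap, closing_speed):
    """Longitudinal TTC to a vehicle approaching from the rear-side;
    infinite when the other vehicle is not actually closing in."""
    return rear_gap / closing_speed if closing_speed > 0 else float('inf')

def supervisor(rear_gap, closing_speed, lateral_offset, lateral_rate,
               ttc_threshold=2.5, lane_half_width=1.75):
    """Illustrative intervention criterion: flag a risk when the subject
    vehicle is drifting so that it would cross into the adjacent lane
    before a low-TTC vehicle arrives. Units: metres, seconds."""
    ttc = time_to_collision(rear_gap, closing_speed)
    drifting = (lateral_rate > 0
                and lateral_offset + lateral_rate * ttc > lane_half_width)
    return drifting and ttc < ttc_threshold
```

In the paper's architecture, a positive supervisor decision would hand control to the coordinator, which arbitrates between the steering torque overlay and differential braking; that control allocation (Lyapunov and LMI based) is beyond this sketch.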

  18. Implementation of an Evidence-Based and Content Validated Standardized Ostomy Algorithm Tool in Home Care: A Quality Improvement Project.

    PubMed

    Bare, Kimberly; Drain, Jerri; Timko-Progar, Monica; Stallings, Bobbie; Smith, Kimberly; Ward, Naomi; Wright, Sandra

    Many nurses have limited experience with ostomy management. We sought to provide a standardized approach to ostomy education and management to support nurses in the early identification of stomal and peristomal complications and pouching problems, and to provide standardized solutions for managing ostomy care in general while improving utilization of formulary products. This article describes the development and testing of an ostomy algorithm tool.

  19. Algorithmic Case Pedagogy, Learning and Gender

    ERIC Educational Resources Information Center

    Bromley, Robert; Huang, Zhenyu

    2015-01-01

    Great investment has been made in developing algorithmically-based cases within online homework management systems. This has been done because publishers are convinced that textbook adoption decisions are influenced by the incorporation of these systems within their products. These algorithmic assignments are thought to promote learning while…

  20. Distributed Pheromone-Based Swarming Control of Unmanned Air and Ground Vehicles for RSTA

    DTIC Science & Technology

    2008-03-20

    Forthcoming in Proceedings of SPIE Defense & Security Conference, March 2008, Orlando, FL. Distributed Pheromone-Based Swarming Control of Unmanned... describes recent advances in a fully distributed digital pheromone algorithm that has demonstrated its effectiveness in managing the complexity of... onboard digital pheromone responding to the needs of the automatic target recognition algorithms. UAVs and UGVs controlled by the same pheromone algorithm

  1. A dynamic programming-based particle swarm optimization algorithm for an inventory management problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao

    2013-07-01

    This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective as compared to the standard PSO algorithm.
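Stripped of the dynamic-programming state handling and the fuzzy random defuzzification, the underlying optimizer is a plain particle swarm. The two-period inventory cost below is an invented crisp stand-in for the authors' model, and all coefficients are illustrative.

```python
import random

random.seed(7)

def cost(q):
    """Toy two-period inventory cost: per-unit ordering cost plus
    holding and shortage penalties against fixed demands."""
    demands = [40.0, 60.0]
    stock, total = 0.0, 0.0
    for order, demand in zip(q, demands):
        stock += order - demand
        total += 0.1 * order + 0.5 * max(stock, 0.0) + 4.0 * max(-stock, 0.0)
    return total

def pso(dim=2, swarm=20, iters=200, lo=0.0, hi=100.0):
    """Plain particle swarm optimisation; the article hybridises this with
    dynamic programming to prune infeasible states, which is omitted here."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return gbest

best_plan = pso()  # order quantities per period
```

For this cost the optimum is to order exactly the demand in each period (cost 10.0); the swarm converges toward it without any feasibility machinery, which is the part the DP hybridisation addresses in the article.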

  2. Application of Harmony Search algorithm to the solution of groundwater management models

    NASA Astrophysics Data System (ADS)

    Tamer Ayvaz, M.

    2009-06-01

    This study proposes a groundwater resources management model in which the solution is performed through a combined simulation-optimization model. A modular three-dimensional finite difference groundwater flow model, MODFLOW, is used as the simulation model. This model is then combined with a Harmony Search (HS) optimization algorithm, which is based on the musical process of searching for a perfect state of harmony. The performance of the proposed HS-based management model is tested on three separate groundwater management problems: (i) maximization of total pumping from an aquifer (steady-state); (ii) minimization of the total pumping cost to satisfy the given demand (steady-state); and (iii) minimization of the pumping cost to satisfy the given demand for multiple management periods (transient). The sensitivity of the HS algorithm is evaluated by performing a sensitivity analysis which aims to determine the impact of the solution parameters on convergence behavior. The results show that HS yields solutions that are nearly the same as or better than those of previous solution methods, and that it may be used to solve management problems in groundwater modeling.
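The HS metaphor maps directly onto code: each decision variable of a new "harmony" is either drawn from harmony memory (rate `hmcr`) and possibly pitch-adjusted within a bandwidth `bw` (rate `par`), or improvised at random; the new harmony replaces the worst one in memory when it scores better. The pumping-cost objective below is an invented stand-in for the MODFLOW-coupled management models, with the demand constraint handled by a penalty.

```python
import random

random.seed(3)

UNIT_COST = [1.0, 1.5, 2.0]   # hypothetical per-well pumping costs
DEMAND = 100.0

def cost(q):
    """Pumping cost plus a penalty for missing the total demand."""
    supply = sum(q)
    return sum(u * x for u, x in zip(UNIT_COST, q)) + 50.0 * abs(supply - DEMAND)

def harmony_search(dim=3, hm_size=10, iters=2000,
                   hmcr=0.9, par=0.3, bw=2.0, lo=0.0, hi=100.0):
    """Bare-bones Harmony Search over per-well pumping rates."""
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
    memory.sort(key=cost)
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:
                value = random.choice(memory)[d]          # memory consideration
                if random.random() < par:
                    value += random.uniform(-bw, bw)      # pitch adjustment
            else:
                value = random.uniform(lo, hi)            # random improvisation
            new.append(min(hi, max(lo, value)))
        if cost(new) < cost(memory[-1]):                  # replace worst harmony
            memory[-1] = new
            memory.sort(key=cost)
    return memory[0]

best_rates = harmony_search()
```

In the study itself each candidate's cost comes from a MODFLOW simulation run rather than a closed-form expression, which is what "combined simulation-optimization" means in practice.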

  3. Deferred discrimination algorithm (nibbling) for target filter management

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John; Johnson, John L.

    1999-07-01

    A new method of classifying objects is presented. Rather than forming the classifier in one step or with one training algorithm, it is formed in a series of small steps, or nibbles. This leads to an efficient and versatile system that is trained serially with single one-shot examples but applied in parallel, is implemented with single-layer perceptrons, and yet maintains a fully sequential hierarchical structure. Based on the nibbling algorithm, a basic new method of target reference filter management is described.

  4. Air Traffic Management Technology Demonstration-1: Research and Procedural Testing of Routes

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.; Kibler, Jennifer L.; Hubbs, Clay E.; Smail, James W.

    2015-01-01

    NASA's Air Traffic Management Technology Demonstration-1 (ATD-1) will operationally demonstrate the feasibility of efficient arrival operations combining ground-based and airborne NASA technologies. The ATD-1 integrated system consists of the Traffic Management Advisor with Terminal Metering which generates precise time-based schedules to the runway and merge points; Controller Managed Spacing decision support tools which provide controllers with speed advisories and other information needed to meet the schedule; and Flight deck-based Interval Management avionics and procedures which allow flight crews to adjust their speed to achieve precise relative spacing. Initial studies identified air-ground challenges related to the integration of these three scheduling and spacing technologies, and NASA's airborne spacing algorithm was modified to address some of these challenges. The Research and Procedural Testing of Routes human-in-the-loop experiment was then conducted to assess the performance of the new spacing algorithm. The results of this experiment indicate that the algorithm performed as designed, and the pilot participants found the airborne spacing concept, air-ground procedures, and crew interface to be acceptable. However, the researchers concluded that the data revealed issues with the frequency of speed changes and speed reversals.

  5. Approaches to drug therapy for COPD in Russia: a proposed therapeutic algorithm.

    PubMed

    Zykov, Kirill A; Ovcharenko, Svetlana I

    2017-01-01

    Until recently, there have been few clinical algorithms for the management of patients with COPD. Current evidence-based clinical management guidelines can appear to be complex, and they lack clear step-by-step instructions. For these reasons, we chose to create a simple and practical clinical algorithm for the management of patients with COPD, which would be applicable to real-world clinical practice, and which was based on clinical symptoms and spirometric parameters that would take into account the pathophysiological heterogeneity of COPD. This optimized algorithm has two main fields, one for nonspecialist treatment by primary care and general physicians and the other for treatment by specialized pulmonologists. Patients with COPD are treated with long-acting bronchodilators and short-acting drugs on a demand basis. If the forced expiratory volume in one second (FEV1) is ≥50% of predicted and symptoms are mild, treatment with a single long-acting muscarinic antagonist or long-acting beta-agonist is proposed. When FEV1 is <50% of predicted and/or the COPD assessment test score is ≥10, the use of combined bronchodilators is advised. If there is no response to treatment after three months, referral to a pulmonary specialist is recommended for pathophysiological endotyping: 1) eosinophilic endotype with peripheral blood or sputum eosinophilia >3%; 2) neutrophilic endotype with peripheral blood neutrophilia >60% or green sputum; or 3) pauci-granulocytic endotype. It is hoped that this simple, optimized, step-by-step algorithm will help to individualize the treatment of COPD in real-world clinical practice. This algorithm has yet to be evaluated prospectively or by comparison with other COPD management algorithms, including its effects on patient treatment outcomes. However, it is hoped that this algorithm may be useful in daily clinical practice for physicians treating patients with COPD in Russia.
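The branching described in the abstract transcribes almost directly into code (illustrative only, not medical advice). The function names are ours, and we read "symptoms are mild" as a COPD assessment test score below 10, since the abstract pairs a score ≥10 with combined bronchodilators.

```python
def copd_recommendation(fev1_percent, cat_score, responded_after_3_months=None):
    """Transcription of the abstract's branching: FEV1 >= 50% of predicted
    with mild symptoms gets a single long-acting bronchodilator; otherwise
    bronchodilators are combined; non-response after three months adds a
    referral for endotyping."""
    if fev1_percent >= 50 and cat_score < 10:
        plan = 'single long-acting bronchodilator (LAMA or LABA)'
    else:
        plan = 'combined long-acting bronchodilators'
    if responded_after_3_months is False:
        plan += '; refer to a pulmonary specialist for endotyping'
    return plan

def endotype(eosinophils_pct, neutrophils_pct, green_sputum=False):
    """Pathophysiological endotyping thresholds quoted in the abstract."""
    if eosinophils_pct > 3:
        return 'eosinophilic'
    if neutrophils_pct > 60 or green_sputum:
        return 'neutrophilic'
    return 'pauci-granulocytic'
```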

  6. Algorithm of first-aid management of dental trauma for medics and corpsmen.

    PubMed

    Zadik, Yehuda

    2008-12-01

    To bridge the gap between the need to provide prompt and proper treatment to dental trauma patients and the inadequate knowledge among medics and corpsmen, compounded by the lack of instructions in first-aid textbooks and manuals, a simple algorithm for non-professional first-aid management of various injuries to the hard (teeth) and soft oral tissues was developed from a review of the dental literature. The algorithm covers the recommended management of tooth avulsion, subluxation and luxation, crown fracture, and laceration of the lip, tongue or gingiva. Along with a list of after-hours dental clinics, this symptom- and clinical-appearance-based algorithm tucks easily into a pocket for quick use by medics and corpsmen in an emergency. Although the algorithm was developed for military non-dental health-care providers, it could be adjusted and employed in the civilian environment as well.

  7. Method for concurrent execution of primitive operations by dynamically assigning operations based upon computational marked graph and availability of data

    NASA Technical Reports Server (NTRS)

    Mielke, Roland V. (Inventor); Stoughton, John W. (Inventor)

    1990-01-01

    Computationally complex primitive operations of an algorithm are executed concurrently in a plurality of functional units under the control of an assignment manager. The algorithm is preferably defined as a computational marked graph containing data status edges (paths) corresponding to each of the data flow edges. The assignment manager assigns primitive operations to the functional units and monitors completion of the primitive operations to determine data availability using the computational marked graph of the algorithm. All data accessing of the primitive operations is performed by the functional units independently of the assignment manager.
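
    The assignment-manager idea can be illustrated with a toy scheduler that dispatches an operation only once all of its input data have been produced, mimicking the data-status tracking of a computational marked graph. This is a simplified sketch under stated assumptions, not the patented design; the operation names and two-unit limit are illustrative.

```python
def schedule(ops, deps, num_units=2):
    """Toy assignment manager: repeatedly find operations whose input data
    are all available (their predecessors in `deps` have completed) and
    issue up to `num_units` of them concurrently. Returns the issued batches."""
    done, order = set(), []
    pending = set(ops)
    while pending:
        # operations with all inputs produced are ready for assignment
        ready = sorted(op for op in pending if all(d in done for d in deps.get(op, [])))
        if not ready:
            raise RuntimeError("cycle in dependency graph")
        batch = ready[:num_units]       # at most num_units execute concurrently
        order.append(batch)
        done.update(batch)
        pending.difference_update(batch)
    return order
```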

  8. An improved harmony search algorithm for emergency inspection scheduling

    NASA Astrophysics Data System (ADS)

    Kallioras, Nikos A.; Lagaros, Nikos D.; Karlaftis, Matthew G.

    2014-11-01

    The ability of nature-inspired search algorithms to efficiently handle combinatorial problems, and their successful implementation in many fields of engineering and applied sciences, have led to the development of new, improved algorithms. In this work, an improved harmony search (IHS) algorithm is presented, and a holistic approach for solving the problem of post-disaster infrastructure management is also proposed. The efficiency of IHS is compared with that of particle swarm optimization, differential evolution, basic harmony search and a pure random search procedure when solving the districting problem, which is the first part of post-disaster infrastructure management. The ant colony optimization algorithm is employed for solving the associated routing problem that constitutes the second part. The comparison is based on the quality of the results obtained, the computational demands and the sensitivity to the algorithmic parameters.
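
    For readers unfamiliar with the base method, here is a sketch of basic harmony search, the standard algorithm the paper improves upon (this is not the IHS variant; the parameter names HMCR, PAR and bandwidth follow common usage, and the values are illustrative).

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Minimize f over [lo, hi]^dim with basic harmony search:
    improvise a new harmony from memory (prob. hmcr), optionally
    pitch-adjust it (prob. par), and replace the worst memory member
    whenever the new harmony is better."""
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for j in range(dim):
            if rng.random() < hmcr:            # draw from harmony memory...
                v = hm[rng.randrange(hms)][j]
                if rng.random() < par:         # ...with optional pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                              # or sample the range at random
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        worst = max(range(hms), key=scores.__getitem__)
        s = f(new)
        if s < scores[worst]:
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return hm[best], scores[best]
```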

  9. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    PubMed

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming

    2016-08-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, we use a coevolutionary strategy to construct the algorithm. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate its appropriate parameters. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management.

  10. Research on key technologies for data-interoperability-based metadata, data compression and encryption, and their application

    NASA Astrophysics Data System (ADS)

    Yu, Xu; Shao, Quanqin; Zhu, Yunhai; Deng, Yuejin; Yang, Haijun

    2006-10-01

    With the growth of informatization and the separation between data management departments and application departments, spatial data sharing has become one of the most important objectives of spatial information infrastructure construction, and spatial metadata management, data transmission security and data compression are the key technologies for realizing it. This paper discusses the key technologies for metadata based on data interoperability and examines data compression algorithms such as adaptive Huffman, LZ77 and LZ78. It studies the application of digital signatures to spatial data, which can both identify the transmitter of the data and promptly reveal whether the data were tampered with during network transmission. Based on an analysis of the symmetric encryption algorithms 3DES and AES and the asymmetric encryption algorithm RSA, combined with a hash algorithm, an improved mixed encryption method for spatial data is presented. Digital signature technology and digital watermarking technology are also discussed. A new solution for spatial data network distribution is then put forward, adopting a three-layer architecture. Based on this framework, we present a spatial data network distribution system that is efficient and safe, and we demonstrate the feasibility and validity of the proposed solution.
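
    Of the compression algorithms the paper examines, LZ77 is the simplest to sketch. Below is a minimal, unoptimized triple-emitting version with its inverse, shown only to make the sliding-window idea concrete; it is not the paper's implementation, and the window/lookahead sizes are illustrative.

```python
def lz77_compress(data: bytes, window=255, lookahead=15):
    """Emit LZ77 (offset, length, next_byte) triples: each step copies the
    longest recent match from the sliding window, then one literal byte."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            while l < lookahead and i + l < len(data) and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_off, best_len = i - j, l
        best_len = min(best_len, len(data) - i - 1)   # keep a literal next byte
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    out = bytearray()
    for off, length, nxt in triples:
        for _ in range(length):
            out.append(out[-off])    # byte-by-byte copy handles overlaps
        out.append(nxt)
    return bytes(out)
```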

  11. Distributed Prognostic Health Management with Gaussian Process Regression

    NASA Technical Reports Server (NTRS)

    Saha, Sankalita; Saha, Bhaskar; Saxena, Abhinav; Goebel, Kai Frank

    2010-01-01

    Distributed prognostics architecture design is an enabling step for efficient implementation of health management systems. A major challenge encountered in such design is the formulation of optimal distributed prognostics algorithms. In this paper, we present a distributed GPR-based prognostics algorithm whose target platform is a wireless sensor network. In addition to the challenges encountered in a distributed implementation, a wireless network poses constraints on communication patterns, thereby making the problem more challenging. The prognostics application used to demonstrate our new algorithms is battery prognostics. In order to present the trade-offs between different prognostic approaches, we present a comparison with a distributed implementation of particle-filter-based prognostics on the same battery data.
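
    A plain, centralized Gaussian process regression predictor with an RBF kernel shows the regression building block underlying the approach; the paper's distribution of this computation across wireless sensor nodes is not reproduced here, and the hyperparameter values are illustrative.

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, sigma_f=1.0, sigma_n=1e-2):
    """Standard GP regression: posterior mean and variance at test points Xs
    given 1-D training inputs X and targets y, under an RBF kernel."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f ** 2 * np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + sigma_n ** 2 * np.eye(len(X))   # noisy training covariance
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var
```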

  12. Design of a TDOA location engine and development of a location system based on chirp spread spectrum.

    PubMed

    Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang

    2016-01-01

    Location-based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signal formatting with time-difference-of-arrival (TDOA) ranging technology is an effective LBS technique with regard to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were respectively applied to smooth the TDOA range measurements and the location results, which are calculated by the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute error below 0.6 m.
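
    A one-dimensional Kalman filter of the kind applied here to smooth noisy range measurements can be sketched as follows. The process-noise and measurement-noise values are illustrative assumptions, not the values used in the study.

```python
def kalman_smooth(measurements, q=1e-3, r=0.25):
    """Scalar Kalman filter for a slowly varying range: q is the process
    noise variance, r the measurement noise variance. Returns the filtered
    estimate after each measurement."""
    x, p = measurements[0], 1.0      # initial state and uncertainty
    out = [x]
    for z in measurements[1:]:
        p += q                       # predict: uncertainty grows
        gain = p / (p + r)           # update: blend prediction and measurement
        x += gain * (z - x)
        p *= (1 - gain)
        out.append(x)
    return out
```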

  13. The GOES-R Product Generation Architecture - Post CDR Update

    NASA Astrophysics Data System (ADS)

    Dittberner, G.; Kalluri, S.; Weiner, A.

    2012-12-01

    The GOES-R system will substantially improve the accuracy of information available to users by providing data from significantly enhanced instruments, which will generate an increased number and diversity of products with higher resolution, and much shorter relook times. Considerably greater compute and memory resources are necessary to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. 
Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.

  14. The GOES-R Product Generation Architecture

    NASA Astrophysics Data System (ADS)

    Dittberner, G. J.; Kalluri, S.; Hansen, D.; Weiner, A.; Tarpley, A.; Marley, S.

    2011-12-01

    The GOES-R system will substantially improve users' ability to succeed in their work by providing data with significantly enhanced instruments, higher resolution, much shorter relook times, and an increased number and diversity of products. The Product Generation architecture is designed to provide the computer and memory resources necessary to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. 
Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.

  15. Development of a simple algorithm to guide the effective management of traumatic cardiac arrest.

    PubMed

    Lockey, David J; Lyon, Richard M; Davies, Gareth E

    2013-06-01

    Major trauma is the leading worldwide cause of death in young adults. The mortality from traumatic cardiac arrest remains high, but survival with good neurological outcome from cardiopulmonary arrest following major trauma has been regularly reported. Rapid, effective intervention is required to address potentially reversible causes of traumatic cardiac arrest if the victim is to survive. Current ILCOR guidelines do not contain a standard algorithm for the management of traumatic cardiac arrest. We present a simple algorithm to manage the major trauma patient in actual or imminent cardiac arrest. We reviewed the published English-language literature on traumatic cardiac arrest and major trauma management. A treatment algorithm was developed based on this review and on the experience of treating more than a thousand traumatic cardiac arrests in a physician-paramedic pre-hospital trauma service. The algorithm addresses the need to treat potentially reversible causes of traumatic cardiac arrest. This includes immediate resuscitative thoracotomy in cases of penetrating chest trauma, airway management, optimising oxygenation, correction of hypovolaemia, and chest decompression to exclude tension pneumothorax. The requirement to rapidly address a number of potentially reversible pathologies in a short time period makes the management of traumatic cardiac arrest well suited to a simple treatment algorithm. A standardised approach may prevent delay in diagnosis and treatment and improve current poor survival rates. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. Trajectory-Oriented Approach to Managing Traffic Complexity: Trajectory Flexibility Metrics and Algorithms and Preliminary Complexity Impact Assessment

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Vivona, Robert A.; Al-Wakil, Tarek

    2009-01-01

    This document describes exploratory research on a distributed, trajectory oriented approach for traffic complexity management. The approach is to manage traffic complexity based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents metrics for trajectory flexibility; a method for estimating these metrics based on discrete time and degree of freedom assumptions; a planning algorithm using these metrics to preserve flexibility; and preliminary experiments testing the impact of preserving trajectory flexibility on traffic complexity. The document also describes an early demonstration capability of the trajectory flexibility preservation function in the NASA Autonomous Operations Planner (AOP) platform.

  17. Increasing Complexity in Rule-Based Clinical Decision Support: The Symptom Assessment and Management Intervention.

    PubMed

    Lobach, David F; Johns, Ellis B; Halpenny, Barbara; Saunders, Toni-Ann; Brzozowski, Jane; Del Fiol, Guilherme; Berry, Donna L; Braun, Ilana M; Finn, Kathleen; Wolfe, Joanne; Abrahm, Janet L; Cooley, Mary E

    2016-11-08

    Management of uncontrolled symptoms is an important component of quality cancer care. Clinical guidelines are available for optimal symptom management, but are not often integrated into the front lines of care. The use of clinical decision support (CDS) at the point-of-care is an innovative way to incorporate guideline-based symptom management into routine cancer care. The objective of this study was to develop and evaluate a rule-based CDS system to enable management of multiple symptoms in lung cancer patients at the point-of-care. This study was conducted in three phases involving a formative evaluation, a system evaluation, and a contextual evaluation of clinical use. In Phase 1, we conducted iterative usability testing of user interface prototypes with patients and health care providers (HCPs) in two thoracic oncology clinics. In Phase 2, we programmed complex algorithms derived from clinical practice guidelines into a rules engine that used Web services to communicate with the end-user application. Unit testing of algorithms was conducted using a stack-traversal tree-spanning methodology to identify all possible permutations of pathways through each algorithm, to validate accuracy. In Phase 3, we evaluated clinical use of the system among patients and HCPs in the two clinics via observations, structured interviews, and questionnaires. In Phase 1, 13 patients and 5 HCPs engaged in two rounds of formative testing, and suggested improvements leading to revisions until overall usability scores met a priori benchmarks. In Phase 2, symptom management algorithms contained between 29 and 1425 decision nodes, resulting in 19 to 3194 unique pathways per algorithm. Unit testing required 240 person-hours, and integration testing required 40 person-hours. In Phase 3, both patients and HCPs found the system usable and acceptable, and offered suggestions for improvements. A rule-based CDS system for complex symptom management was systematically developed and tested. 
The complexity of the algorithms required extensive development and innovative testing. The Web service-based approach allowed remote access to CDS knowledge, and could enable scaling and sharing of this knowledge to accelerate availability, and reduce duplication of effort. Patients and HCPs found the system to be usable and useful. ©David F Lobach, Ellis B Johns, Barbara Halpenny, Toni-Ann Saunders, Jane Brzozowski, Guilherme Del Fiol, Donna L Berry, Ilana M Braun, Kathleen Finn, Joanne Wolfe, Janet L Abrahm, Mary E Cooley. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 08.11.2016.
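
    The unit-testing idea described above, enumerating every possible pathway through a decision algorithm, can be sketched with an explicit stack over a nested decision tree. The node format is an illustrative assumption; the study's actual "stack-traversal tree-spanning" implementation is not published in this abstract.

```python
def all_paths(tree):
    """Enumerate every root-to-leaf path of a nested decision tree using an
    explicit stack (iterative depth-first traversal). A node is a dict
    {"id": ..., "children": [...]}; a leaf has no "children" entry."""
    paths, stack = [], [(tree, [tree["id"]])]
    while stack:
        node, path = stack.pop()
        kids = node.get("children", [])
        if not kids:
            paths.append(path)       # reached a leaf: one complete pathway
        for child in kids:
            stack.append((child, path + [child["id"]]))
    return paths
```

    Validating an algorithm then amounts to checking each enumerated path against the source guideline, which is why node counts of 29 to 1425 produced 19 to 3194 unique pathways per algorithm.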

  18. Congestion Pricing for Aircraft Pushback Slot Allocation.

    PubMed

    Liu, Lihua; Zhang, Yaping; Liu, Lan; Xing, Zhiwei

    2017-01-01

    In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the "external cost of surface congestion" is proposed, and a quantitative study on the external cost is performed. Then, an aircraft pushback slot allocation model for minimizing the total surface cost is established. An improved discrete differential evolution algorithm is also designed. Finally, a simulation is performed on Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited for use in actual aircraft pushback management during rush hour. Further, it is also observed that they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm.

  19. Congestion Pricing for Aircraft Pushback Slot Allocation

    PubMed Central

    Zhang, Yaping

    2017-01-01

    In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the “external cost of surface congestion” is proposed, and a quantitative study on the external cost is performed. Then, an aircraft pushback slot allocation model for minimizing the total surface cost is established. An improved discrete differential evolution algorithm is also designed. Finally, a simulation is performed on Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited for use in actual aircraft pushback management during rush hour. Further, it is also observed that they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm. PMID:28114429

  20. Spectral unmixing of agents on surfaces for the Joint Contaminated Surface Detector (JCSD)

    NASA Astrophysics Data System (ADS)

    Slamani, Mohamed-Adel; Chyba, Thomas H.; LaValley, Howard; Emge, Darren

    2007-09-01

    ITT Corporation, Advanced Engineering and Sciences Division, is currently developing the Joint Contaminated Surface Detector (JCSD) technology under an Advanced Concept Technology Demonstration (ACTD) managed jointly by the U.S. Army Research, Development, and Engineering Command (RDECOM) and the Joint Project Manager for Nuclear, Biological, and Chemical Contamination Avoidance for incorporation on the Army's future reconnaissance vehicles. This paper describes the design of the chemical agent identification (ID) algorithm associated with JCSD. The algorithm detects target chemicals mixed with surface and interferent signatures. Simulated data sets were generated from real instrument measurements to support a matrix of parameters based on a Design of Experiments (DOE) approach. Decisions based on receiver operating characteristic (ROC) curves and area-under-the-curve (AUC) measures were used to down-select between several ID algorithms. Results from the top-performing algorithms were then combined via a fusion approach to converge towards optimum rates of detection and false alarm. This paper describes the process associated with the algorithm design and provides an illustrative example.

  1. Use of an evidence-based algorithm for patients with traumatic hemothorax reduces need for additional interventions.

    PubMed

    Dennis, Bradley M; Gondek, Stephen P; Guyer, Richard A; Hamblin, Susan E; Gunter, Oliver L; Guillamondegui, Oscar D

    2017-04-01

    Concerted management of the traumatic hemothorax is ill-defined. Surgical management of specific hemothoraces may be beneficial. A comprehensive strategy to delineate appropriate patients for additional procedures does not exist. We developed an evidence-based algorithm for hemothorax management. We hypothesize that the use of this algorithm will decrease additional interventions. A pre-/post-study was performed on all patients admitted to our trauma service with traumatic hemothorax from August 2010 to September 2013. An evidence-based management algorithm was initiated for the management of retained hemothoraces. Patients with length of stay (LOS) less than 24 hours or admitted during an implementation phase were excluded. Study data included age, Injury Severity Score, Abbreviated Injury Scale chest, mechanism of injury, ventilator days, intensive care unit (ICU) LOS, total hospital LOS, and interventions required. Our primary outcome was the number of patients requiring more than 1 intervention. Secondary outcomes were empyema rate, number of patients requiring specific additional interventions, 28-day ventilator-free days, 28-day ICU-free days, hospital LOS, and all-cause 6-month readmission rate. Standard statistical analysis was performed for all data. Six hundred forty-two patients (326 pre and 316 post) met the study criteria. There were no demographic differences between the groups. The number of patients requiring more than 1 intervention was significantly reduced (49 pre vs. 28 post, p = 0.02). The number of patients requiring VATS decreased (27 pre vs. 10 post, p < 0.01). The number of catheters placed by interventional radiology increased (2 pre vs. 10 post, p = 0.02). Intrapleural thrombolytic use, open thoracotomy, empyema, and 6-month readmission rates were unchanged. The "post" group had more ventilator-free days (median, 23.9 vs. 22.5, p = 0.04), but ICU and hospital LOS were unchanged. 
Using an evidence-based hemothorax algorithm reduced the number of patients requiring additional interventions without increasing complication rates. Defined criteria for surgical intervention allows for more appropriate utilization of resources. Therapeutic study, level IV.

  2. SOSS User Guide

    NASA Technical Reports Server (NTRS)

    Zhu, Zhifan; Gridnev, Sergei; Windhorst, Robert D.

    2015-01-01

    This User Guide describes SOSS (Surface Operations Simulator and Scheduler) software build and graphic user interface. SOSS is a desktop application that simulates airport surface operations in fast time using traffic management algorithms. It moves aircraft on the airport surface based on information provided by scheduling algorithm prototypes, monitors separation violation and scheduling conformance, and produces scheduling algorithm performance data.

  3. A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems

    NASA Astrophysics Data System (ADS)

    Abtahi, Amir-Reza; Bijari, Afsane

    2017-03-01

    In this paper, a hybrid meta-heuristic algorithm based on the imperialist competitive algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the harmony-creation process of the HS algorithm to improve the exploitation phase of ICA. In addition, the proposed hybrid algorithm uses SA to strike a balance between the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. The comprehensive experiments and statistical analysis on standard benchmark functions certify the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising, and it can be used in several real-life engineering and management problems.
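
    The SA component's role in balancing exploration and exploitation rests on the Metropolis acceptance rule: worse moves are accepted with probability exp(-delta/T), which shrinks as the temperature cools. A standalone sketch of that rule follows (step size, cooling schedule and iteration budget are illustrative; this is not the paper's hybrid).

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=3000, seed=1):
    """Minimize f starting from x0: propose a random perturbation, accept
    improvements always and deteriorations with probability exp(-delta/T),
    then geometrically cool the temperature T."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest, t = list(x), fx, t0
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in x]
        fc = f(cand)
        # Metropolis criterion: high T accepts freely (exploration),
        # low T accepts almost only improvements (exploitation)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```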

  4. Western Trauma Association Critical Decisions in Trauma: Management of rib fractures.

    PubMed

    Brasel, Karen J; Moore, Ernest E; Albrecht, Roxie A; deMoya, Marc; Schreiber, Martin; Karmy-Jones, Riyad; Rowell, Susan; Namias, Nicholas; Cohen, Mitchell; Shatz, David V; Biffl, Walter L

    2017-01-01

    This is a recommended management algorithm from the Western Trauma Association addressing the management of adult patients with rib fractures. Because there is a paucity of published prospective randomized clinical trials that have generated Class I data, these recommendations are based primarily on published observational studies and expert opinion of Western Trauma Association members. The algorithm and accompanying comments represent a safe and sensible approach that can be followed at most trauma centers. We recognize that there will be patient, personnel, institutional, and situational factors that may warrant or require deviation from the recommended algorithm. We encourage institutions to use this as a guideline to develop their own local protocols.

  5. A novel fair active queue management algorithm based on traffic delay jitter

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Shun; Yu, Shao-Hua; Dai, Jin-You; Luo, Ting

    2009-11-01

    In order to guarantee the quantity of data traffic delivered in the network, congestion control strategies are adopted. Based on a study of many active queue management (AQM) algorithms, this paper proposes a novel AQM algorithm named JFED. JFED stabilizes the queue length at a desirable level by adjusting the output traffic rate and by calculating the packet drop probability from both the buffer queue length and the traffic delay jitter. Because it accounts for packet delay jitter, it supports bursty packet traffic and is better suited to carrying streaming-media data. JFED imposes effective punishment on non-responsive flows with a fully stateless method. To verify its performance, JFED was implemented in NS2 and compared with RED and CHOKe with respect to different performance metrics. Simulation results show that the proposed JFED algorithm outperforms RED and CHOKe in stabilizing the instantaneous queue length and in fairness. It is also shown that JFED enables the link capacity to be fully utilized by stabilizing the queue length at a desirable level, while not incurring an excessive packet loss ratio.
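
    The abstract does not give JFED's drop-probability formula, but the idea of combining buffer occupancy with delay jitter can be illustrated with a RED-style sketch. All thresholds, the linear ramp, and the jitter discount below are assumptions for illustration, not JFED's actual rule.

```python
def drop_probability(avg_queue, jitter, min_th=20, max_th=80,
                     max_p=0.1, jitter_ref=5.0):
    """RED-like drop probability with a jitter term: probability ramps
    linearly between min_th and max_th, and high measured jitter (a sign
    of a short burst) scales the probability down rather than punishing
    the burst."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    p = max_p * (avg_queue - min_th) / (max_th - min_th)   # linear RED ramp
    if jitter > jitter_ref:                                # tolerate bursts
        p *= jitter_ref / jitter
    return min(p, 1.0)
```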

  6. Evaluation of Algorithms for a Miles-in-Trail Decision Support Tool

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Hattaway, David; Bambos, Nicholas

    2012-01-01

    Four machine learning algorithms were prototyped and evaluated for use in a proposed decision support tool that would assist air traffic managers as they set Miles-in-Trail restrictions. The tool would display probabilities that each possible Miles-in-Trail value should be used in a given situation. The algorithms were evaluated with an expected Miles-in-Trail cost that assumes traffic managers set restrictions based on the tool-suggested probabilities. Basic Support Vector Machine, random forest, and decision tree algorithms were evaluated, as was a softmax regression algorithm that was modified to explicitly reduce the expected Miles-in-Trail cost. The algorithms were evaluated with data from the summer of 2011 for air traffic flows bound to the Newark Liberty International Airport (EWR) over the ARD, PENNS, and SHAFF fixes. The algorithms were provided with 18 input features that describe the weather at EWR, the runway configuration at EWR, the scheduled traffic demand at EWR and the fixes, and other traffic management initiatives in place at EWR. Features describing other traffic management initiatives at EWR and the weather at EWR achieved relatively high information gain scores, indicating that they are the most useful for estimating Miles-in-Trail. In spite of a high variance or over-fitting problem, the decision tree algorithm achieved the lowest expected Miles-in-Trail costs when the algorithms were evaluated using 10-fold cross validation with the summer 2011 data for these air traffic flows.
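
    The information-gain scoring used above to rank the 18 input features can be sketched directly from its definition: the entropy of the label distribution minus the weighted entropy of the partitions induced by a feature. This is the standard definition, not code from the study.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Information gain of a categorical feature: parent entropy minus the
    weighted entropy of the subsets induced by each feature value."""
    n = len(labels)
    split = {}
    for v, y in zip(feature_values, labels):
        split.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder
```

    A feature that perfectly predicts the label yields the full parent entropy as gain, while an uninformative feature yields zero, which is how the weather and traffic-management-initiative features were found most useful.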

  7. Optimizing construction quality management of pavements using mechanistic performance analysis.

    DOT National Transportation Integrated Search

    2004-08-01

    This report presents a statistical-based algorithm that was developed to reconcile the results from several pavement performance models used in the state of practice with systematic process control techniques. These algorithms identify project-specif...

  8. Applicability of an established management algorithm for destructive colon injuries after abbreviated laparotomy: a 17-year experience.

    PubMed

    Sharpe, John P; Magnotti, Louis J; Weinberg, Jordan A; Shahan, Charles P; Cullinan, Darren R; Marino, Katy A; Fabian, Timothy C; Croce, Martin A

    2014-04-01

    For more than a decade, operative decisions (resection plus anastomosis vs diversion) for colon injuries, at our institution, have followed a defined management algorithm based on established risk factors (pre- or intraoperative transfusion requirements of more than 6 units packed RBCs and/or presence of significant comorbid diseases). However, this management algorithm was originally developed for patients managed with a single laparotomy. The purpose of this study was to evaluate the applicability of this algorithm to destructive colon injuries after abbreviated laparotomy (AL) and to determine whether additional risk factors should be considered. Consecutive patients over a 17-year period with colon injuries after AL were identified. Nondestructive injuries were managed with primary repair. Destructive wounds were resected at the initial laparotomy followed by either a staged diversion (SD) or a delayed anastomosis (DA) at the subsequent exploration. Outcomes were evaluated to identify additional risk factors in the setting of AL. We identified 149 patients: 33 (22%) patients underwent primary repair at initial exploration, 42 (28%) underwent DA, and 72 (49%) had SD. Two (1%) patients died before re-exploration. Of those undergoing DA, 23 (55%) patients were managed according to the algorithm and 19 (45%) were not. Adherence to the algorithm resulted in lower rates of suture line failure (4% vs 32%, p = 0.03) and colon-related morbidity (22% vs 58%, p = 0.03) for patients undergoing DA. No additional specific risk factors for suture line failure after DA were identified. Adherence to an established algorithm, originally defined for destructive colon injuries after single laparotomy, is likewise efficacious for the management of these injuries in the setting of AL. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  9. An algorithm for management of deep brain stimulation battery replacements: devising a web-based battery estimator and clinical symptom approach.

    PubMed

    Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S

    2013-01-01

    Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
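    The interpolation-of-averaged-current-drains approach the authors critique can be sketched in a few lines; every figure below (usable capacity, drains, hours) is invented for illustration, and the result carries exactly the error sources listed in the abstract:

```python
def estimate_longevity_years(usable_capacity_mah, settings_history):
    """
    Naive IPG longevity estimate: usable battery capacity divided by the
    time-weighted average current drain.  `settings_history` is a list of
    (hours_at_setting, drain_uA) pairs; all figures are illustrative.
    """
    total_hours = sum(h for h, _ in settings_history)
    avg_drain_ma = sum(h * i for h, i in settings_history) / total_hours / 1000.0
    return usable_capacity_mah / avg_drain_ma / (24 * 365)

# Hypothetical 1500 mAh usable capacity, two stimulation settings
print(round(estimate_longevity_years(1500, [(1000, 40), (1000, 60)]), 2))  # 3.42 years
```

Device variation, voltage dependence, impedance fluctuations, and self-discharge are all folded into one averaged drain here, which is why the paper pairs such estimates with clinical symptoms.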

  10. The String Stability of a Trajectory-Based Interval Management Algorithm in the Midterm Airspace

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.

    2015-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides terminal controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain a precise spacing interval behind a target aircraft. As the percentage of IM equipped aircraft increases, controllers may provide IM clearances to sequences, or strings, of IM-equipped aircraft. It is important for these strings to maintain stable performance. This paper describes an analytic analysis of the string stability of the latest version of NASA's IM algorithm and a fast-time simulation designed to characterize the string performance of the IM algorithm. The analytic analysis showed that the spacing algorithm has stable poles, indicating that a spacing error perturbation will be reduced as a function of string position. The fast-time simulation investigated IM operations at two airports using constraints associated with the midterm airspace, including limited information of the target aircraft's intended speed profile and limited information of the wind forecast on the target aircraft's route. The results of the fast-time simulation demonstrated that the performance of the spacing algorithm is acceptable for strings of moderate length; however, there is some degradation in IM performance as a function of string position.
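    A toy first-order model (our illustration, not NASA's spacing law) shows what a stable pole means for a string: each aircraft attenuates its predecessor's spacing error by a gain g, and |g| < 1 guarantees the perturbation decays with string position:

```python
def propagate_error(initial_error_s, gain, string_length):
    """Spacing error at each string position under a first-order
    propagation model e[i+1] = gain * e[i]."""
    errors = [initial_error_s]
    for _ in range(string_length - 1):
        errors.append(gain * errors[-1])
    return errors

def is_string_stable(gain):
    """Stable pole: |gain| < 1, so perturbations decay along the string."""
    return abs(gain) < 1.0

print([round(e, 2) for e in propagate_error(30.0, 0.8, 5)])
# [30.0, 24.0, 19.2, 15.36, 12.29] -> the 30 s perturbation shrinks
print(is_string_stable(0.8), is_string_stable(1.1))  # True False
```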

  11. A novel minimum cost maximum power algorithm for future smart home energy management.

    PubMed

    Singaravelan, A; Kowsalya, M

    2017-11-01

With the latest developments in smart grid technology, energy management systems can be efficiently implemented at consumer premises. In this paper, an energy management system with wireless communication and a smart meter is designed to schedule electric home appliances efficiently, with the aim of reducing cost and peak demand. For an efficient scheduling scheme, the appliances are classified into two types: uninterruptible and interruptible appliances. The problem formulation was constructed from practical constraints, which lets the proposed algorithm cope with real-time situations. The formulated problem was identified as a Mixed Integer Linear Programming (MILP) problem and was solved by a step-wise approach. This paper proposes a novel Minimum Cost Maximum Power (MCMP) algorithm to solve the formulated problem. The proposed algorithm was simulated with input data available in the existing method, and for validation the results were compared with the existing method. The compared results prove that the proposed algorithm efficiently reduces the consumer's electricity cost and peak demand to an optimal level, with 100% task completion and without sacrificing consumer comfort.
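    The MCMP algorithm itself is not detailed in the abstract, but the uninterruptible/interruptible split can be illustrated with a simple greedy scheduler over a hypothetical hourly tariff: an interruptible load takes the cheapest hours wherever they fall, while an uninterruptible load needs the cheapest contiguous window:

```python
def schedule_interruptible(prices, hours_needed):
    """Run an interruptible appliance in the cheapest hours (may be split)."""
    order = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(order[:hours_needed])

def schedule_uninterruptible(prices, duration):
    """Run an uninterruptible appliance in the cheapest contiguous window."""
    costs = [sum(prices[s:s + duration])
             for s in range(len(prices) - duration + 1)]
    start = min(range(len(costs)), key=costs.__getitem__)
    return list(range(start, start + duration))

prices = [8, 3, 9, 3, 5, 3, 12, 10]       # hypothetical hourly tariff
print(schedule_interruptible(prices, 3))   # [1, 3, 5] -> split across cheap hours
print(schedule_uninterruptible(prices, 3)) # [3, 4, 5] -> one contiguous block
```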

  12. French national consensus clinical guidelines for the management of ulcerative colitis.

    PubMed

    Peyrin-Biroulet, Laurent; Bouhnik, Yoram; Roblin, Xavier; Bonnaud, Guillaume; Hagège, Hervé; Hébuterne, Xavier

    2016-07-01

    Ulcerative colitis (UC) is a chronic inflammatory bowel disease of multifactorial etiology that primarily affects the colonic mucosa. The disease progresses over time, and clinical management guidelines should reflect its dynamic nature. There is limited evidence supporting UC management in specific clinical situations, thus precluding an evidence-based approach. To use a formal consensus method - the nominal group technique (NGT) - to develop a clinical practice expert opinion to outline simple algorithms and practices, optimize UC management, and assist clinicians in making treatment decisions. The consensus was developed by an expert panel of 37 gastroenterologists from various professional organizations with experience in UC management using the qualitative and iterative NGT, incorporating deliberations based on the European Crohn's and Colitis Organisation recommendations, recent reviews of scientific literature, and pertinent discussion topics developed by a steering committee. Examples of clinical cases for which there are limited evidence-based data from clinical trials were used. Two working groups proposed and voted on treatment algorithms that were then discussed and voted for by the nominal group as a whole, in order to reach a consensus. A clinical practice guideline covering management of the following clinical situations was developed: (i) moderate and severe UC; (ii) acute severe UC; (iii) pouchitis; (iv) refractory proctitis, in the form of treatment algorithms. Given the limited available evidence-based data, a formal consensus methodology was used to develop simple treatment guidelines for UC management in different clinical situations that is now accessible via an online application. Copyright © 2016 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  13. Survivable algorithms and redundancy management in NASA's distributed computing systems

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw

    1992-01-01

    The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We give an overview of our hybrid algorithm technique, which is an alternative to the formal approach given.

  14. [Coagulation Monitoring and Bleeding Management in Cardiac Surgery].

    PubMed

    Bein, Berthold; Schiewe, Robert

    2018-05-01

The transfusion of allogeneic blood products is associated with increased morbidity and mortality. Impaired hemostasis is frequently found in patients undergoing cardiac surgery and may in turn cause bleeding and transfusions. Goal-directed coagulation management addressing the often complex coagulation disorders requires sophisticated diagnostics; this may improve both patient outcomes and costs. Recent data suggest that coagulation management based on a rational algorithm is more effective than traditional therapy based on conventional laboratory variables such as PT and INR. Platelet inhibitors, coumarins, direct oral anticoagulants and heparin each require different diagnostic and therapeutic approaches. An algorithm specifically developed for use during cardiac surgery is presented. Georg Thieme Verlag KG Stuttgart · New York.

  15. Algorithms for synthesizing management solutions based on OLAP-technologies

    NASA Astrophysics Data System (ADS)

    Pishchukhin, A. M.; Akhmedyanova, G. F.

    2018-05-01

OLAP technologies are a convenient means of analyzing large amounts of information. This work attempts to improve the synthesis of optimal management decisions with them. The developed algorithms allow forecasting of the needs for, and the management decisions taken on, the main types of enterprise resources. Their advantage is efficiency, owing to the simplicity of quadratic functions and first-order differential equations. At the same time, resources are optimally redistributed between the different product types in the enterprise's assortment, and the allocated resources are optimally distributed over time. The proposed solutions can be placed on additional, specially introduced coordinates of the hypercube representing the data warehouse.
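    The appeal to the simplicity of quadratic functions can be made concrete. With a concave quadratic profit a·x − b·x² per product line, the optimal split of a resource budget equalises marginal profits via a single Lagrange multiplier (a sketch with invented coefficients, ignoring non-negativity constraints):

```python
def allocate(resources_total, params):
    """
    Split a resource budget across product lines with concave quadratic
    profit a*x - b*x**2 (one (a, b) pair per line, b > 0), by equalising
    marginal profit a - 2*b*x via a Lagrange multiplier.
    """
    inv = [1.0 / (2.0 * b) for _, b in params]
    lam = (sum(a * w for (a, _), w in zip(params, inv)) - resources_total) / sum(inv)
    return [(a - lam) / (2.0 * b) for a, b in params]

alloc = allocate(10.0, [(8.0, 1.0), (6.0, 1.0)])
print(alloc)  # [5.5, 4.5] -> marginal profit is -3.0 in both lines
```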

  16. Management of Computer-Based Instruction: Design of an Adaptive Control Strategy.

    ERIC Educational Resources Information Center

    Tennyson, Robert D.; Rothen, Wolfgang

    1979-01-01

    Theoretical and research literature on learner, program, and adaptive control as forms of instructional management are critiqued in reference to the design of computer-based instruction. An adaptive control strategy using an online, iterative algorithmic model is proposed. (RAO)

  17. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.

    PubMed

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-07-08

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.
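    The mode-transition overhead the authors address is commonly handled with a break-even test: powering down pays off only when the idle interval exceeds the transition energy overhead divided by the power saved while asleep (a generic DPM sketch with invented numbers, not the T-L plane algorithm itself):

```python
def break_even_ms(transition_energy_mj, active_mw, sleep_mw):
    """Idle time above which sleeping saves energy: the mode-transition
    energy overhead divided by the power saved while asleep."""
    return 1000.0 * transition_energy_mj / (active_mw - sleep_mw)

def should_power_down(idle_ms, transition_energy_mj, active_mw, sleep_mw):
    """DPM policy: sleep only if the predicted idle interval beats the
    break-even time; otherwise the transition overhead dominates."""
    return idle_ms > break_even_ms(transition_energy_mj, active_mw, sleep_mw)

print(break_even_ms(2.0, 40.0, 20.0))           # 100.0 ms
print(should_power_down(150, 2.0, 40.0, 20.0))  # True
print(should_power_down(50, 2.0, 40.0, 20.0))   # False
```

Reducing idle-time fragmentation, as the paper proposes, matters precisely because many short idle slots fail this test while one merged slot passes it.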

  18. Refueling Strategies for a Team of Cooperating AUVs

    DTIC Science & Technology

    2011-01-01

manager, and thus the constraint a centrally managed underwater network imposes on the mission. Task management utilizing Robust Decentralized Task ...the computational complexity. A bid-based approach to task management has also been studied as a possible means of decentralization of group task ...currently performing another task. In [18], ground robots perform distributed task allocation using the ASyMTRy-D algorithm, which is based on CNP

  19. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
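    The flavour of the linear performance approximations can be sketched with a constant-rate top-of-descent estimate (illustrative numbers only, not the flight-tested profile, and without the wind and nonstandard-atmosphere corrections the algorithm applies):

```python
def top_of_descent_nm(cruise_alt_ft, fix_alt_ft, descent_rate_fpm, ground_speed_kt):
    """
    Constant-rate approximation: minutes of descent times ground speed
    gives the distance before the metering fix at which the idle-thrust
    descent must begin.
    """
    descent_minutes = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm
    return ground_speed_kt * descent_minutes / 60.0

# FL350 to a 10,000 ft metering fix at 2,500 ft/min and 420 kt ground speed
print(top_of_descent_nm(35000, 10000, 2500, 420))  # 70.0 nm
```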

  20. Multimodality imaging of ovarian cystic lesions: Review with an imaging based algorithmic approach

    PubMed Central

    Wasnik, Ashish P; Menias, Christine O; Platt, Joel F; Lalchandani, Usha R; Bedi, Deepak G; Elsayes, Khaled M

    2013-01-01

    Ovarian cystic masses include a spectrum of benign, borderline and high grade malignant neoplasms. Imaging plays a crucial role in characterization and pretreatment planning of incidentally detected or suspected adnexal masses, as diagnosis of ovarian malignancy at an early stage is correlated with a better prognosis. Knowledge of differential diagnosis, imaging features, management trends and an algorithmic approach of such lesions is important for optimal clinical management. This article illustrates a multi-modality approach in the diagnosis of a spectrum of ovarian cystic masses and also proposes an algorithmic approach for the diagnosis of these lesions. PMID:23671748

  1. Trust-Based Design of Human-Guided Algorithms

    DTIC Science & Technology

    2007-06-01

Management Interdepartmental Program in Operations Research 17 May, 2007 Approved by: Laura Major Forest The Charles Stark Draper Laboratory...2. Information Analysis: predicting based on data, integrating and managing information, augmenting human operator perception and cognition. 3...allocation of automation by designers and managers. How an operator decides between manual and automatic control of a system is a necessary

  2. An Algorithm-Based Approach for Behavior and Disease Management in Children.

    PubMed

    Meyer, Beau D; Lee, Jessica Y; Thikkurissy, S; Casamassimo, Paul S; Vann, William F

    2018-03-15

    Pharmacologic behavior management for dental treatment is an approach to provide invasive yet compassionate care for young children; it can facilitate the treatment of children who otherwise may not cooperate for traditional in-office care. Some recent highly publicized procedural sedation-related tragedies have drawn attention to risks associated with pharmacologic management. However, it remains widely accepted that, by adhering to proper guidelines, procedural sedation can assist in the provision of high-quality dental care while minimizing morbidity and mortality from the procedure. The purpose of this paper was to propose an algorithm for clinicians to consider when selecting a behavior and disease management strategy for early childhood caries. This algorithm will not ensure a positive outcome but can assist clinicians when counseling caregivers about risks, benefits, and alternatives. It also emphasizes and underscores best-safety practices.

  3. Development of sensor-based nitrogen recommendation algorithms for cereal crops

    NASA Astrophysics Data System (ADS)

    Asebedo, Antonio Ray

Nitrogen (N) management is one of the most recognizable components of farming both within and outside the world of agriculture. Interest over the past decade has greatly increased in improving N management systems in corn (Zea mays) and winter wheat (Triticum aestivum) to achieve high nitrogen use efficiency (NUE) and high yield while remaining environmentally sustainable. Nine winter wheat experiments were conducted across seven locations from 2011 through 2013. The objectives of this study were to evaluate the impacts of fall-winter, Feekes 4, Feekes 7, and Feekes 9 N applications on winter wheat grain yield, grain protein, and total grain N uptake. Nitrogen treatments were applied as single or split applications in the fall-winter, and top-dressed in the spring at Feekes 4, Feekes 7, and Feekes 9 with applied N rates ranging from 0 to 134 kg ha-1. Results indicate that Feekes 7 and 9 N applications provide more optimal combinations of grain yield, grain protein levels, and fertilizer N recovered in the grain when compared to comparable rates of N applied in the fall-winter or at Feekes 4. Winter wheat N management studies from 2006 through 2013 were utilized to develop sensor-based N recommendation algorithms for winter wheat in Kansas. Algorithm RosieKat v.2.6 was designed for multiple N application strategies and utilized N reference strips for establishing N response potential. Algorithm NRS v1.5 addressed single top-dress N applications and does not require a N reference strip. In 2013, field validations of both algorithms were conducted at eight locations across Kansas. Results show algorithm RK v2.6 consistently provided highly efficient N recommendations for improving NUE, while achieving high grain yield and grain protein. Without the use of the N reference strip, NRS v1.5 performed statistically equal to the KSU soil test N recommendation with regard to grain yield but with lower applied N rates.
Six corn N fertigation experiments were conducted at KSU irrigated experiment fields from 2012 through 2014 to evaluate the previously developed KSU sensor-based N recommendation algorithm in corn N fertigation systems. Results indicate that the current KSU corn algorithm was effective at achieving high yields, but has the tendency to overestimate N requirements. To optimize sensor-based N recommendations for N fertigation systems, algorithms must be specifically designed for these systems to take advantage of their full capabilities, thus allowing implementation of high NUE N management systems.
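    Reference-strip logic of the general kind described above (not the RosieKat or NRS code itself) is often expressed as a sufficiency index that scales a maximum top-dress rate; the 134 kg ha-1 cap below echoes the rate range in the study, while the NDVI values are invented:

```python
def n_recommendation_kg_ha(ndvi_field, ndvi_reference, max_n_rate=134.0):
    """
    Generic sensor-based sketch: a sufficiency index compares crop
    reflectance in the managed field with the non-limiting N reference
    strip, and the recommended top-dress rate scales with the apparent
    N deficit, capped at `max_n_rate`.
    """
    sufficiency = min(ndvi_field / ndvi_reference, 1.0)
    return round(max_n_rate * (1.0 - sufficiency), 1)

print(n_recommendation_kg_ha(0.60, 0.75))  # 26.8 kg/ha
print(n_recommendation_kg_ha(0.75, 0.75))  # 0.0 -> crop matches the reference strip
```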

  4. Guideline for the management of terminal haemorrhage in palliative care patients with advanced cancer discharged home for end-of-life care.

    PubMed

    Ubogagu, Edith; Harris, Dylan G

    2012-12-01

    Terminal haemorrhage is a rare and distressing emergency in palliative oncology. We present an algorithm for the management of terminal haemorrhage in patients likely to receive end-of-life care at home, based on a literature review of the management of terminal haemorrhage for patients with advanced cancer, where a DNAR (do not attempt resuscitation) order is in place and the patient wishes to die at home. A literature review was conducted to identify literature on the management of terminal haemorrhage in patients with advanced cancer who are no longer amenable to active interventional/invasive procedures. Electronic databases, the grey literature, local guidelines from hospitals and hospices, and online web portals were all searched systematically. The literature review was used to formulate a management algorithm. The evidence base is very limited. A three-step practical algorithm is suggested: preparing for the event, managing the event ('ABC') and 'aftercare'. Step 1 involves the identification and optimisation of risk factors. Step 2 (the event) consists of A (assure and re-assure the patient), B (be there - above all stay with the patient) and C (comfort, calm, consider dark towels and anxiolytics if possible). Step 3 (the aftercare) involves the provision of practical and psychological support to those involved including relatives and professionals. Terminal haemorrhage is a rare yet highly feared complication of advanced cancer, for which there is a limited evidence base to guide management. The suggested three-step approach to managing this situation gives professionals a logical framework within which to work.

  5. Fuzzy Algorithm for the Detection of Incidents in the Transport System

    ERIC Educational Resources Information Center

    Nikolaev, Andrey B.; Sapego, Yuliya S.; Jakubovich, Anatolij N.; Berner, Leonid I.; Stroganov, Victor Yu.

    2016-01-01

In this paper, an algorithm is proposed for the management of traffic incidents, aimed at minimizing the impact of incidents on road traffic in general. The proposed algorithm is based on the theory of fuzzy sets and provides identification of accidents, as well as the adoption of appropriate measures to address them as soon as possible. A…

  6. Assessment of utility side financial benefits of demand side management considering environmental impacts

    NASA Astrophysics Data System (ADS)

    Abeygunawardane, Saranga Kumudu

    2018-02-01

    Any electrical utility prefers to implement demand side management and change the shape of the demand curve in a beneficial manner. This paper aims to assess the financial gains (or losses) to the generating sector through the implementation of demand side management programs. An optimization algorithm is developed to find the optimal generation mix that minimizes the daily total generating cost. This daily total generating cost includes the daily generating cost as well as the environmental damage cost. The proposed optimization algorithm is used to find the daily total generating cost for the base case and for several demand side management programs using the data obtained from the Sri Lankan power system. Results obtained for DSM programs are compared with the results obtained for the base case to assess the financial benefits of demand side management to the generating sector.
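    The core of the optimization can be conveyed with a merit-order sketch: rank hypothetical units by generating cost plus environmental damage cost per MWh and load them cheapest first until demand is met (the paper's actual algorithm and Sri Lankan system data are richer than this):

```python
def dispatch(units, demand_mw):
    """Merit-order dispatch on combined generating + damage cost per MWh."""
    order = sorted(units, key=lambda u: u["gen_cost"] + u["damage_cost"])
    plan, remaining = [], demand_mw
    for u in order:
        take = min(u["capacity_mw"], remaining)
        if take > 0:
            plan.append((u["name"], take))
            remaining -= take
    return plan

units = [  # hypothetical unit data, costs in $/MWh
    {"name": "coal",  "capacity_mw": 300, "gen_cost": 40, "damage_cost": 25},
    {"name": "hydro", "capacity_mw": 200, "gen_cost": 10, "damage_cost": 1},
    {"name": "oil",   "capacity_mw": 150, "gen_cost": 90, "damage_cost": 30},
]
print(dispatch(units, 400))  # [('hydro', 200), ('coal', 200)]
```

A demand-side-management program that flattens the demand curve shifts load out of the expensive tail of this merit order, which is where the utility's financial gain (or loss) appears.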

  7. Evaluation of the performance of existing non-laboratory based cardiovascular risk assessment algorithms

    PubMed Central

    2013-01-01

    Background The high burden and rising incidence of cardiovascular disease (CVD) in resource constrained countries necessitates implementation of robust and pragmatic primary and secondary prevention strategies. Many current CVD management guidelines recommend absolute cardiovascular (CV) risk assessment as a clinically sound guide to preventive and treatment strategies. Development of non-laboratory based cardiovascular risk assessment algorithms enable absolute risk assessment in resource constrained countries. The objective of this review is to evaluate the performance of existing non-laboratory based CV risk assessment algorithms using the benchmarks for clinically useful CV risk assessment algorithms outlined by Cooney and colleagues. Methods A literature search to identify non-laboratory based risk prediction algorithms was performed in MEDLINE, CINAHL, Ovid Premier Nursing Journals Plus, and PubMed databases. The identified algorithms were evaluated using the benchmarks for clinically useful cardiovascular risk assessment algorithms outlined by Cooney and colleagues. Results Five non-laboratory based CV risk assessment algorithms were identified. The Gaziano and Framingham algorithms met the criteria for appropriateness of statistical methods used to derive the algorithms and endpoints. The Swedish Consultation, Framingham and Gaziano algorithms demonstrated good discrimination in derivation datasets. Only the Gaziano algorithm was externally validated where it had optimal discrimination. The Gaziano and WHO algorithms had chart formats which made them simple and user friendly for clinical application. Conclusion Both the Gaziano and Framingham non-laboratory based algorithms met most of the criteria outlined by Cooney and colleagues. External validation of the algorithms in diverse samples is needed to ascertain their performance and applicability to different populations and to enhance clinicians’ confidence in them. PMID:24373202

  8. Medicaid beneficiaries in california reported less positive experiences when assigned to a managed care plan.

    PubMed

    McDonnell, Diana D; Graham, Carrie L

    2015-03-01

    In 2011 California began transitioning approximately 340,000 seniors and people with disabilities from Medicaid fee-for-service (FFS) to Medicaid managed care plans. When beneficiaries did not actively choose a managed care plan, the state assigned them to one using an algorithm based on their previous FFS primary and specialty care use. When no clear link could be established, beneficiaries were assigned by default to a managed care plan based on weighted randomization. In this article we report the results of a telephone survey of 1,521 seniors and people with disabilities enrolled in Medi-Cal (California Medicaid) and who were recently transitioned to a managed care plan. We found that 48 percent chose their own plan, 11 percent were assigned to a plan by algorithm, and 41 percent were assigned to a plan by default. People in the latter two categories reported being similarly less positive about their experiences compared to beneficiaries who actively chose a plan. Many states in addition to California are implementing mandatory transitions of Medicaid-only beneficiaries to managed care plans. Our results highlight the importance of encouraging beneficiaries to actively choose their health plan; when beneficiaries do not choose, states should employ robust intelligent assignment algorithms. Project HOPE—The People-to-People Health Foundation, Inc.

  9. Medical management of hyperglycaemia in type 2 diabetes mellitus: a consensus algorithm for the initiation and adjustment of therapy: a consensus statement from the American Diabetes Association and the European Association for the Study of Diabetes.

    PubMed

    Nathan, D M; Buse, J B; Davidson, M B; Ferrannini, E; Holman, R R; Sherwin, R; Zinman, B

    2009-01-01

    The consensus algorithm for the medical management of type 2 diabetes was published in August 2006 with the expectation that it would be updated, based on the availability of new interventions and new evidence to establish their clinical role. The authors continue to endorse the principles used to develop the algorithm and its major features. We are sensitive to the risks of changing the algorithm cavalierly or too frequently, without compelling new information. An update to the consensus algorithm published in January 2008 specifically addressed safety issues surrounding the thiazolidinediones. In this revision, we focus on the new classes of medications that now have more clinical data and experience.

  10. Managing Distributed Systems with Smart Subscriptions

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Lee, Diana D.; Swanson, Keith (Technical Monitor)

    2000-01-01

    We describe an event-based, publish-and-subscribe mechanism based on using 'smart subscriptions' to recognize weakly-structured events. We present a hierarchy of subscription languages (propositional, predicate, temporal and agent) and algorithms for efficiently recognizing event matches. This mechanism has been applied to the management of distributed applications.
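    At the propositional tier of such a hierarchy, a subscription can be modelled as a set of attribute predicates that must all hold on an event (a minimal sketch with invented event fields, not the paper's matching engine):

```python
def matches(subscription, event):
    """Propositional subscription: every (attribute, predicate) pair must
    hold on the event; a missing attribute fails the match."""
    return all(attr in event and pred(event[attr])
               for attr, pred in subscription.items())

# Hypothetical subscription for a weakly-structured monitoring event
sub = {
    "type": lambda v: v == "cpu_load",
    "host": lambda v: v.startswith("node-"),
    "value": lambda v: v > 0.9,
}
print(matches(sub, {"type": "cpu_load", "host": "node-3", "value": 0.95}))  # True
print(matches(sub, {"type": "cpu_load", "host": "node-3", "value": 0.50}))  # False
```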

  11. Optimization-based Approach to Cross-layer Resource Management in Wireless Networked Control Systems

    DTIC Science & Technology

    2013-05-01

interest from both academia and industry [37], finding applications in unmanned robotic vehicles, automated highways and factories, smart homes and...is stable when the scaler varies slowly. The algorithm is further extended to utilize the slack resource in the network, which leads to the...model; Optimal sampling rate allocation formulation; Price-based algorithm

  12. An Elementary Algorithm for Autonomous Air Terminal Merging and Interval Management

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2017-01-01

A central element of air traffic management is the safe merging and spacing of aircraft during the terminal area flight phase. This paper derives and examines an algorithm for the merging and interval management problem for Standard Terminal Arrival Routes. It describes a factor analysis for performance based on the distribution of arrivals, the operating period of the terminal, and the topology of the arrival routes; it then presents results from a performance analysis and from a safety analysis for a realistic topology based on typical routes for a runway at Phoenix International Airport. The heart of the safety analysis is a statistical derivation of how to conduct a safety analysis for a local simulation when the safety requirement is given for the entire airspace.

  13. An approximate dynamic programming approach to resource management in multi-cloud scenarios

    NASA Astrophysics Data System (ADS)

    Pietrabissa, Antonio; Priscoli, Francesco Delli; Di Giorgio, Alessandro; Giuseppi, Alessandro; Panfili, Martina; Suraci, Vincenzo

    2017-03-01

The programmability and the virtualisation of network resources are crucial to deploying scalable Information and Communications Technology (ICT) services. The increasing demand for cloud services, mainly devoted to storage and computing, requires a new functional element, the Cloud Management Broker (CMB), aimed at managing multiple cloud resources to meet customers' requirements and, simultaneously, to optimise their usage. This paper proposes a multi-cloud resource allocation algorithm that manages resource requests with the aim of maximising the CMB revenue over time. The algorithm is based on Markov decision process modelling and relies on reinforcement learning techniques to find an approximate solution online.
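As a sketch of the reinforcement-learning flavour of such an allocator (the paper's MDP model and revenue function are not reproduced; the capacity, prices and release dynamics below are toy assumptions), tabular Q-learning can learn an online accept/reject policy for resource requests:

```python
import random

# Toy sketch, not the paper's model: a broker with C resource slots receives
# one unit request per step; accepting earns revenue 1 and consumes a slot,
# and each step an occupied slot is released with probability 0.5. Tabular
# Q-learning learns an accept(1)/reject(0) policy over the free-slot count.
random.seed(0)
C = 3
Q = {(s, a): 0.0 for s in range(C + 1) for a in (0, 1)}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(free, action):
    reward = 0.0
    if action == 1 and free > 0:
        reward, free = 1.0, free - 1      # accept: earn revenue, use a slot
    if free < C and random.random() < 0.5:
        free += 1                          # an occupied slot is released
    return free, reward

free = C
for _ in range(5000):
    if random.random() < eps:
        a = random.choice((0, 1))          # explore
    else:
        a = max((0, 1), key=lambda x: Q[(free, x)])  # exploit
    nxt, r = step(free, a)
    Q[(free, a)] += alpha * (r + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)]) - Q[(free, a)])
    free = nxt
```

In this toy setting the learned policy should prefer accepting whenever capacity is free; the paper's setting additionally weighs heterogeneous request classes and prices.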

  14. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
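The particle-filter idea behind such model-based prognostics can be sketched on a deliberately simple degradation model; the dynamics, noise levels and failure threshold below are toy assumptions, not the paper's valve physics.

```python
import math, random, statistics

# Illustrative particle-filter sketch (not the paper's valve model): track a
# scalar damage state d that grows at an unknown rate w, and predict end of
# life as the time at which d crosses a failure threshold.
random.seed(1)
true_rate, dt, threshold, sigma = 0.05, 1.0, 5.0, 0.05
N = 500
particles = [(0.0, random.uniform(0.0, 0.1)) for _ in range(N)]  # (damage, rate)

d_true = 0.0
for _ in range(40):
    d_true += true_rate * dt
    z = d_true + random.gauss(0.0, sigma)                 # noisy measurement
    particles = [(d + w * dt, w) for d, w in particles]   # propagate
    weights = [math.exp(-((d - z) ** 2) / (2 * sigma ** 2)) for d, _ in particles]
    if sum(weights) == 0.0:
        weights = [1.0] * N                               # guard: underflow
    particles = random.choices(particles, weights=weights, k=N)  # resample

# remaining-useful-life prediction per particle: (threshold - d) / w
rul = statistics.median((threshold - d) / w for d, w in particles if w > 1e-6)
```

The true remaining life in this toy setup is (5.0 - 2.0) / 0.05 = 60 steps; the particle estimate should land in that neighbourhood, with the spread of the particle cloud serving as the kind of uncertainty estimate the abstract refers to.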

  15. Algorithms for in-season nutrient management in cereals

    USDA-ARS?s Scientific Manuscript database

    The demand for improved decision making products for cereal production systems has placed added emphasis on using plant sensors in-season, and that incorporate real-time, site specific, growing environments. The objective of this work was to describe validated in-season sensor based algorithms prese...

  16. Investigation of energy management strategies for photovoltaic systems - A predictive control algorithm

    NASA Technical Reports Server (NTRS)

    Cull, R. C.; Eltimsahy, A. H.

    1983-01-01

The present investigation is concerned with the formulation of energy management strategies for stand-alone photovoltaic (PV) systems, taking into account a basic control algorithm for a possible predictive (and adaptive) controller. The control system controls the flow of energy in the system according to the amount of energy available, and predicts the appropriate control set-points based on the energy (insolation) available by using an appropriate system model. Aspects of adaptation to the conditions of the system are also considered. Attention is given to a statistical analysis technique, the analysis inputs, the analysis procedure, and details regarding the basic control algorithm.

  17. Operation management of daily economic dispatch using novel hybrid particle swarm optimization and gravitational search algorithm with hybrid mutation strategy

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Huang, Song; Ji, Zhicheng

    2017-07-01

This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs) under market-based energy pricing. A daily ED model was formulated, and a hybrid mutation strategy combining two mutation operators, chaotic mutation and Gaussian mutation, was adopted in HGSAPSO-M. The proposed algorithm was tested on the IEEE 33-bus system, and results show that the approach is effective for this problem.
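The hybrid-mutation idea can be illustrated on a toy dispatch problem; the sketch below is a plain PSO with a Gaussian/chaotic (logistic-map) mutation step, not the full HGSAPSO-M (which also hybridizes with gravitational search), and all coefficients are invented.

```python
import random

# Toy sketch: two generators with quadratic cost must meet a demand of 10;
# demand mismatch is penalized. A PSO with a hybrid Gaussian/chaotic
# mutation step searches for the least-cost dispatch.
random.seed(2)

def cost(p):
    p1, p2 = p
    return 0.1 * p1 ** 2 + 0.2 * p2 ** 2 + 10.0 * abs(p1 + p2 - 10.0)

n, dims, iters = 20, 2, 300
pos = [[random.uniform(0.0, 10.0) for _ in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=cost)[:]
chaos = 0.7                          # logistic-map state for chaotic mutation

for _ in range(iters):
    chaos = 4.0 * chaos * (1.0 - chaos)        # logistic map stays in (0, 1)
    for i in range(n):
        for d in range(dims):
            vel[i][d] = (0.6 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if random.random() < 0.1:              # hybrid mutation step
            d = random.randrange(dims)
            if random.random() < 0.5:
                pos[i][d] += random.gauss(0.0, 0.5)   # Gaussian mutation
            else:
                pos[i][d] += chaos - 0.5              # chaotic mutation
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
            if cost(pbest[i]) < cost(gbest):
                gbest = pbest[i][:]
```

The mutation step is there to fight premature convergence: Gaussian perturbations provide local jitter, while the chaotic term injects a deterministic but non-repeating sequence of offsets.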

  18. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    NASA Technical Reports Server (NTRS)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  19. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors

    PubMed Central

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-01-01

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction. PMID:27399722

  20. Multi-mode energy management strategy for fuel cell electric vehicles based on driving pattern identification using learning vector quantization neural network algorithm

    NASA Astrophysics Data System (ADS)

    Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong

    2018-06-01

The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Because a single energy management strategy cannot cope with the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, comprising a driving-pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving-pattern recognizer from a vehicle's driving information. The multi-mode strategy can automatically switch to a genetic-algorithm-optimized thermostat strategy under specific driving conditions, according to the condition recognition results. Simulation experiments were carried out after validating the model on a dynamometer test bench. Simulation results show that the proposed strategy achieves better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
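The LVQ component can be sketched in a few lines of LVQ1; the two synthetic "patterns" (urban vs. highway, described by mean speed and speed variance) and all parameters below are illustrative, not the paper's feature set.

```python
import random

# Minimal LVQ1 sketch of a driving-pattern recognizer: one prototype per
# class; the nearest prototype is attracted toward a training sample when
# its class matches and repelled otherwise.
random.seed(3)

def make_sample():
    label = random.randrange(2)
    if label == 0:                              # urban: slow, high variance
        return [random.gauss(30.0, 5.0), random.gauss(8.0, 1.0)], label
    return [random.gauss(90.0, 5.0), random.gauss(3.0, 1.0)], label  # highway

protos = [[40.0, 6.0], [80.0, 4.0]]             # one prototype per class
labels = [0, 1]
lr = 0.05

def nearest(x):
    return min(range(len(protos)),
               key=lambda k: sum((p - xi) ** 2 for p, xi in zip(protos[k], x)))

for _ in range(2000):
    x, y = make_sample()
    j = nearest(x)
    sign = 1.0 if labels[j] == y else -1.0      # attract if correct, else repel
    protos[j] = [p + sign * lr * (xi - p) for p, xi in zip(protos[j], x)]

def classify(x):
    return labels[nearest(x)]
```

In the multi-mode strategy, the recognized label would then select which energy management controller (for example, the GA-optimized thermostat strategy) is active.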

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frenkel, G.; Paterson, T.S.; Smith, M.E.

The Institute for Defense Analyses (IDA) has collected and analyzed information on battle management algorithm technology that is relevant to Battle Management/Command, Control and Communications (BM/C3). This Memorandum Report represents a program plan that will provide the BM/C3 Directorate of the Strategic Defense Initiative Organization (SDIO) with administrative and technical insight into algorithm technology. This program plan focuses on current activity in algorithm development and provides information and analysis to the SDIO to be used in formulating budget requirements for FY 1988 and beyond. Based upon analysis of algorithm requirements and ongoing programs, recommendations have been made for research areas that should be pursued, including both the continuation of current work and the initiation of new tasks. This final report includes all relevant material from interim reports as well as new results.

  2. Performance of the "CCS Algorithm" in real world patients.

    PubMed

    LaHaye, Stephen A; Olesen, Jonas B; Lacombe, Shawn P

    2015-06-01

With the publication of the 2014 Focused Update of the Canadian Cardiovascular Society Guidelines for the Management of Atrial Fibrillation, the Canadian Cardiovascular Society Atrial Fibrillation Guidelines Committee has introduced a new triage and management algorithm, the so-called "CCS Algorithm". The CCS Algorithm is based upon expert opinion of the best available evidence; however, the CCS Algorithm has not yet been validated. Accordingly, the purpose of this study is to evaluate the performance of the CCS Algorithm in a cohort of real world patients. We compared the CCS Algorithm with the European Society of Cardiology (ESC) Algorithm in 172 hospital inpatients at risk of stroke due to non-valvular atrial fibrillation in whom anticoagulant therapy was being considered. The CCS Algorithm and the ESC Algorithm were concordant in 170/172 patients (99% of the time). There were two patients (1%) with vascular disease, but no other thromboembolic risk factors, who were classified as requiring oral anticoagulant therapy using the ESC Algorithm, but for whom ASA was recommended by the CCS Algorithm. The CCS Algorithm appears to be unnecessarily complicated insofar as it does not appear to provide any additional discriminatory value beyond the ESC Algorithm, and its use could result in undertreatment of patients, specifically female patients with vascular disease, whose real risk of stroke has been understated by the Guidelines.

  3. Adaptive mechanism-based congestion control for networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Zhang, Yun; Chen, C. L. Philip

    2013-03-01

To assure communication quality in network systems with heavy traffic and limited bandwidth, a new ATRED (adaptive thresholds random early detection) congestion control algorithm is proposed for congestion avoidance and resource management in networked systems. Unlike traditional AQM (active queue management) algorithms, the control parameters of ATRED are not configured statically but dynamically adjusted by an adaptive mechanism. By integrating the adaptive strategy, ATRED alleviates the tuning difficulty of RED (random early detection), achieves better control of queue management, and delivers more robust performance than RED under varying network conditions. Furthermore, a dynamic transmission control protocol-AQM control system using the ATRED controller is introduced for systematic analysis. It is proved that the stability of the network system can be guaranteed when the adaptive mechanism is finely designed. Simulation studies show that the proposed ATRED algorithm performs well in varying network environments, outperforming the RED and Gentle-RED algorithms and providing more reliable service.
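For reference, the static RED rule that ATRED adapts can be sketched in a few lines; the threshold values here are arbitrary, and ATRED's contribution is precisely to tune such thresholds online rather than fix them.

```python
# Classic RED: the drop probability rises linearly with the EWMA of the
# queue length between a minimum and a maximum threshold. ATRED replaces
# the static thresholds with adaptively tuned ones; values are arbitrary.
def red_drop_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    if avg_q < min_th:
        return 0.0                        # no congestion signal
    if avg_q >= max_th:
        return 1.0                        # force drop
    return max_p * (avg_q - min_th) / (max_th - min_th)

def update_avg(avg_q, q, w=0.002):
    # exponentially weighted moving average of the instantaneous queue length
    return (1.0 - w) * avg_q + w * q
```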

  4. Knowledge discovery through games and game theory

    NASA Astrophysics Data System (ADS)

    Smith, James F., III; Rhyne, Robert D.

    2001-03-01

A fuzzy logic based expert system has been developed that automatically allocates electronic attack (EA) resources in real-time over many dissimilar platforms. The platforms can be very general, e.g., ships, planes, robots, land based facilities, etc. Potential foes the platforms deal with can also be general. The initial version of the algorithm was optimized using a genetic algorithm employing fitness functions constructed from expertise. A new approach is being explored that involves embedding the resource manager in an electronic game environment. The game allows a human expert to play against the resource manager in a simulated battlespace, with each of the defending platforms exclusively directed by the fuzzy resource manager and the attacking platforms controlled by the human expert or operating autonomously under their own logic. This approach automates the data mining problem: the game automatically creates a database reflecting the domain expert's knowledge and calls a data mining function, a genetic algorithm, to mine the database as required. The game allows easy evaluation of the information mined in the second step. The measure of effectiveness (MOE) for re-optimization is discussed. The mined information is extremely valuable, as shown through demanding scenarios.
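The GA re-optimization step can be sketched on a toy objective; the parameter vector below stands in for fuzzy membership parameters, and the squared-error fitness against a fixed target is a hypothetical substitute for the expert-play fitness the abstract describes.

```python
import random

# Toy GA sketch: evolve a small parameter vector against a fitness function
# using elitist selection, one-point crossover and Gaussian mutation.
# TARGET and the fitness are illustrative assumptions, not the paper's MOE.
random.seed(4)
TARGET = [0.2, 0.8, 0.5]                     # hypothetical "ideal" parameters

def fitness(ind):
    return -sum((a - b) ** 2 for a, b in zip(ind, TARGET))

pop = [[random.random() for _ in range(3)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                     # elitist selection
    children = []
    while len(children) < 20:
        p1, p2 = random.sample(survivors, 2)
        cut = random.randrange(1, 3)         # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.3:            # Gaussian mutation, clipped
            i = random.randrange(3)
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0.0, 0.1)))
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
```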

  5. Over 20 years of reaction access systems from MDL: a novel reaction substructure search algorithm.

    PubMed

    Chen, Lingran; Nourse, James G; Christie, Bradley D; Leland, Burton A; Grier, David L

    2002-01-01

    From REACCS, to MDL ISIS/Host Reaction Gateway, and most recently to MDL Relational Chemistry Server, a new product based on Oracle data cartridge technology, MDL's reaction database management and retrieval systems have undergone great changes. The evolution of the system architecture is briefly discussed. The evolution of MDL reaction substructure search (RSS) algorithms is detailed. This article mainly describes a novel RSS algorithm. This algorithm is based on a depth-first search approach and is able to fully and prospectively use reaction specific information, such as reacting center and atom-atom mapping (AAM) information. The new algorithm has been used in the recently released MDL Relational Chemistry Server and allows the user to precisely find reaction instances in databases while minimizing unrelated hits. Finally, the existing and new RSS algorithms are compared with several examples.

  6. Programming Deep Brain Stimulation for Tremor and Dystonia: The Toronto Western Hospital Algorithms.

    PubMed

    Picillo, Marina; Lozano, Andres M; Kou, Nancy; Munhoz, Renato Puppi; Fasano, Alfonso

    2016-01-01

Deep brain stimulation (DBS) is an effective treatment for essential tremor (ET) and dystonia. After surgery, a number of extensive programming sessions are performed, relying mainly on the neurologist's personal experience, as no programming guidelines have been provided so far, with the exception of recommendations from groups of experts. Finally, less information is available for the management of DBS in ET and dystonia compared with Parkinson's disease. Our aim is to review the literature on initial and follow-up DBS programming procedures for ET and dystonia and integrate the results with our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We conducted a literature search of PubMed from inception to July 2014 with the keywords "balance", "bradykinesia", "deep brain stimulation", "dysarthria", "dystonia", "gait disturbances", "initial programming", "loss of benefit", "micrographia", "speech", "speech difficulties" and "tremor". Seventy-six papers were considered for this review. Based on the literature review and our experience at TWH, we refined three algorithms for the management of ET, covering: (1) initial programming, (2) management of balance and speech issues and (3) loss of stimulation benefit. We also depicted algorithms for the management of dystonia, covering: (1) initial programming and (2) management of stimulation-induced hypokinesia (shuffling gait, micrographia and speech impairment). We propose five algorithms tailored to an individualized approach to managing ET and dystonia patients with DBS. We encourage established as well as new DBS centers to apply these algorithms alongside current standards of care in order to test their clinical usefulness. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. A DVE Time Management Simulation and Verification Platform Based on Causality Consistency Middleware

    NASA Astrophysics Data System (ADS)

    Zhou, Hangjun; Zhang, Wei; Peng, Yuxing; Li, Sikun

When designing a time management algorithm for DVEs, researchers are often made inefficient by the distraction of implementing the trivial but fundamental details of simulation and verification. A platform that already realizes these details is therefore desirable; however, to our knowledge this has not been achieved in any published work. In this paper, we are the first to design and realize a DVE time management simulation and verification platform providing exactly the same interfaces as those defined by the HLA Interface Specification. Moreover, our platform is based on a newly designed causality consistency middleware and offers the comparison of three kinds of time management services: CO, RO and TSO. The experimental results show that the implementation of the platform costs only a small overhead, and that its efficient performance allows researchers to focus solely on improving the design of algorithms.

  8. Colon Trauma: Evidence-Based Practices.

    PubMed

    Yamamoto, Ryo; Logue, Alicia J; Muir, Mark T

    2018-01-01

    Colon injury is not uncommon and occurs in about a half of patients with penetrating hollow viscus injuries. Despite major advances in the operative management of penetrating colon wounds, there remains discussion regarding the appropriate treatment of destructive colon injuries, with a significant amount of scientific evidence supporting segmental resection with primary anastomosis in most patients without comorbidities or large transfusion requirement. Although literature is sparse concerning the management of blunt colon injuries, some studies have shown operative decision based on an algorithm originally defined for penetrating wounds should be considered in blunt colon injuries. The optimal management of colonic injuries in patients requiring damage control surgery (DCS) also remains controversial. Studies have recently reported that there is no increased risk compared with patients treated without DCS if fascial closure is completed on the first reoperation, or that a management algorithm for penetrating colon wounds is probably efficacious for colon injuries in the setting of DCS as well.

  9. Design of Energy Storage Management System Based on FPGA in Micro-Grid

    NASA Astrophysics Data System (ADS)

    Liang, Yafeng; Wang, Yanping; Han, Dexiao

    2018-01-01

The energy storage system is the core of stable operation in a smart micro-grid. To address the problems of existing energy storage management systems in the micro-grid, such as low fault tolerance and a tendency to cause grid fluctuations, a new intelligent battery management system based on a field programmable gate array (FPGA) is proposed, taking advantage of the FPGA to combine the battery management system with the intelligent micro-grid control strategy. Finally, because inaccurate initialization of weights and thresholds leads to large errors when a neural network estimates the battery state of charge, a genetic algorithm is proposed to optimize the neural network, and experimental simulations are carried out. The experimental results show that the algorithm has high precision and provides a guarantee for the stable operation of the micro-grid.

  10. A Comparative Study of Interval Management Control Law Capabilities

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Smith, Colin L.; Palmer, Susan O.; Abbott, Terence S.

    2012-01-01

This paper presents a new tool designed to allow for rapid development and testing of different control algorithms for airborne spacing. This tool, the Interval Management Modeling and Spacing Tool (IM MAST), is a fast-time, low-fidelity tool created to model the approach of aircraft to a runway, with a focus on their interactions with each other. Errors can be induced between pairs of aircraft by varying initial positions, winds, speed profiles, and altitude profiles. Results to date show that only a few of the algorithms tested had poor behavior in the arrival and approach environment. The majority of the algorithms showed only minimal variation in performance under the test conditions. Trajectory-based algorithms showed high susceptibility to wind forecast errors, while performing marginally better than the other algorithms under other conditions. Trajectory-based algorithms have a sizable advantage, however, of being able to perform relative spacing operations between aircraft on different arrival routes and flight profiles without employing ghosting methods. This comes at the cost of substantially increased complexity. Additionally, it was shown that earlier initiation of relative spacing operations provided more time for corrections to be made without any significant problems in the spacing operation itself. Initiating spacing farther out, however, would require more of the aircraft to begin spacing before they merge onto a common route.

  11. PTM Along Track Algorithm to Maintain Spacing During Same Direction Pair-Wise Trajectory Management Operations

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.

    2015-01-01

Pair-wise Trajectory Management (PTM) is a cockpit-based delegated-responsibility separation standard. When an air traffic service provider gives a PTM clearance to an aircraft and the flight crew accepts the clearance, the flight crew will maintain spacing and separation from a designated aircraft. A PTM along-track algorithm will receive state information from the designated aircraft and from the own ship to produce speed guidance for the flight crew to maintain spacing and separation.
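The abstract does not specify the control law, so the following is only a hypothetical illustration of along-track speed guidance: command a speed near the designated aircraft's, corrected in proportion to the spacing error. All units, gains and limits are invented.

```python
# Hypothetical along-track spacing law (not the actual PTM algorithm):
# command the lead aircraft's speed plus a correction proportional to the
# spacing error, clipped to the own ship's speed envelope.
def speed_guidance(own_pos, lead_pos, lead_speed, desired_spacing,
                   gain=0.5, v_min=200.0, v_max=280.0):
    error = (lead_pos - own_pos) - desired_spacing   # positive: too far back
    cmd = lead_speed + gain * error                  # close the gap
    return min(v_max, max(v_min, cmd))

# toy simulation: the lead flies at 240 kt and the own ship starts 5 nm
# farther back than the desired 15 nm spacing
dt = 0.01                                            # hours per step
own, lead, goal = 0.0, 20.0, 15.0
for _ in range(2000):
    v = speed_guidance(own, lead, 240.0, goal)
    own += v * dt
    lead += 240.0 * dt
error = (lead - own) - goal
```

Commanding relative to the lead's speed (rather than the own ship's previous command) makes the spacing error decay monotonically instead of oscillating.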

  12. Hyperammonemic Encephalopathy Associated With Fibrolamellar Hepatocellular Carcinoma: Case Report, Literature Review, and Proposed Treatment Algorithm.

    PubMed

    Chapuy, Claudia I; Sahai, Inderneel; Sharma, Rohit; Zhu, Andrew X; Kozyreva, Olga N

    2016-04-01

    We report a case of a 31-year-old man with metastatic fibrolamellar hepatocellular carcinoma (FLHCC) treated with gemcitabine and oxaliplatin complicated by hyperammonemic encephalopathy biochemically consistent with acquired ornithine transcarbamylase deficiency. Awareness of FLHCC-associated hyperammonemic encephalopathy and a pathophysiology-based management approach can optimize patient outcome and prevent serious complications. A discussion of the management, literature review, and proposed treatment algorithm of this rare metabolic complication are presented. Pathophysiology-guided management of cancer-associated hyperammonemic encephalopathy can improve patient outcome and prevent life-threatening complications. Community and academic oncologists should be aware of this serious metabolic complication of cancer and be familiar with its management. ©AlphaMed Press.

  13. Breast cancer screening in the era of density notification legislation: summary of 2014 Massachusetts experience and suggestion of an evidence-based management algorithm by multi-disciplinary expert panel.

    PubMed

    Freer, Phoebe E; Slanetz, Priscilla J; Haas, Jennifer S; Tung, Nadine M; Hughes, Kevin S; Armstrong, Katrina; Semine, A Alan; Troyan, Susan L; Birdwell, Robyn L

    2015-09-01

Stemming from breast density notification legislation in Massachusetts effective 2015, we sought to develop a collaborative evidence-based approach to density notification that could be used by practitioners across the state. Our goal was to develop an evidence-based consensus management algorithm to help patients and health care providers follow best practices to implement a coordinated, evidence-based, cost-effective, sustainable practice and to standardize care in recommendations for supplemental screening. We formed the Massachusetts Breast Risk Education and Assessment Task Force (MA-BREAST), a multi-institutional, multi-disciplinary panel of expert radiologists, surgeons, primary care physicians, and oncologists, to develop a collaborative approach to density notification legislation. Using evidence-based data from the Institute for Clinical and Economic Review, the Cochrane review, National Comprehensive Cancer Network guidelines, American Cancer Society recommendations, and American College of Radiology appropriateness criteria, the group collaboratively developed an evidence-based best-practices algorithm. The expert consensus algorithm uses breast density as one element in the risk stratification to determine the need for supplemental screening. Per the expert consensus, women with dense breasts who are otherwise at low risk (<15% lifetime risk) do not routinely require supplemental screening. Women at high risk (>20% lifetime risk) should consider supplemental screening MRI in addition to routine mammography, regardless of breast density. We report the development of the multi-disciplinary collaborative approach to density notification and propose a risk stratification algorithm to assess an individual woman's level of risk and determine the need for supplemental screening.
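The two consensus rules stated above can be captured as a small decision function. This is only a sketch of the stated thresholds; the fall-through branch is an assumption for the cases the abstract does not cover, and none of it is clinical guidance.

```python
# Sketch of the stated risk-stratification rules. The 15%/20% thresholds
# come from the abstract; the "individualized discussion" branch is an
# assumption, not a stated recommendation. Not clinical guidance.
def supplemental_screening_recommendation(lifetime_risk_pct, dense_breasts):
    if lifetime_risk_pct > 20:
        # high risk: consider MRI regardless of breast density
        return "consider supplemental MRI in addition to routine mammography"
    if dense_breasts and lifetime_risk_pct < 15:
        return "no routine supplemental screening"
    return "individualized discussion"      # assumption: not stated above
```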

  14. a Quadtree Organization Construction and Scheduling Method for Urban 3d Model Based on Weight

    NASA Astrophysics Data System (ADS)

    Yao, C.; Peng, G.; Song, Y.; Duan, M.

    2017-09-01

Increasing precision and data volume in urban 3D models place higher demands on the real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering effect and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduled-rendering method for urban 3D models. Models are assigned rendering weights according to certain rules, and quadtree construction and render scheduling are performed according to those weights. An algorithm is also proposed for extracting bounding boxes from model drawing primitives to generate LOD models automatically. A 3D urban planning and management application was developed using the proposed algorithm; in practice the algorithm proved efficient and feasible, with the render frame rate of both large and small scenes stable at around 25 frames per second.
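The underlying structure can be sketched as a point-region quadtree in which models carry rendering weights; here the paper's weight-based scheduling policy is reduced to a simple threshold filter at query time, and the class shape, capacity and coordinates are all illustrative.

```python
# Illustrative quadtree sketch for scene management: models are inserted as
# (x, y, weight) points; a node splits when it exceeds `capacity`, and a
# render pass culls by viewport and filters by rendering weight.
class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.items = []
        self.children = None

    def _contains(self, x, y):
        x0, y0, x1, y1 = self.bounds
        return x0 <= x < x1 and y0 <= y < y1

    def insert(self, x, y, weight):
        if not self._contains(x, y):
            return False
        if self.children is None:
            if len(self.items) < self.capacity:
                self.items.append((x, y, weight))
                return True
            self._split()
        return any(c.insert(x, y, weight) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my, self.capacity),
                         QuadTree(mx, y0, x1, my, self.capacity),
                         QuadTree(x0, my, mx, y1, self.capacity),
                         QuadTree(mx, my, x1, y1, self.capacity)]
        for item in self.items:          # push existing items down a level
            for c in self.children:
                if c.insert(*item):
                    break
        self.items = []

    def query(self, x0, y0, x1, y1, min_weight=0.0):
        # collect models inside the viewport whose weight passes the filter
        out = []
        bx0, by0, bx1, by1 = self.bounds
        if bx1 <= x0 or bx0 >= x1 or by1 <= y0 or by0 >= y1:
            return out                   # node entirely outside the viewport
        for (x, y, w) in self.items:
            if x0 <= x < x1 and y0 <= y < y1 and w >= min_weight:
                out.append((x, y, w))
        if self.children:
            for c in self.children:
                out += c.query(x0, y0, x1, y1, min_weight)
        return out

qt = QuadTree(0.0, 0.0, 100.0, 100.0)
for pt in [(10, 10, 1.0), (20, 20, 0.5), (80, 80, 0.9),
           (30, 30, 0.2), (15, 15, 0.8)]:
    qt.insert(*pt)
visible = qt.query(0.0, 0.0, 50.0, 50.0, min_weight=0.5)
```

A real scheduler would render high-weight models first and relax the weight threshold as the frame budget allows, rather than applying a single fixed cutoff.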

  15. Development of the L-1011 four-dimensional flight management system

    NASA Technical Reports Server (NTRS)

    Lee, H. P.; Leffler, M. F.

    1984-01-01

    The development of 4-D guidance and control algorithms for the L-1011 Flight Management System is described. Four-D Flight Management is a concept by which an aircraft's flight is optimized along the 3-D path within the constraints of today's ATC environment, while its arrival time is controlled to fit into the air traffic flow without incurring or causing delays. The methods developed herein were designed to be compatible with the time-based en route metering techniques that were recently developed by the Dallas/Fort Worth and Denver Air Route Traffic Control Centers. The ensuing development of the 4-D guidance algorithms, the necessary control laws and the operational procedures are discussed. Results of computer simulation evaluation of the guidance algorithms and control laws are presented, along with a description of the software development procedures utilized.

  16. Order management empowering entrepreneurial partnerships in the context of new technologies

    NASA Astrophysics Data System (ADS)

    Tămăşilă, M.; Proştean, G.; Diaconescu, A.

    2018-01-01

The spread of latest-generation technologies confronts manufacturers across industry sectors with increasingly complex order-management situations involving both loyal and occasional customers. More specifically, order variations in the logistics chain make it difficult to sustain entrepreneurial partnerships in the context of new technologies integrated into automotive and wind-industry processes, which hinders securing major investments. Within this framework, the research team investigates the bottlenecks in the supply chain and indicates rules and methods to resolve the desynchronizations and fluctuations caused by the constraints of cutting-edge technologies. The paper aims to solve order management problems based on both an algorithm and an implementation in SAP. A conceptual model is also created for the user whose basic task is the management of entrepreneurial orders. The solutions identified by the algorithm offer an order management plan that optimally adjusts inventories to deal with any kind of order, thus achieving a profitable entrepreneurial approach between the two partners.

  17. A framework for a diabetes mellitus disease management system in southern Israel.

    PubMed

    Fox, Matthew A; Harman-Boehm, Ilana; Weitzman, Shimon; Zelingher, Julian

    2002-01-01

    Chronic diseases are a significant burden on western healthcare systems and national economies. It has been suggested that automated disease management for chronic disease, like diabetes mellitus (DM), improves the quality of care and reduces inappropriate utilization of diagnostic and therapeutic measures. We have designed a comprehensive DM Disease Management system for the Negev region in southern Israel. This system takes advantage of currently used clinical and administrative information systems. Algorithms for DM disease management have been created based on existing and accepted Israeli guidelines. All data fields and tables in the source information systems have been analyzed, and interfaces for periodic data loads from these systems have been specified. Based on this data, four subsets of decision support algorithms have been developed. The system generates alerts in these domains to multiple end users. We plan to use the products of this information system analysis and disease management specification in the actual development process of such a system shortly.

  18. Automated Conflict Resolution, Arrival Management and Weather Avoidance for ATM

    NASA Technical Reports Server (NTRS)

    Erzberger, H.; Lauderdale, Todd A.; Chu, Yung-Cheng

    2010-01-01

    The paper describes a unified solution to three types of separation assurance problems that occur in en-route airspace: separation conflicts, arrival sequencing, and weather-cell avoidance. Algorithms for solving these problems play a key role in the design of future air traffic management systems such as NextGen. Because these problems can arise simultaneously in any combination, it is necessary to develop integrated algorithms for solving them. A unified and comprehensive solution to these problems provides the foundation for a future air traffic management system that requires a high level of automation in separation assurance. The paper describes the three algorithms developed for solving each problem and then shows how they are used sequentially to solve any combination of these problems. The first algorithm resolves loss-of-separation conflicts and is an evolution of an algorithm described in an earlier paper. The new version generates multiple resolutions for each conflict and then selects the one giving the least delay. Two new algorithms, one for sequencing and merging of arrival traffic, referred to as the Arrival Manager, and the other for weather-cell avoidance are the major focus of the paper. Because these three problems constitute a substantial fraction of the workload of en-route controllers, integrated algorithms to solve them is a basic requirement for automated separation assurance. The paper also reviews the Advanced Airspace Concept, a proposed design for a ground-based system that postulates redundant systems for separation assurance in order to achieve both high levels of safety and airspace capacity. It is proposed that automated separation assurance be introduced operationally in several steps, each step reducing controller workload further while increasing airspace capacity. A fast time simulation was used to determine performance statistics of the algorithm at up to 3 times current traffic levels.

  19. A Toolbox to Improve Algorithms for Insulin-Dosing Decision Support

    PubMed Central

    Donsa, K.; Plank, J.; Schaupp, L.; Mader, J. K.; Truskaller, T.; Tschapeller, B.; Höll, B.; Spat, S.; Pieber, T. R.

    2014-01-01

    Background Standardized insulin order sets for subcutaneous basal-bolus insulin therapy are recommended by clinical guidelines for the inpatient management of diabetes. The algorithm-based GlucoTab system electronically assists health care personnel by supporting clinical workflow and providing insulin-dose suggestions. Objective To develop a toolbox for improving clinical decision-support algorithms. Methods The toolbox has three main components. 1) Data preparation: Data from several heterogeneous sources is extracted, cleaned and stored in a uniform data format. 2) Simulation: The effects of algorithm modifications are estimated by simulating treatment workflows based on real data from clinical trials. 3) Analysis: Algorithm performance is measured, analyzed and simulated by using data from three clinical trials with a total of 166 patients. Results Use of the toolbox led to algorithm improvements as well as the detection of potential individualized subgroup-specific algorithms. Conclusion These results are a first step towards individualized algorithm modifications for specific patient subgroups. PMID:25024768

  20. Local flow management/profile descent algorithm. Fuel-efficient, time-controlled profiles for the NASA TSRV airplane

    NASA Technical Reports Server (NTRS)

    Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.

    1986-01-01

    The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.

  1. Development of a Management Algorithm for Post-operative Pain (MAPP) after total knee and total hip replacement: study rationale and design.

    PubMed

    Botti, Mari; Kent, Bridie; Bucknall, Tracey; Duke, Maxine; Johnstone, Megan-Jane; Considine, Julie; Redley, Bernice; Hunter, Susan; de Steiger, Richard; Holcombe, Marlene; Cohen, Emma

    2014-08-28

    Evidence from clinical practice and the extant literature suggests that post-operative pain assessment and treatment is often suboptimal. Poor pain management is likely to persist until pain management practices become consistent with guidelines developed from the best available scientific evidence. This work will address the priority in healthcare of improving the quality of pain management by standardising evidence-based care processes through the incorporation of an algorithm derived from best evidence into clinical practice. In this paper, we describe the methodology for the creation and implementation of such an algorithm, focusing in the first instance on patients who have undergone total hip or knee replacement. In partnership with clinicians, and based on best available evidence, the aim of the Management Algorithm for Post-operative Pain (MAPP) project is to develop, implement, and evaluate an algorithm designed to support pain management decision-making for patients after orthopaedic surgery. The algorithm will provide guidance for the prescription and administration of multimodal analgesics in the post-operative period, and the treatment of breakthrough pain. The MAPP project is a multisite study with one coordinating hospital and two supporting (rollout) hospitals. The design of this project is a pre- and post-implementation evaluation and will be conducted over three phases. The Promoting Action on Research Implementation in Health Services (PARiHS) framework will be used to guide implementation. Outcome measurements will be taken 10 weeks post-implementation of the MAPP. The primary outcomes are: proportion of patients prescribed multimodal analgesics in accordance with the MAPP; and proportion of patients with moderate to severe pain intensity at rest. These data will be compared to the pre-implementation analgesic prescribing practices and pain outcome measures.
A secondary outcome, the efficacy of the MAPP, will be measured by comparing pain intensity scores of patients where the MAPP guidelines were or were not followed. The outcomes of this study have relevance for nursing and medical professionals as well as informing health service evaluation. In establishing a framework for the sustainable implementation and evaluation of a standardised approach to post-operative pain management, the findings have implications for clinicians and patients within multiple surgical contexts.

  2. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
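
    A permutation-search genetic algorithm of the kind described above can be sketched in a few lines. The coupling set and cost function below are illustrative stand-ins (counting feedback couplings that force iteration), not DeMAID's actual cost model of time, cost, and iteration requirements.

    ```python
    import random

    # Hypothetical process couplings: (i, j) means process i sends output to
    # process j. A coupling pointing "backwards" in the chosen ordering is a
    # feedback, which forces an iterative subcycle.
    COUPLINGS = {(0, 2), (1, 0), (2, 3), (3, 1), (4, 0), (2, 4)}
    N = 5

    def feedback_count(order):
        """Cost: number of couplings that point backwards in the ordering."""
        pos = {p: i for i, p in enumerate(order)}
        return sum(1 for src, dst in COUPLINGS if pos[src] > pos[dst])

    def crossover(a, b):
        """Order crossover (OX): keep a slice of parent a, fill the rest from b."""
        i, j = sorted(random.sample(range(N), 2))
        child = [None] * N
        child[i:j] = a[i:j]
        fill = [g for g in b if g not in child]
        return [c if c is not None else fill.pop(0) for c in child]

    def mutate(order, rate=0.2):
        """Swap two positions with a small probability."""
        if random.random() < rate:
            i, j = random.sample(range(N), 2)
            order[i], order[j] = order[j], order[i]
        return order

    def evolve(pop_size=40, generations=60):
        """Elitist GA over process orderings, minimizing feedback couplings."""
        pop = [random.sample(range(N), N) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=feedback_count)
            elite = pop[: pop_size // 4]
            pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                           for _ in range(pop_size - len(elite))]
        return min(pop, key=feedback_count)

    best = evolve()
    print(best, feedback_count(best))
    ```

    Because this example graph contains cycles, no ordering can reach zero feedbacks; the GA settles on an ordering with the minimum number instead.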

  3. GOES-R GS Product Generation Infrastructure Operations

    NASA Astrophysics Data System (ADS)

    Blanton, M.; Gundy, J.

    2012-12-01

    The GOES-R Ground System (GS) will produce a much larger set of products with higher data density than previous GOES systems. This requires considerably greater compute and memory resources to achieve the necessary latency and availability for these products. Over time, new algorithms could be added and existing ones removed or updated, but the GOES-R GS cannot go down during this time. To meet these GOES-R GS processing needs, the Harris Corporation will implement a Product Generation (PG) infrastructure that is scalable, extensible, modular, and reliable. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Distributed Data Fabric (DDF) it includes. The SBA is the middleware that encapsulates and manages the science algorithms that generate products. The SBA is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a highly scalable distributed architecture, with services connected to each other over a compute grid. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging layer is necessary. The SBA uses the DDF to provide this data communication layer between algorithms. The DDF provides an abstract interface over a distributed and persistent multi-layered storage system (memory-based caching above disk-based storage) and an event system that allows algorithm services to know when data is available and to get the data they need to begin processing when they need it.
Together, the SBA and the DDF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.

  4. Load Balancing Integrated Least Slack Time-Based Appliance Scheduling for Smart Home Energy Management.

    PubMed

    Silva, Bhagya Nathali; Khan, Murad; Han, Kijun

    2018-02-25

    The emergence of smart devices and smart appliances has highly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and the adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching-off system with a load balancing and appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is kept below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results reveal a significant improvement in the cost of energy under the proposed LST-based energy management scheme, along with reduced domestic energy consumption facilitated by the automated switching-off mechanism.
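
    A minimal sketch of the core idea, least-slack-time selection under a cumulative power cap, is shown below; the appliance parameters, slot granularity, and cap are invented for illustration and are not taken from the paper.

    ```python
    # Hypothetical appliances: (name, power_kW, run_slots_needed, deadline_slot).
    APPLIANCES = [
        ("washer",     0.5, 2, 6),
        ("dishwasher", 1.2, 1, 4),
        ("dryer",      2.0, 2, 8),
        ("ev_charger", 3.0, 3, 10),
    ]
    POWER_CAP_KW = 4.0   # household maximum threshold
    SLOTS = 12           # e.g. one-hour scheduling slots

    def schedule_lst(appliances, cap, slots):
        """Each slot, admit ready appliances in least-slack-time order until
        the cumulative load would exceed the cap."""
        remaining = {name: need for name, _, need, _ in appliances}
        power = {name: p for name, p, _, _ in appliances}
        deadline = {name: d for name, _, _, d in appliances}
        plan = []  # set of appliances running in each slot
        for t in range(slots):
            # Slack = slots left before the deadline minus work still required.
            ready = [n for n in remaining if remaining[n] > 0 and t < deadline[n]]
            ready.sort(key=lambda n: (deadline[n] - t) - remaining[n])
            running, load = set(), 0.0
            for n in ready:  # least slack first, respect the power cap
                if load + power[n] <= cap:
                    running.add(n)
                    load += power[n]
            for n in running:
                remaining[n] -= 1
            plan.append(running)
        return plan, remaining

    plan, left = schedule_lst(APPLIANCES, POWER_CAP_KW, SLOTS)
    print(all(v == 0 for v in left.values()))  # True if every job finished in time
    ```

    The load-balancing constraint appears as the cap check inside the admission loop; tighter caps defer low-urgency (high-slack) appliances to later slots.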

  5. How anaesthesiologists understand difficult airway guidelines-an interview study.

    PubMed

    Knudsen, Kati; Pöder, Ulrika; Nilsson, Ulrica; Högman, Marieann; Larsson, Anders; Larsson, Jan

    2017-11-01

    In the practice of anaesthesia, clinical guidelines that aim to improve the safety of airway procedures have been developed. The aim of this study was to explore how anaesthesiologists understand or conceive of difficult airway management algorithms. A qualitative phenomenographic design was chosen to explore anaesthesiologists' views on airway algorithms. Anaesthesiologists working in three hospitals were included. Individual face-to-face interviews were conducted. Four different ways of understanding were identified, describing airway algorithms as: (A) a law-like rule for how to act in difficult airway situations; (B) a cognitive aid, an action plan for difficult airway situations; (C) a basis for developing flexible, personal action plans for the difficult airway; and (D) the experts' consensus, a set of scientifically based guidelines for handling the difficult airway. The interviewed anaesthesiologists understood difficult airway management guidelines/algorithms very differently.

  6. Simultaneous optimization of the cavity heat load and trip rates in linacs using a genetic algorithm

    DOE PAGES

    Terzić, Balša; Hofler, Alicia S.; Reeves, Cody J.; ...

    2014-10-15

    In this paper, a genetic algorithm-based optimization is used to simultaneously minimize two competing objectives guiding the operation of the Jefferson Lab's Continuous Electron Beam Accelerator Facility linacs: cavity heat load and radio frequency cavity trip rates. The results represent a significant improvement to the standard linac energy management tool and thereby could lead to a more efficient Continuous Electron Beam Accelerator Facility configuration. This study also serves as a proof of principle of how a genetic algorithm can be used for optimizing other linac-based machines.
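
    The two competing objectives can be explored with a simple weighted-sum genetic algorithm; the per-cavity "heat" and "trip" models below are toy stand-ins (quadratic and exponential growth in gradient), not the CEBAF models, and all coefficients are invented. Sweeping the weight traces a rough trade-off curve between the objectives.

    ```python
    import math
    import random

    N_CAV, TOTAL = 4, 20.0          # gradients must sum to a fixed total "energy"
    Q = [1.0, 1.3, 0.8, 1.1]        # hypothetical per-cavity heat coefficients
    T = [0.30, 0.25, 0.35, 0.28]    # hypothetical per-cavity trip coefficients

    def heat(g):  return sum(q * x * x for q, x in zip(Q, g))
    def trips(g): return sum(math.exp(t * x) for t, x in zip(T, g))

    def normalize(g):
        """Clamp to positive values and rescale so gradients sum to TOTAL."""
        g = [max(x, 1e-6) for x in g]
        s = sum(g)
        return [x * TOTAL / s for x in g]

    def evolve(w, pop_size=60, gens=120):
        """Minimize w*heat + (1-w)*trips with an elitist real-coded GA."""
        def fit(g): return w * heat(g) + (1 - w) * trips(g)
        pop = [normalize([random.uniform(1, 10) for _ in range(N_CAV)])
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fit)
            elite = pop[: pop_size // 4]
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = random.sample(elite, 2)  # blend crossover + Gaussian mutation
                child = [(x + y) / 2 + random.gauss(0, 0.3) for x, y in zip(a, b)]
                children.append(normalize(child))
            pop = elite + children
        return min(pop, key=fit)

    # Sweep the weight to trace the heat-vs-trip trade-off curve.
    front = [(w, heat(g), trips(g))
             for w in (0.2, 0.5, 0.8)
             for g in [evolve(w)]]
    for w, h, tr in front:
        print(f"w={w:.1f}  heat={h:.1f}  trips={tr:.1f}")
    ```

    A production optimizer would more likely use Pareto ranking (e.g. NSGA-II) than a scalarized weight sweep, but the sweep already shows how weighting shifts gradient budget between low-Q and trip-prone cavities.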

  7. Development of the IMB Model and an Evidence-Based Diabetes Self-management Mobile Application.

    PubMed

    Jeon, Eunjoo; Park, Hyeoun-Ae

    2018-04-01

    This study developed a diabetes self-management mobile application based on the information-motivation-behavioral skills (IMB) model, evidence extracted from clinical practice guidelines, and requirements identified through focus group interviews (FGIs) with diabetes patients. We developed a diabetes self-management (DSM) app in accordance with the four stages of the system development life cycle. The functional and knowledge requirements of the users were extracted through FGIs with 19 diabetes patients. A system diagram, data models, a database, an algorithm, screens, and menus were designed. An Android app and a server with an SSL protocol were developed. The DSM app algorithm and heuristics, as well as the usability of the DSM app, were evaluated, and the DSM app was then modified based on the heuristic and usability evaluations. A total of 11 requirement themes were identified through the FGIs. Sixteen functions and 49 knowledge rules were extracted. The system diagram consisted of a client part and a server part; 78 data models, a database with 10 tables, an algorithm, a menu structure with 6 main menus, and 40 user screens were developed. The DSM app required Android version 4.4 or higher for Bluetooth connectivity. The proficiency and efficiency scores of the algorithm were 90.96% and 92.39%, respectively. Fifteen issues were revealed through the heuristic evaluation, and the app was modified to address three of these issues. It was also modified to address five comments received by the researchers through the usability evaluation. The DSM app was developed based on behavioral change theory through the IMB model. It was designed to be evidence-based, user-centered, and effective. It remains necessary to fully evaluate the effect of the DSM app on the DSM behavior changes of diabetes patients.

  8. Development of the IMB Model and an Evidence-Based Diabetes Self-management Mobile Application

    PubMed Central

    Jeon, Eunjoo

    2018-01-01

    Objectives This study developed a diabetes self-management mobile application based on the information-motivation-behavioral skills (IMB) model, evidence extracted from clinical practice guidelines, and requirements identified through focus group interviews (FGIs) with diabetes patients. Methods We developed a diabetes self-management (DSM) app in accordance with the four stages of the system development life cycle. The functional and knowledge requirements of the users were extracted through FGIs with 19 diabetes patients. A system diagram, data models, a database, an algorithm, screens, and menus were designed. An Android app and a server with an SSL protocol were developed. The DSM app algorithm and heuristics, as well as the usability of the DSM app, were evaluated, and the DSM app was then modified based on the heuristic and usability evaluations. Results A total of 11 requirement themes were identified through the FGIs. Sixteen functions and 49 knowledge rules were extracted. The system diagram consisted of a client part and a server part; 78 data models, a database with 10 tables, an algorithm, a menu structure with 6 main menus, and 40 user screens were developed. The DSM app required Android version 4.4 or higher for Bluetooth connectivity. The proficiency and efficiency scores of the algorithm were 90.96% and 92.39%, respectively. Fifteen issues were revealed through the heuristic evaluation, and the app was modified to address three of these issues. It was also modified to address five comments received by the researchers through the usability evaluation. Conclusions The DSM app was developed based on behavioral change theory through the IMB model. It was designed to be evidence-based, user-centered, and effective. It remains necessary to fully evaluate the effect of the DSM app on the DSM behavior changes of diabetes patients. PMID:29770246

  9. Management and prevention of refeeding syndrome in medical inpatients: An evidence-based and consensus-supported algorithm.

    PubMed

    Friedli, Natalie; Stanga, Zeno; Culkin, Alison; Crook, Martin; Laviano, Alessandro; Sobotka, Lubos; Kressig, Reto W; Kondrup, Jens; Mueller, Beat; Schuetz, Philipp

    2018-03-01

    Refeeding syndrome (RFS) can be a life-threatening metabolic condition after nutritional replenishment if not recognized early and treated adequately. There is a lack of evidence-based treatment and monitoring algorithms for daily clinical practice. The aim of the study was to propose an expert consensus guideline for RFS for the medical inpatient (not including anorexic patients) regarding risk factors, diagnostic criteria, and preventive and therapeutic measures based on a previous systematic literature search. Based on a recent qualitative systematic review on the topic, we developed clinically relevant recommendations as well as a treatment and monitoring algorithm for the clinical management of inpatients regarding RFS. These recommendations were discussed with international experts, and agreement with each recommendation was rated. Upon hospital admission, we recommend the use of specific screening criteria (i.e., low body mass index, large unintentional weight loss, little or no nutritional intake, history of alcohol or drug abuse) for risk assessment regarding the occurrence of RFS. According to the patient's individual risk for RFS, a careful start of nutritional therapy with a stepwise increase in energy and fluid goals and supplementation of electrolytes and vitamins, as well as close clinical monitoring, is recommended. We also propose criteria for the diagnosis of imminent and manifest RFS, with practical treatment recommendations including adaptation of the nutritional therapy. Based on the available evidence, we developed a practical algorithm for risk assessment, treatment, and monitoring of RFS in medical inpatients. In daily routine clinical care, this may help to optimize and standardize the management of this vulnerable patient population. We encourage future quality studies to further refine these recommendations. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Data mining for multiagent rules, strategies, and fuzzy decision tree structure

    NASA Astrophysics Data System (ADS)

    Smith, James F., III; Rhyne, Robert D., II; Fisher, Kristin

    2002-03-01

    A fuzzy logic based resource manager (RM) has been developed that automatically allocates electronic attack resources in real-time over many dissimilar platforms. Two different data mining algorithms have been developed to determine rules, strategies, and fuzzy decision tree structure. The first data mining algorithm uses a genetic algorithm as a data mining function and is called from an electronic game. The game allows a human expert to play against the resource manager in a simulated battlespace, with each of the defending platforms being exclusively directed by the fuzzy resource manager and the attacking platforms being controlled by the human expert or operating autonomously under their own logic. This approach automates the data mining problem. The game automatically creates a database reflecting the domain expert's knowledge. It calls a data mining function, a genetic algorithm, for data mining of the database as required and allows easy evaluation of the information mined in the second step. The criterion for re-optimization is discussed, as are experimental results. Then a second data mining algorithm that uses a genetic program as a data mining function is introduced to automatically discover fuzzy decision tree structures. Finally, a fuzzy decision tree generated through this process is discussed.

  11. Soil water balance calculation using a two source energy balance model and wireless sensor arrays aboard a center pivot

    USDA-ARS?s Scientific Manuscript database

    Recent developments in wireless sensor technology and remote sensing algorithms, coupled with increased use of center pivot irrigation systems, have removed several long-standing barriers to adoption of remote sensing for real-time irrigation management. One remote sensing-based algorithm is a two s...

  12. An optimization design for evacuation planning based on fuzzy credibility theory and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, D.; Zhang, W. Y.

    2017-08-01

    Evacuation planning is an important activity in disaster management. It has to be done in advance due to the unpredictable occurrence of disasters, and it is necessary that evacuation plans be as close as possible to the real evacuation work. However, evacuation planning is extremely challenging because of the inherent uncertainty of the required information. The problem considered here is a vehicle routing problem based on public-transport evacuation. In this paper, the demand at each evacuation point is a fuzzy number, and each routing selection is based on a fuzzy credibility preference index. This paper proposes an approximately optimal solution for this problem via a genetic algorithm based on fuzzy credibility theory. Finally, the algorithm is applied to an optimization model, and the experimental results show that the algorithm is effective.
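
    The credibility preference test described above can be made concrete. For a triangular fuzzy demand, credibility is the standard average of possibility and necessity, Cr = (Pos + Nec) / 2; the demand, vehicle capacity, and threshold values below are hypothetical.

    ```python
    def credibility_leq(x, a, b, c):
        """Cr{xi <= x} for a triangular fuzzy number xi = (a, b, c),
        using Cr = (Possibility + Necessity) / 2 from credibility theory."""
        if x <= a:
            pos = 0.0
        elif x < b:
            pos = (x - a) / (b - a)
        else:
            pos = 1.0
        if x <= b:
            nec = 0.0
        elif x < c:
            nec = (x - b) / (c - b)
        else:
            nec = 1.0
        return 0.5 * (pos + nec)

    # A route is admitted only if the credibility that its fuzzy demand fits
    # the vehicle capacity meets a preference threshold alpha.
    demand = (20, 30, 45)   # hypothetical triangular fuzzy demand
    capacity = 35.0
    alpha = 0.6
    print(credibility_leq(capacity, *demand) >= alpha)  # prints True (Cr = 2/3)
    ```

    A GA for the routing problem would evaluate this feasibility test inside its fitness function, penalizing routes whose credibility falls below the decision-maker's preference index.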

  13. Knowledge-based vision for space station object motion detection, recognition, and tracking

    NASA Technical Reports Server (NTRS)

    Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III

    1987-01-01

    Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.

  14. Fuzzy-logic based Q-Learning interference management algorithms in two-tier networks

    NASA Astrophysics Data System (ADS)

    Xu, Qiang; Xu, Zezhong; Li, Li; Zheng, Yan

    2017-10-01

    Unloading traffic from the macrocell network and enhancing coverage can be realized by deploying femtocells in indoor scenarios. However, the system performance of the two-tier network can be impaired by co-tier and cross-tier interference. In this paper, a distributed resource allocation scheme is studied for the case where each femtocell base station is self-governed and resources cannot be assigned centrally through the gateway. A novel Q-Learning interference management scheme is proposed, which is divided into cooperative and independent parts. In the cooperative algorithm, interference information is exchanged between the cell-edge users, which are classified by fuzzy logic within the same cell. Meanwhile, we allocate orthogonal subchannels to the high-rate cell-edge users to disperse the interference power when the data rate requirement is satisfied. In the independent algorithm, resources are assigned directly according to the minimum power principle. Simulation results demonstrate significant performance improvements in terms of average data rate, interference power, and energy efficiency over state-of-the-art resource allocation algorithms.
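
    The Q-Learning core of such a scheme can be sketched with a tabular update; the toy environment, state and action sizes, and rewards below are invented, and the fuzzy-logic classification of cell-edge users is omitted.

    ```python
    import random

    # Minimal tabular Q-learning sketch: a femtocell (the agent) picks a
    # subchannel (the action) and receives a reward that penalizes
    # interference. In this toy environment each state has one "clean"
    # subchannel (state % N_ACTIONS) that yields the high reward.
    random.seed(7)  # for reproducibility of this sketch
    N_STATES, N_ACTIONS = 4, 3
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

    def step(state, action):
        """Toy environment: reward is highest on the clean subchannel."""
        reward = 1.0 if action == state % N_ACTIONS else -0.1
        return reward, random.randrange(N_STATES)

    def choose(state):
        if random.random() < EPSILON:          # explore
            return random.randrange(N_ACTIONS)
        return max(range(N_ACTIONS), key=lambda a: Q[state][a])  # exploit

    state = 0
    for _ in range(5000):
        action = choose(state)
        reward, nxt = step(state, action)
        best_next = max(Q[nxt])
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
        state = nxt

    # After training, each state's greedy action should be its clean channel.
    greedy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
    print(greedy)
    ```

    In the paper's cooperative variant, neighboring femtocells would additionally exchange interference reports to shape the reward; here the reward is purely local.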

  15. Comparison of Body Weight Trend Algorithms for Prediction of Heart Failure Related Events in Home Care Setting.

    PubMed

    Eggerth, Alphons; Modre-Osprian, Robert; Hayn, Dieter; Kastner, Peter; Pölzl, Gerhard; Schreier, Günter

    2017-01-01

    Automatic event detection is used in telemedicine-based heart failure disease management programs, supporting physicians and nurses in monitoring patients' health data. We analysed the performance of automatic event detection algorithms for prediction of HF-related hospitalisations or diuretic dose increases. Rule-of-Thumb (RoT) and Moving Average Convergence Divergence (MACD) algorithms were applied to body weight data from 106 heart failure patients of the HerzMobil-Tirol disease management program. The evaluation criteria were based on the Youden index and ROC curves. Analysis of data from 1460 monitoring weeks with 54 events showed a maximum Youden index of 0.19 for MACD and RoT with a specificity > 0.90. Comparison of the two algorithms on real-world monitoring data showed similar results regarding total and limited AUC. An improvement in sensitivity might be possible by including additional health data (e.g. vital signs and self-reported well-being), because body weight variations are obviously not the only cause of HF-related hospitalisations or diuretic dose increases.
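
    A minimal version of the MACD trend detector applied to daily body weight might look as follows; the EMA spans and alert threshold are illustrative choices, not the settings used in HerzMobil-Tirol.

    ```python
    def ema(values, span):
        """Exponential moving average with smoothing factor 2 / (span + 1)."""
        alpha = 2.0 / (span + 1)
        out, s = [], values[0]
        for v in values:
            s = alpha * v + (1 - alpha) * s
            out.append(s)
        return out

    def macd_alerts(weights, fast=3, slow=10, threshold=0.5):
        """Flag days where the fast-minus-slow EMA of body weight exceeds a
        threshold (kg), a proxy for rapid fluid retention. Spans and threshold
        here are illustrative assumptions."""
        f, s = ema(weights, fast), ema(weights, slow)
        macd = [a - b for a, b in zip(f, s)]
        return [i for i, m in enumerate(macd) if m > threshold]

    # Simulated daily weights: stable, then a rapid 2 kg gain over 4 days.
    weights = [80.0] * 10 + [80.5, 81.0, 81.5, 82.0, 82.0, 82.0]
    print(macd_alerts(weights))  # alerts fire once the gain accelerates
    ```

    A Rule-of-Thumb detector, by contrast, would simply compare the raw weight change over a fixed window (e.g. >2 kg in 3 days) against a cutoff.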

  16. Design factors and considerations for a time-based flight management system

    NASA Technical Reports Server (NTRS)

    Vicroy, D. D.; Williams, D. H.; Sorensen, J. A.

    1986-01-01

    Recent NASA Langley Research Center research to develop a technology data base from which an advanced Flight Management System (FMS) design might evolve is reviewed. In particular, the generation of fixed range cruise/descent reference trajectories which meet predefined end conditions of altitude, speed, and time is addressed. Results on the design and theoretical basis of the trajectory generation algorithm are presented, followed by a brief discussion of a series of studies that are being conducted to determine the accuracy requirements of the aircraft and weather models resident in the trajectory generation algorithm. Finally, studies to investigate the interface requirements between the pilot and an advanced FMS are considered.

  17. Feasibility of using algorithm-based clinical decision support for symptom assessment and management in lung cancer.

    PubMed

    Cooley, Mary E; Blonquist, Traci M; Catalano, Paul J; Lobach, David F; Halpenny, Barbara; McCorkle, Ruth; Johns, Ellis B; Braun, Ilana M; Rabin, Michael S; Mataoui, Fatma Zohra; Finn, Kathleen; Berry, Donna L; Abrahm, Janet L

    2015-01-01

    Distressing symptoms interfere with the quality of life in patients with lung cancer. Algorithm-based clinical decision support (CDS) to improve evidence-based management of isolated symptoms seems promising, but no reports yet address multiple symptoms. This study examined the feasibility of CDS for a Symptom Assessment and Management Intervention targeting common symptoms in patients with lung cancer (SAMI-L) in ambulatory oncology. The study objectives were to evaluate completion and delivery rates of the SAMI-L report and clinician adherence to the algorithm-based recommendations. Patients completed a web-based symptom assessment and SAMI-L created tailored recommendations for symptom management. Completion of assessments and delivery of reports were recorded. Medical record review assessed clinician adherence to recommendations. Feasibility was defined as 75% or higher report completion and delivery rates and 80% or higher clinician adherence to recommendations. Descriptive statistics and generalized estimating equations were used for data analyses. Symptom assessment completion was 84% (95% CI=81-87%). Delivery of completed reports was 90% (95% CI=86-93%). Depression (36%), pain (30%), and fatigue (18%) occurred most frequently, followed by anxiety (11%) and dyspnea (6%). On average, overall recommendation adherence was 57% (95% CI=52-62%) and was not dependent on the number of recommendations (P=0.45). Adherence was higher for anxiety (66%; 95% CI=55-77%), depression (64%; 95% CI=56-71%), pain (62%; 95% CI=52-72%), and dyspnea (51%; 95% CI=38-64%) than for fatigue (38%; 95% CI=28-47%). CDS systems, such as SAMI-L, have the potential to fill a gap in promoting evidence-based care. Copyright © 2015 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  18. Breast Cancer Screening in the Era of Density Notification Legislation: Summary of 2014 Massachusetts Experience and Suggestion of An Evidence-Based Management Algorithm by Multi-disciplinary Expert Panel

    PubMed Central

    Freer, Phoebe E.; Slanetz, Priscilla J.; Haas, Jennifer S.; Tung, Nadine M.; Hughes, Kevin S.; Armstrong, Katrina; Semine, A. Alan; Troyan, Susan L.; Birdwell, Robyn L.

    2015-01-01

    Purpose Stemming from breast density notification legislation in Massachusetts effective 2015, we sought to develop a collaborative evidence-based approach to density notification that could be used by practitioners across the state. Our goal was to develop an evidence-based consensus management algorithm to help patients and health care providers follow best practices to implement a coordinated, evidence-based, cost-effective, sustainable practice and to standardize care in recommendations for supplemental screening. Methods We formed the Massachusetts Breast Risk Education and Assessment Task Force (MA-BREAST), a multi-institutional, multi-disciplinary panel of expert radiologists, surgeons, primary care physicians, and oncologists, to develop a collaborative approach to density notification legislation. Using evidence-based data from the Institute for Clinical and Economic Review (ICER), the Cochrane review, National Comprehensive Cancer Network (NCCN) guidelines, American Cancer Society (ACS) recommendations, and American College of Radiology (ACR) appropriateness criteria, the group collaboratively developed an evidence-based best-practices algorithm. Results The expert consensus algorithm uses breast density as one element in the risk stratification to determine the need for supplemental screening. Women with dense breasts who are otherwise at low risk (<15% lifetime risk) do not routinely require supplemental screening per the expert consensus. Women at high risk (>20% lifetime risk) should consider supplemental screening MRI in addition to routine mammography, regardless of breast density. Conclusion We report the development of a multi-disciplinary collaborative approach to density notification. We propose a risk stratification algorithm to assess personal level of risk to determine the need for supplemental screening for an individual woman. PMID:26290416
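
    The top-level logic of the consensus recommendation, as summarized in this abstract, reduces to a small decision function. The <15% and >20% lifetime-risk thresholds come from the abstract; the handling of the intermediate 15-20% band is simplified here to "individualize", which is an assumption rather than a quoted rule.

    ```python
    def supplemental_screening(lifetime_risk_pct, dense_breasts):
        """Sketch of the consensus algorithm's top-level logic: breast density
        is one element of risk stratification, not a trigger by itself."""
        if lifetime_risk_pct > 20:
            # High risk: consider supplemental MRI regardless of density.
            return "routine mammography + consider supplemental MRI"
        if lifetime_risk_pct < 15:
            # Low risk: no routine supplemental screening, even with dense breasts.
            return "routine mammography only"
        # Intermediate band (assumption: individualized decision).
        return "intermediate risk: individualize decision"

    print(supplemental_screening(25, dense_breasts=False))
    print(supplemental_screening(10, dense_breasts=True))
    ```

    Note that `dense_breasts` does not change the low-risk branch: per the consensus, density alone does not warrant supplemental screening.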

  19. The comparative and cost-effectiveness of HPV-based cervical cancer screening algorithms in El Salvador.

    PubMed

    Campos, Nicole G; Maza, Mauricio; Alfaro, Karla; Gage, Julia C; Castle, Philip E; Felix, Juan C; Cremer, Miriam L; Kim, Jane J

    2015-08-15

    Cervical cancer is the leading cause of cancer death among women in El Salvador. Utilizing data from the Cervical Cancer Prevention in El Salvador (CAPE) demonstration project, we assessed the health and economic impact of HPV-based screening and two different algorithms for the management of women who test HPV-positive, relative to existing Pap-based screening. We calibrated a mathematical model of cervical cancer to epidemiologic data from El Salvador and compared three screening algorithms for women aged 30-65 years: (i) HPV screening every 5 years followed by referral to colposcopy for HPV-positive women (Colposcopy Management [CM]); (ii) HPV screening every 5 years followed by treatment with cryotherapy for eligible HPV-positive women (Screen and Treat [ST]); and (iii) Pap screening every 2 years followed by referral to colposcopy for Pap-positive women (Pap). Potential harms and complications associated with overtreatment were not assessed. Under base case assumptions of 65% screening coverage, HPV-based screening was more effective than Pap, reducing cancer risk by ∼ 60% (Pap: 50%). ST was the least costly strategy, and cost $2,040 per year of life saved. ST remained the most attractive strategy as visit compliance, costs, coverage, and test performance were varied. We conclude that a screen-and-treat algorithm within an HPV-based screening program is very cost-effective in El Salvador, with a cost-effectiveness ratio below per capita GDP. © 2015 UICC.

  20. Air traffic surveillance and control using hybrid estimation and protocol-based conflict resolution

    NASA Astrophysics Data System (ADS)

    Hwang, Inseok

    The continued growth of air travel and recent advances in new technologies for navigation, surveillance, and communication have led to proposals by the Federal Aviation Administration (FAA) to provide reliable and efficient tools to aid Air Traffic Control (ATC) in performing its tasks. In this dissertation, we address four problems frequently encountered in air traffic surveillance and control: multiple-target tracking and identity management, conflict detection, conflict resolution, and safety verification. We develop a set of algorithms and tools to aid ATC; these algorithms have the provable properties of safety, computational efficiency, and convergence. Firstly, we develop a multiple-maneuvering-target tracking and identity management algorithm which can keep track of maneuvering aircraft, and of their identities, in noisy environments. Secondly, we propose a hybrid probabilistic conflict detection algorithm between multiple aircraft which uses flight mode estimates as well as aircraft current state estimates. Our algorithm is based on hybrid models of aircraft, which incorporate both continuous dynamics and discrete mode switching. Thirdly, we develop an algorithm for multiple (greater than two) aircraft conflict avoidance that is based on a closed-form analytic solution and thus provides guarantees of safety. Finally, we consider the problem of safety verification of control laws for safety-critical systems, with application to air traffic control systems. We approach safety verification through reachability analysis, which is a computationally expensive problem. We develop an over-approximate method for reachable set computation using polytopic approximation methods and dynamic optimization. These algorithms may be used either in a fully autonomous way, or as supporting tools to increase controllers' situational awareness and to reduce their workload.
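The flavor of probabilistic conflict detection can be illustrated with a Monte Carlo sketch over uncertain straight-line trajectories. This is far simpler than the dissertation's hybrid (mode-switching) models: the dynamics, noise model, separation threshold, and units below are all assumptions for illustration.

```python
import random

def conflict_probability(p1, v1, p2, v2, sigma, horizon, dt=1.0,
                         min_sep=5.0, n_samples=2000, seed=0):
    """Monte Carlo estimate of the probability that two aircraft come
    within `min_sep` of each other within `horizon` time units.

    Straight-line motion with Gaussian velocity uncertainty stands in for
    the hybrid aircraft models of the dissertation. Positions and
    velocities are 2-D tuples in consistent (illustrative) units.
    """
    rng = random.Random(seed)
    conflicts = 0
    for _ in range(n_samples):
        # Sample one velocity realization per aircraft.
        w1 = (v1[0] + rng.gauss(0, sigma), v1[1] + rng.gauss(0, sigma))
        w2 = (v2[0] + rng.gauss(0, sigma), v2[1] + rng.gauss(0, sigma))
        t = 0.0
        while t <= horizon:
            dx = (p1[0] + w1[0] * t) - (p2[0] + w2[0] * t)
            dy = (p1[1] + w1[1] * t) - (p2[1] + w2[1] * t)
            if dx * dx + dy * dy < min_sep ** 2:
                conflicts += 1
                break
            t += dt
    return conflicts / n_samples

# Head-on encounter: the estimated conflict probability should be near 1.
p_head_on = conflict_probability((0, 0), (1, 0), (20, 0), (-1, 0), 0.05, 30)
```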

  1. Sensor management in RADAR/IRST track fusion

    NASA Astrophysics Data System (ADS)

    Hu, Shi-qiang; Jing, Zhong-liang

    2004-07-01

    In this paper, a novel radar management strategy suitable for RADAR/IRST track fusion, based on the Fisher Information Matrix (FIM) and a fuzzy stochastic decision approach, is put forward. Firstly, an optimal schedule of radar measurements is obtained by maximizing the determinant of the Fisher information matrix of the radar and IRST measurements, managed by an expert system. Secondly, a "pseudo sensor" is proposed that predicts the possible target position with a polynomial method based on the radar and IRST measurements; the "pseudo sensor" model estimates the target position even while the radar is turned off. Finally, based on the tracking performance and the state of the target maneuver, fuzzy stochastic decision is used to adjust the optimal radar schedule and to retune the model parameters of the "pseudo sensor". The experimental results indicate that the algorithm not only limits radar activity effectively but also maintains the tracking accuracy of the active/passive system. This algorithm eliminates the drawback of traditional radar management methods, in which radar activity is fixed and not easy to control.
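The scheduling step, maximizing the determinant of the Fisher information (a D-optimality criterion), can be sketched in a few lines. The 2x2 matrices and their values are illustrative assumptions; the paper's expert system and fuzzy decision layers are not modeled here.

```python
def det2(m):
    """Determinant of a 2x2 matrix given as ((a, b), (c, d))."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def add2(a, b):
    """Elementwise sum of two 2x2 matrices (Fisher information is additive
    for independent measurements)."""
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))

def best_schedule(candidate_fims, base_fim):
    """Pick the candidate radar measurement whose Fisher information,
    added to the always-on IRST information, maximizes the determinant."""
    return max(range(len(candidate_fims)),
               key=lambda i: det2(add2(base_fim, candidate_fims[i])))

irst = ((2.0, 0.0), (0.0, 0.5))          # passive sensor: poor range info
radar_opts = [((0.1, 0.0), (0.0, 0.1)),  # weak radar measurement
              ((4.0, 0.0), (0.0, 4.0))]  # strong radar measurement
chosen = best_schedule(radar_opts, irst)  # -> 1 (the stronger measurement)
```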

  2. Comparison of Controller and Flight Deck Algorithm Performance During Interval Management with Dynamic Arrival Trees (STARS)

    NASA Technical Reports Server (NTRS)

    Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.

    2012-01-01

    Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation in which pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace and, on some trials, managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms or controller-managed spacing not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they appeared compatible in achieving the interval management goal of 130 sec by the TRACON boundary.

  3. Evaluation of Anomaly Detection Capability for Ground-Based Pre-Launch Shuttle Operations. Chapter 8

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2010-01-01

    This chapter will provide a thorough end-to-end description of the process for evaluation of three different data-driven algorithms for anomaly detection, to select the best candidate for deployment as part of a suite of IVHM (Integrated Vehicle Health Management) technologies. These algorithms were deemed to be sufficiently mature to be considered viable candidates for deployment in support of the maiden launch of Ares I-X, the successor to the Space Shuttle for NASA's Constellation program. Data-driven algorithms are just one of three different types being deployed. The other two types of algorithms being deployed include a "rule-based" expert system and a "model-based" system. Within these two categories, the deployable candidates have already been selected based upon qualitative factors such as flight heritage. For the rule-based system, SHINE (Spacecraft High-speed Inference Engine) has been selected for deployment; it is a component of BEAM (Beacon-based Exception Analysis for Multimissions), a patented technology developed at NASA's JPL (Jet Propulsion Laboratory), and serves to aid in the management and identification of operational modes. For the "model-based" system, a commercially available package developed by QSI (Qualtech Systems, Inc.), TEAMS (Testability Engineering and Maintenance System), has been selected for deployment to aid in diagnosis. In the context of this particular deployment, distinctions among the use of the terms "data-driven," "rule-based," and "model-based" can be found in. Although there are three different categories of algorithms that have been selected for deployment, our main focus in this chapter will be on the evaluation of the three candidates for data-driven anomaly detection. These algorithms will be evaluated upon their capability for robustly detecting incipient faults or failures in the ground-based phase of pre-launch Space Shuttle operations, rather than based on heritage as in previous studies. 
Robust detection will allow for the achievement of pre-specified minimum false alarm and/or missed detection rates in the selection of alert thresholds. All algorithms will also be optimized with respect to an aggregation of these same criteria. Our study relies upon the use of Shuttle data to act as a proxy for, and in preparation for application to, Ares I-X data, which uses a very similar hardware platform for the subsystems that are being targeted (TVC - Thrust Vector Control subsystem for the SRB (Solid Rocket Booster)).
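Selecting an alert threshold to meet a pre-specified false-alarm rate can be done from the empirical distribution of anomaly scores on nominal data. The percentile rule below is a deliberately simple stand-in for the optimization described in the chapter; all names and values are illustrative.

```python
def alert_threshold(nominal_scores, max_false_alarm_rate):
    """Choose a detection threshold so that at most `max_false_alarm_rate`
    of nominal (fault-free) anomaly scores would raise an alert.

    A simple empirical-percentile rule; the IVHM candidates evaluated in
    the chapter use more elaborate aggregate optimization, so treat this
    strictly as an illustration.
    """
    if not 0 < max_false_alarm_rate < 1:
        raise ValueError("rate must be in (0, 1)")
    ranked = sorted(nominal_scores)
    # Keep (approximately) the lowest (1 - rate) fraction below threshold.
    k = min(int(len(ranked) * (1 - max_false_alarm_rate)), len(ranked) - 1)
    return ranked[k]

scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
t = alert_threshold(scores, 0.2)  # only 2 of 10 nominal scores reach 0.9
```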

  4. Remembering the Important Things: Semantic Importance in Stream Reasoning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Rui; Greaves, Mark T.; Smith, William P.

    Reasoning and querying over data streams rely on the ability to deliver a sequence of stream snapshots to the processing algorithms. These snapshots are typically provided using windows as views into streams and associated window management strategies. Generally, the goal of any window management strategy is to preserve the most important data in the current window and preferentially evict the rest, so that the retained data can continue to be exploited. A simple timestamp-based strategy is first-in-first-out (FIFO), in which items are replaced in strict order of arrival. All timestamp-based strategies implicitly assume that a temporal ordering reliably reflects importance to the processing task at hand, and thus that window management using timestamps will maximize the ability of the processing algorithms to deliver accurate interpretations of the stream. In this work, we explore a general notion of semantic importance that can be used for window management for streams of RDF data using semantically-aware processing algorithms like deduction or semantic query. Semantic importance exploits the information carried in RDF and surrounding ontologies for ranking window data in terms of its likely contribution to the processing algorithms. We explore the general semantic categories of query contribution, provenance, and trustworthiness, as well as the contribution of domain-specific ontologies. We describe how these categories behave using several concrete examples. Finally, we consider how a stream window management strategy based on semantic importance could improve overall processing performance, especially as available window sizes decrease.
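The contrast between FIFO eviction and importance-based eviction can be sketched with a fixed-capacity window. The scoring function here is just a caller-supplied number; in the paper's setting it would combine query contribution, provenance, and trustworthiness over RDF data, which this sketch does not model.

```python
class ImportanceWindow:
    """Fixed-capacity stream window that evicts the least important item
    rather than the oldest (FIFO) one.

    `importance` is any scoring function over items; the items and scores
    used below are illustrative stand-ins for ranked RDF triples.
    """
    def __init__(self, capacity, importance):
        self.capacity = capacity
        self.importance = importance
        self.items = []

    def push(self, item):
        self.items.append(item)
        if len(self.items) > self.capacity:
            # Evict the item the scoring function ranks lowest.
            victim = min(self.items, key=self.importance)
            self.items.remove(victim)

w = ImportanceWindow(3, importance=lambda pair: pair[1])
for item in [("a", 5), ("b", 1), ("c", 9), ("d", 7)]:
    w.push(item)
# ("b", 1) is evicted even though it is not the oldest item;
# a FIFO window of the same size would have evicted ("a", 5).
```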

  5. How anaesthesiologists understand difficult airway guidelines—an interview study

    PubMed Central

    Knudsen, Kati; Nilsson, Ulrica; Larsson, Anders; Larsson, Jan

    2017-01-01

    Background In the practice of anaesthesia, clinical guidelines that aim to improve the safety of airway procedures have been developed. The aim of this study was to explore how anaesthesiologists understand or conceive of difficult airway management algorithms. Methods A qualitative phenomenographic design was chosen to explore anaesthesiologists’ views on airway algorithms. Anaesthesiologists working in three hospitals were included. Individual face-to-face interviews were conducted. Results Four different ways of understanding were identified, describing airway algorithms as: (A) a law-like rule for how to act in difficult airway situations; (B) a cognitive aid, an action plan for difficult airway situations; (C) a basis for developing flexible, personal action plans for the difficult airway; and (D) the experts’ consensus, a set of scientifically based guidelines for handling the difficult airway. Conclusions The interviewed anaesthesiologists understood difficult airway management guidelines/algorithms very differently. PMID:29299973

  6. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    PubMed

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from the US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm can be utilized to map SST in both deep offshore and, particularly, shallow nearshore waters at a high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore waters to nearshore waters. Applications of the ANN algorithm require only the remotely sensed values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variations in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters, where important coastal resources are located and existing algorithms either are not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful to coastal resource management.
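The input/output structure of the retrieval (two thermal-band measurements in, one SST out) can be shown with a classic split-window linear form, used here only as a stand-in for the paper's ANN. The coefficients are hypothetical; in practice they, or the ANN weights, would be fitted to matched in situ SST observations.

```python
def sst_split_window(bt31, bt32, a0=1.0, a1=1.02, a2=0.8):
    """Linear split-window stand-in for the paper's ANN retrieval.

    bt31, bt32: band 31 and band 32 measurements (e.g. brightness
    temperatures in Kelvin; the interpretation and the coefficients
    a0..a2 are assumptions for illustration).
    The band difference term corrects for atmospheric water vapor.
    """
    return a0 + a1 * bt31 + a2 * (bt31 - bt32)

sst = sst_split_window(290.0, 289.0)  # illustrative inputs
```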

  7. An Ensemble Approach in Converging Contents of LMS and KMS

    ERIC Educational Resources Information Center

    Sabitha, A. Sai; Mehrotra, Deepti; Bansal, Abhay

    2017-01-01

    Currently the challenges in e-Learning are converging the learning content from various sources and managing it within e-learning practices. Data-mining learning algorithms can be used, and the contents can be converged based on the metadata of the objects. Ensemble methods use multiple learning algorithms and can be used to converge the…

  8. Frequency Management for Electromagnetic Continuous Wave Conductivity Meters

    PubMed Central

    Mazurek, Przemyslaw; Putynkowski, Grzegorz

    2016-01-01

    Ground conductivity meters use electromagnetic fields for the mapping of geological variations, like the determination of the amount of water in particular ground layers, which is important for the state analysis of embankments. The VLF band is contaminated by numerous natural and artificial electromagnetic interference signals. Prior determination of the meter’s working frequency is not possible, due to the variable frequency of the interferences. Frequency management based on the analysis of the selected band using track-before-detect (TBD) algorithms, which allows dynamic frequency changes of the conductivity meter’s transmitting part, is proposed in the paper. Naive maximum value search, spatio-temporal TBD (ST-TBD), Viterbi TBD and a new algorithm that uses combined ST-TBD and Viterbi TBD are compared. Monte Carlo tests are provided for the numerical analysis of the properties for a single interference signal in the considered band, and the new approach based on combined ST-TBD and Viterbi algorithms shows the best performance. The considered algorithms process spectrogram data for the selected band, so the DFT (Discrete Fourier Transform) could be applied for the computation of the spectrogram. Real-time properties, related to latency, are also discussed, and it is shown that TBD algorithms are feasible for real applications. PMID:27070608
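The naive baseline among the compared methods amounts to scanning the spectrogram of the monitored band and picking the least-contaminated frequency bin. A minimal sketch, with made-up magnitudes; the TBD variants in the paper additionally track interference trajectories over time before deciding.

```python
def quietest_bin(spectrogram):
    """Pick the frequency bin with the least accumulated interference energy.

    `spectrogram` is a list of spectra (one per time frame), each a list of
    per-bin magnitudes, e.g. from a DFT of the monitored VLF band. This is
    the naive (non-tracking) search baseline only.
    """
    n_bins = len(spectrogram[0])
    energy = [sum(frame[b] for frame in spectrogram) for b in range(n_bins)]
    return min(range(n_bins), key=energy.__getitem__)

frames = [[9.0, 1.0, 4.0],
          [8.0, 2.0, 5.0],
          [9.5, 1.5, 4.5]]
bin_index = quietest_bin(frames)  # bin 1 has the least accumulated energy
```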

  10. Clinical algorithms to aid osteoarthritis guideline dissemination.

    PubMed

    Meneses, S R F; Goode, A P; Nelson, A E; Lin, J; Jordan, J M; Allen, K D; Bennell, K L; Lohmander, L S; Fernandes, L; Hochberg, M C; Underwood, M; Conaghan, P G; Liu, S; McAlindon, T E; Golightly, Y M; Hunter, D J

    2016-09-01

    Numerous scientific organisations have developed evidence-based recommendations aiming to optimise the management of osteoarthritis (OA). Uptake, however, has been suboptimal. The purpose of this exercise was to harmonize the recent recommendations and develop a user-friendly treatment algorithm to facilitate translation of evidence into practice. We updated a previous systematic review on clinical practice guidelines (CPGs) for OA management. The guidelines were assessed for quality using the Appraisal of Guidelines for Research and Evaluation instrument and against the standards for developing trustworthy CPGs established by the National Academy of Medicine (NAM). Four case scenarios and algorithms were developed by consensus of a multidisciplinary panel. Sixteen guidelines were included in the systematic review. Most recommendations were directed toward physicians and allied health professionals, and most had multi-disciplinary input. Analysis for trustworthiness suggests that many guidelines still lack transparency. A treatment algorithm was developed for each case scenario, advised by recommendations from the guidelines and based on panel consensus. Strategies to facilitate the implementation of guidelines in clinical practice are necessary. The algorithms proposed are examples of how to apply recommendations in the clinical context, helping the clinician to visualise the patient flow and the timing of different treatment modalities. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  11. An adaptive transmission protocol for managing dynamic shared states in collaborative surgical simulation.

    PubMed

    Qin, J; Choi, K S; Ho, Simon S M; Heng, P A

    2008-01-01

    A force prediction algorithm is proposed to facilitate virtual-reality (VR) based collaborative surgical simulation by reducing the effect of network latencies. State regeneration is used to correct the estimated prediction. This algorithm is incorporated into an adaptive transmission protocol in which auxiliary features such as view synchronization and coupling control are provided to ensure system consistency. We implemented this protocol using multi-threading on a cluster-based network architecture.
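The prediction-plus-regeneration idea can be sketched with simple linear extrapolation (dead-reckoning style); the paper's actual predictor and protocol are not specified in this abstract, so everything below is an illustrative assumption.

```python
def predict_force(history):
    """Linearly extrapolate the next haptic force sample from the two most
    recent samples received over the network (assumes uniform sampling).

    A dead-reckoning-style stand-in for the paper's force predictor; it
    lets the local simulation keep rendering forces while remote updates
    are delayed by network latency.
    """
    if len(history) < 2:
        return history[-1] if history else 0.0
    return 2 * history[-1] - history[-2]

def regenerate(history, authoritative_sample):
    """State regeneration: when the true remote sample finally arrives,
    overwrite the predicted tail so later predictions start from
    corrected data rather than accumulating extrapolation error."""
    history[-1] = authoritative_sample

forces = [1.0, 1.2]
forces.append(predict_force(forces))  # predicted next sample (about 1.4)
regenerate(forces, 1.35)              # authoritative value arrives late
```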

  12. A Distributed Dynamic Programming-Based Solution for Load Management in Smart Grids

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Xu, Yinliang; Li, Sisi; Zhou, MengChu; Liu, Wenxin; Xu, Ying

    2018-03-01

    Load management is being recognized as an important option for active user participation in the energy market. Traditional load management methods usually require a centralized, powerful control center and a two-way communication network between the system operators and the energy end-users; the increasing user participation in smart grids may limit their applications. In this paper, a distributed solution for load management in emerging smart grids is proposed. The load management problem is formulated as a constrained optimization problem aiming at maximizing the overall utility of users while meeting the load reduction requested by the system operator, and is solved using a distributed dynamic programming algorithm. The algorithm is implemented via a distributed framework and thus delivers the highly desired distributed solution: it avoids the need for a centralized coordinator or control center, and can achieve satisfactory outcomes for load management. Simulation results with various test systems demonstrate its effectiveness.
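The underlying optimization (meet a requested load reduction while giving up as little user utility as possible) has a natural dynamic-programming sketch when shed amounts are integers. This centralized 0/1-knapsack-style DP is only an illustration of the problem structure; the paper distributes an analogous computation across agents, which this sketch does not attempt.

```python
def plan_curtailment(loads, required_kw):
    """Pick which controllable loads to shed so that total shed power meets
    the operator's request while the total utility lost is minimal.

    loads: list of (shed_kw, lost_utility) pairs, with integer shed_kw so
    shed levels can index the DP table (values are illustrative).
    Returns the minimal lost utility, or None if the request is infeasible.
    """
    INF = float("inf")
    total = sum(kw for kw, _ in loads)
    if total < required_kw:
        return None
    # best[r] = minimal lost utility achieving exactly r kW of shed load.
    best = [INF] * (total + 1)
    best[0] = 0
    for kw, lost in loads:
        for r in range(total, kw - 1, -1):  # reverse: each load used once
            if best[r - kw] + lost < best[r]:
                best[r] = best[r - kw] + lost
    # Any shed level >= required_kw satisfies the request; take the cheapest.
    return min(best[required_kw:])

cost = plan_curtailment([(2, 5), (3, 4), (4, 9)], required_kw=5)  # -> 9
```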

  13. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems

    PubMed Central

    Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using a smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. The system is formed by a photovoltaic array, wind turbines, storage batteries, and a diesel generator as a backup source of energy. Demand profile shaping, as one of the smart grid applications, is introduced in this paper using load shifting based on load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed on some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
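A minimal particle swarm optimizer shows the search mechanism the paper applies to component sizing. Here the "cost" is a toy one-dimensional function; the swarm parameters (inertia 0.7, cognitive and social weights 1.5) are conventional defaults, and nothing below reproduces the paper's actual sizing model.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm optimizer for a 1-D cost function.

    In the paper, PSO searches over component sizes of a PV/wind/battery/
    diesel system; here `f` is any scalar cost over the interval `bounds`.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                      # each particle's best position so far
    pbest_f = [f(x) for x in xs]
    g = pbest[min(range(n_particles), key=pbest_f.__getitem__)]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (g - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clamp to bounds
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], fx
                if fx < f(g):
                    g = xs[i]          # new global best
    return g

# Toy sizing cost with its minimum at capacity 3.0 (illustrative).
best = pso_minimize(lambda c: (c - 3.0) ** 2 + 1.0, (0.0, 10.0))
```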

  15. Fast intersection detection algorithm for PC-based robot off-line programming

    NASA Astrophysics Data System (ADS)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step, the complexity of the problem is reduced in linear time. In the second step, the remaining solids are tested for intersection. For this, the Simplex algorithm, known from linear optimization, is used: it computes a point common to two convex polyhedra, and the polyhedra intersect if such a point exists. Given the simplified geometrical model of Ropsus, this step also runs in linear time. In conjunction with the first step, the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resultant intersection polyhedron using the dual transformation.
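The intersection test (find a point common to two convex polyhedra, each given as a system of linear inequalities) is exactly a linear-programming feasibility problem. The sketch below delegates to SciPy's LP solver rather than hand-rolling a Simplex, so it illustrates the idea only, not the paper's implementation; availability of `scipy` is assumed.

```python
import numpy as np
from scipy.optimize import linprog

def polyhedra_intersect(A1, b1, A2, b2):
    """Test whether {x : A1 x <= b1} and {x : A2 x <= b2} share a point.

    Stacks both constraint sets and asks an LP solver (a Simplex-style
    feasibility check, as in the Ropsus second stage) for any feasible x
    under a zero objective. Returns a common point, or None if the convex
    polyhedra are disjoint.
    """
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    return res.x if res.success else None

# Two overlapping axis-aligned squares in the plane: [0,2]^2 and [1,3]^2.
A_box = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
p = polyhedra_intersect(A_box, np.array([2, 0, 2, 0], dtype=float),
                        A_box, np.array([3, -1, 3, -1], dtype=float))
# p lies in the common region [1,2] x [1,2].
```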

  16. Endonasal management of pediatric congenital transsphenoidal encephaloceles: nuances of a modified reconstruction technique. Technical note and report of 3 cases.

    PubMed

    Zeinalizadeh, Mehdi; Sadrehosseini, Seyed Mousa; Habibi, Zohreh; Nejat, Farideh; Silva, Harley Brito da; Singh, Harminder

    2017-03-01

    OBJECTIVE Congenital transsphenoidal encephaloceles are rare malformations, and their surgical treatment remains challenging. This paper reports 3 cases of transsphenoidal encephalocele in 8- to 24-month-old infants, who presented mainly with airway obstruction, respiratory distress, and failure to thrive. METHODS The authors discuss the surgical management of these lesions via a minimally invasive endoscopic endonasal approach, as compared with the traditional transcranial and transpalatal approaches. A unique endonasal management algorithm for these lesions is outlined. The lesions were repaired with no resection of the encephalocele sac, and the cranial base defects were reconstructed with titanium mesh plates and vascular nasoseptal flaps. RESULTS Reduction of the encephalocele and reconstruction of the skull base was successfully accomplished in all 3 cases, with favorable results. CONCLUSIONS The described endonasal management algorithm for congenital transsphenoidal encephaloceles is a safe, viable alternative to traditional transcranial and transpalatal approaches, and avoids much of the morbidity associated with these open techniques.

  17. Use of chronic disease management algorithms in Australian community pharmacies.

    PubMed

    Morrissey, Hana; Ball, Patrick; Jackson, David; Pilloto, Louis; Nielsen, Sharon

    2015-01-01

    In Australia, standardized chronic disease management algorithms are available for medical practitioners, nurse practitioners and nurses through a range of sources, including prescribing software, manuals, and government and not-for-profit non-government organizations. There is currently no standardized algorithm for pharmacist intervention in the management of chronic diseases. The aim was to investigate whether a collaborative community pharmacist and doctor model of care in chronic disease management could improve patient outcomes through ongoing monitoring of disease biochemical markers, robust self-management skills and better medication adherence. This project was a pilot pragmatic study, measuring the effect of the intervention by comparing baseline and end-of-study patient health outcomes, to support future definitive studies. Algorithms for selected chronic conditions were designed, based on the World Health Organisation STEPS™ process and the Central Australia Rural Practitioners' Association Standard Treatment Manual. They were evaluated in community pharmacies in 8 inland Australian small towns, most having only one pharmacy in order to avoid competition issues. The algorithms were reviewed by the Quality Use of Medicines committee of Murrumbidgee Medicare Local Ltd, New South Wales, Australia. They constitute a pharmacist-driven, doctor/pharmacist collaboration primary care model. The pharmacy owners volunteered to take part in the study and patients were purposefully recruited by in-store invitation. Six of the 9 sites' pharmacists (67%) were fully capable of delivering the algorithm (each of these sites had 3 pharmacists); one site (11%), with 2 pharmacists, found it too difficult and withdrew from the study; and 2 sites (22%, with one pharmacist at each site) stated that they were personally capable of delivering the algorithm but unable to do so due to workflow demands. 
This primary care model can form the basis of a workable collaboration between doctors and pharmacists, ensuring continuity of care for patients. It has potential for rural and remote areas of Australia where this continuity of care may be problematic. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission conditions and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort, to further ensure reliable detection of and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team to address fault management early in the development lifecycle for the SLS initiative. 
As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. 
VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes.

  19. Use of Management Pathways or Algorithms in Children With Chronic Cough: CHEST Guideline and Expert Panel Report.

    PubMed

    Chang, Anne B; Oppenheimer, John J; Weinberger, Miles M; Rubin, Bruce K; Weir, Kelly; Grant, Cameron C; Irwin, Richard S

    2017-04-01

    Using management algorithms or pathways potentially improves clinical outcomes. We undertook systematic reviews to examine various aspects of the generic approach (use of cough algorithms and tests) to the management of chronic cough in children (aged ≤ 14 years) based on key questions (KQs) using the Population, Intervention, Comparison, Outcome format. We used the CHEST Expert Cough Panel's protocol for the systematic reviews and the American College of Chest Physicians (CHEST) methodological guidelines and Grading of Recommendations Assessment, Development and Evaluation framework. Data from the systematic reviews, in conjunction with patients' values and preferences and the clinical context, were used to form recommendations. Delphi methodology was used to obtain the final grading. Combining data from systematic reviews addressing five KQs, we found high-quality evidence that a systematic approach to the management of chronic cough improves clinical outcomes. Although there was evidence from several pathways, the strongest evidence was for the use of the CHEST approach. However, there was little or no evidence to address some of the KQs posed. Compared with the 2006 Cough Guidelines, there is now high-quality evidence that in children aged ≤ 14 years with chronic cough (> 4 weeks' duration), the use of cough management protocols (or algorithms) improves clinical outcomes, and cough management or testing algorithms should differ depending on the associated characteristics of the cough and the clinical history. A chest radiograph and, when age appropriate, spirometry (pre- and post-β2-agonist) should be undertaken. Other tests should not be routinely performed; they should be undertaken in accordance with the clinical setting and the child's clinical symptoms and signs (eg, tests for tuberculosis when the child has been exposed). Copyright © 2017 American College of Chest Physicians. All rights reserved.

  20. Style-based classification of Chinese ink and wash paintings

    NASA Astrophysics Data System (ADS)

    Sheng, Jiachuan; Jiang, Jianmin

    2013-09-01

    Following the digitization of a large collection of ink and wash paintings (IWPs) and their availability on the Internet, their automated content description, analysis, and management are attracting attention across research communities. While existing research in relevant areas focuses primarily on image processing approaches, a style-based algorithm is proposed to classify IWPs automatically by their authors. As IWPs have no colors or even tones, the proposed algorithm applies edge detection to locate local regions and detect painting strokes, enabling histogram-based feature extraction that captures important cues reflecting the styles of different artists. These features then drive a number of neural networks in parallel to complete the classification, and an information-entropy-balanced fusion is proposed to make an integrated decision over the multiple neural network classification results, in which the entropy is used as a pointer to combine the global and local features. Experimental evaluations show that the proposed algorithm achieves good performance, providing excellent potential for computerized analysis and management of IWPs.
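
One way to read the fusion step is as confidence weighting: each parallel network emits a class-probability vector, and networks with lower-entropy (more confident) outputs receive more weight in the combined decision. The sketch below assumes an inverse-entropy weighting and hypothetical probability vectors; it is an illustration of the idea, not the paper's exact formulation.

```python
from math import log

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(x * log(x) for x in p if x > 0)

def entropy_balanced_fusion(outputs):
    """Fuse classifier outputs, weighting each inversely to its entropy."""
    weights = [1.0 / (1e-9 + entropy(p)) for p in outputs]
    total = sum(weights)
    return [sum(w * p[c] for w, p in zip(weights, outputs)) / total
            for c in range(len(outputs[0]))]

nets = [
    [0.7, 0.2, 0.1],    # confident network: low entropy, high weight
    [0.4, 0.35, 0.25],  # uncertain network: high entropy, low weight
]
fused = entropy_balanced_fusion(nets)
print(fused.index(max(fused)))
```

The fused vector remains a valid probability distribution, since it is a convex combination of the individual networks' distributions.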

  1. Evaluation of genotype-guided acenocoumarol dosing algorithms in Russian patients.

    PubMed

    Sychev, Dmitriy Alexeyevich; Rozhkov, Aleksandr Vladimirovich; Ananichuk, Anna Viktorovna; Kazakov, Ruslan Evgenyevich

    2017-05-24

    Acenocoumarol dose is normally determined via a step-by-step adjustment process based on International Normalized Ratio (INR) measurements. During this time, the risk of adverse reactions is especially high. Several genotype-based acenocoumarol dosing algorithms have been created to predict ideal doses at the start of anticoagulant therapy. Nine dosing algorithms were selected through a literature search and evaluated using a cohort of 63 patients with atrial fibrillation receiving acenocoumarol therapy. None of the existing algorithms could predict the ideal acenocoumarol dose in 50% of Russian patients. The Wolkanin-Bartnik algorithm, derived from a European population, performed best, with the highest correlation (r=0.397) and a mean absolute error (MAE) of 0.82 (±0.61). EU-PACT also managed to give an estimate within the ideal range in 43% of cases. The two least accurate results were yielded by the algorithms based on Indian populations. Among patients receiving amiodarone, the algorithms by Schie and Tong proved the most effective, with MAEs of 0.48±0.42 mg/day and 0.56±0.31 mg/day, respectively. Patient ethnicity and amiodarone intake are factors that must be considered when building future algorithms. Further research is required to find the ideal dosing formula for acenocoumarol maintenance doses in Russian patients.
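
The head-to-head comparison above rests on two statistics per algorithm, MAE and Pearson correlation between predicted and clinically established doses. They can be sketched as follows; the dose values are hypothetical, not the study's cohort data.

```python
from math import sqrt

def mae(pred, actual):
    """Mean absolute error between predicted and observed doses."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

def pearson_r(x, y):
    """Pearson correlation coefficient between two dose series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

predicted = [2.1, 1.8, 3.0, 2.5, 1.2]   # mg/day, hypothetical
observed  = [2.0, 2.2, 2.8, 2.1, 1.5]
print(round(mae(predicted, observed), 2), round(pearson_r(predicted, observed), 2))
```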

  2. An Overview of a Trajectory-Based Solution for En Route and Terminal Area Self-Spacing to Include Parallel Runway Operations

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    2011-01-01

    This paper presents an overview of an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This airborne self-spacing concept is trajectory-based, allowing for spacing operations prior to the aircraft being on a common path. This implementation provides the ability to manage spacing against two traffic aircraft, with one of these aircraft operating to a parallel dependent runway. Because this algorithm is trajectory-based, it also has the inherent ability to support required-time-of-arrival (RTA) operations.

  3. Load Balancing Integrated Least Slack Time-Based Appliance Scheduling for Smart Home Energy Management

    PubMed Central

    Silva, Bhagya Nathali; Khan, Murad; Han, Kijun

    2018-01-01

    The emergence of smart devices and smart appliances has highly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching off system with load balancing and appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is managed below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results have revealed a significant improvement gained through the proposed LST-based energy management scheme in terms of cost of energy, along with reduced domestic energy consumption facilitated by an automated switching off mechanism. PMID:29495346
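
The scheduling rule described above can be sketched as greedy admission by least slack under a household power cap, where slack is the time remaining before an appliance must already be running to meet its deadline. The appliance fields, units, and cap value below are illustrative assumptions, not the paper's parameters.

```python
# Minimal least-slack-time (LST) appliance scheduling sketch.
def lst_schedule(appliances, now, power_cap):
    """Admit unfinished appliances in order of least slack, within the cap.

    slack = deadline - now - remaining_runtime
    """
    by_slack = sorted(appliances, key=lambda a: a["deadline"] - now - a["remaining"])
    running, load = [], 0.0
    for a in by_slack:
        if a["remaining"] > 0 and load + a["power"] <= power_cap:
            running.append(a["name"])
            load += a["power"]
    return running

appliances = [
    {"name": "washer", "power": 0.5, "remaining": 2, "deadline": 10},  # kW, hours
    {"name": "dryer",  "power": 1.5, "remaining": 1, "deadline": 4},
    {"name": "ev",     "power": 3.0, "remaining": 4, "deadline": 12},
]
print(lst_schedule(appliances, now=0, power_cap=2.5))
```

Here the dryer (slack 3 h) is admitted first, the washer still fits under the 2.5 kW cap, and the EV charger is deferred to a later interval.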

  4. A Hardware-Supported Algorithm for Self-Managed and Choreographed Task Execution in Sensor Networks.

    PubMed

    Bordel, Borja; Miguel, Carlos; Alcarria, Ramón; Robles, Tomás

    2018-03-07

    Nowadays, sensor networks are composed of great numbers of tiny, resource-constrained nodes whose management is increasingly complex. Although collaborative or choreographed task-execution schemes fit the nature of sensor networks best, they are rarely implemented because of their high resource consumption (especially in networks with many resource-constrained devices). Instead, hierarchical networks are usually designed, with a heavy orchestrator of considerable processing power at the apex that is able to implement any necessary management solution. Although this orchestration approach solves most practical management problems of sensor networks, a great deal of operating time is wasted while nodes ask the orchestrator to resolve a conflict and wait for the instructions they need to proceed. This paper therefore proposes a new mechanism for self-managed and choreographed task execution in sensor networks. The proposed solution requires only a lightweight gateway instead of a traditional heavy orchestrator, together with a hardware-supported algorithm that consumes a negligible amount of resources on the sensor nodes. The gateway avoids congestion of the entire sensor network, and the hardware-supported algorithm enables a choreographed task-execution scheme, so no particular node is overloaded. The performance of the proposed solution is evaluated through numerical and ModelSim-based electronic simulations.

  5. A Hardware-Supported Algorithm for Self-Managed and Choreographed Task Execution in Sensor Networks

    PubMed Central

    2018-01-01

    Nowadays, sensor networks are composed of great numbers of tiny, resource-constrained nodes whose management is increasingly complex. Although collaborative or choreographed task-execution schemes fit the nature of sensor networks best, they are rarely implemented because of their high resource consumption (especially in networks with many resource-constrained devices). Instead, hierarchical networks are usually designed, with a heavy orchestrator of considerable processing power at the apex that is able to implement any necessary management solution. Although this orchestration approach solves most practical management problems of sensor networks, a great deal of operating time is wasted while nodes ask the orchestrator to resolve a conflict and wait for the instructions they need to proceed. This paper therefore proposes a new mechanism for self-managed and choreographed task execution in sensor networks. The proposed solution requires only a lightweight gateway instead of a traditional heavy orchestrator, together with a hardware-supported algorithm that consumes a negligible amount of resources on the sensor nodes. The gateway avoids congestion of the entire sensor network, and the hardware-supported algorithm enables a choreographed task-execution scheme, so no particular node is overloaded. The performance of the proposed solution is evaluated through numerical and ModelSim-based electronic simulations. PMID:29518986

  6. A multiobjective optimization model and an orthogonal design-based hybrid heuristic algorithm for regional urban mining management problems.

    PubMed

    Wu, Hao; Wan, Zhong

    2018-02-01

    In this paper, a multiobjective mixed-integer piecewise nonlinear programming (MOMIPNLP) model is built to formulate the management problem of an urban mining system, where the decision variables are associated with buy-back pricing, site selection, transportation planning, and adjustment of production capacity. Unlike existing approaches, our model minimizes the negative social effects generated by structural optimization of the recycling system, while jointly maximizing the total recycling profit and the utility of environmental improvement. To solve the problem, the MOMIPNLP model is first transformed into an ordinary mixed-integer nonlinear programming model by variable substitution, which removes the piecewise feature of the model. Then, based on the technique of orthogonal design, a hybrid heuristic algorithm is developed to find an approximate Pareto-optimal solution, in which a genetic algorithm optimizes the structure of the search neighborhood, and both a local branching algorithm and a relaxation-induced neighborhood search algorithm are employed to prune the search branches and reduce the number of variables in each branch. Numerical experiments indicate that this algorithm spends less CPU (central processing unit) time in solving large-scale regional urban mining management problems, especially in comparison with similar algorithms available in the literature. By case study and sensitivity analysis, a number of practical managerial implications are revealed from the model. Since the metal stocks in society are reliable above-ground mineral sources, urban mining has attracted great attention as an emerging strategic resource in an era of resource shortage. By mathematical modeling and the development of efficient algorithms, this paper provides decision makers with useful suggestions on the optimal design of recycling systems in urban mining.
For example, this paper can answer how to encourage enterprises to join recycling activities through government support and subsidies, whether the existing recycling system can meet developmental requirements, and what constitutes a reasonable adjustment of production capacity.

  7. The research of network database security technology based on web service

    NASA Astrophysics Data System (ADS)

    Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin

    2013-03-01

    Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and network database security levels, studies the security technology of the network database with particular emphasis on a sub-key encryption algorithm, and successfully applies this algorithm to a campus one-card system. The realization process of the encryption algorithm is discussed; the method can serve as a reference in many fields, particularly in management information system security and e-commerce.
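
As a rough illustration of the sub-key idea for field-level database encryption: an independent sub-key is derived per column from a master key, so exposing one column's key does not expose the others. The paper's actual derivation and cipher are not reproduced here; SHA-256 derivation and a toy XOR cipher stand in as assumptions.

```python
import hashlib

def subkey(master: bytes, column: str) -> bytes:
    """Derive an independent per-column sub-key from the master key."""
    return hashlib.sha256(master + b":" + column.encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher; stands in for whatever cipher the system uses."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master = b"campus-card-master-key"
k_balance = subkey(master, "balance")
ciphertext = xor_cipher(b"12.50", k_balance)   # encrypt one field value
plaintext = xor_cipher(ciphertext, k_balance)  # XOR is its own inverse
print(plaintext)
```

A repeating-key XOR is not secure in practice; the point of the sketch is only the key-separation structure, in which each column can be granted or revoked independently.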

  8. Research of information classification and strategy intelligence extract algorithm based on military strategy hall

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Li, Dehua; Yang, Jie

    2007-12-01

    Constructing a virtual international strategic environment requires many kinds of information: economic, political, military, diplomatic, cultural, scientific, and so on. It is therefore very important to build a highly efficient system for automatic information extraction, classification, recombination, and analysis management as the foundation and a component of the military strategy hall. This paper first uses an improved boosting algorithm to classify the obtained initial information, then applies a strategy intelligence extraction algorithm to extract strategic intelligence from the initial information to help strategists analyze it.

  9. Development of a South African integrated syndromic respiratory disease guideline for primary care.

    PubMed

    English, René G; Bateman, Eric D; Zwarenstein, Merrick F; Fairall, Lara R; Bheekie, Angeni; Bachmann, Max O; Majara, Bosielo; Ottmani, Salah-Eddine; Scherpbier, Robert W

    2008-09-01

    The Practical Approach to Lung Health in South Africa (PALSA) initiative aimed to develop an integrated symptom- and sign-based (syndromic) respiratory disease guideline for nurse care practitioners working in primary care in a developing country. A multidisciplinary team developed the guideline after reviewing local barriers to respiratory health care provision, relevant health care policies, existing respiratory guidelines, and literature. Guideline drafts were evaluated by means of focus group discussions. Existing evidence-based guideline development methodologies were tailored for development of the guideline. A locally-applicable guideline based on syndromic diagnostic algorithms was developed for the management of patients 15 years and older who presented to primary care facilities with cough or difficulty breathing. PALSA has developed a guideline that integrates and presents diagnostic and management recommendations for priority respiratory diseases in adults using a symptom- and sign-based algorithmic guideline for nurses in developing countries.

  10. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm's data-intensive, data-parallel character with the GPU's single-instruction, multiple-thread execution model, a new parallel midfrequency-based algorithm for blind image restoration is proposed that is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data, keeping the transmission rate from being bound by the memory-bandwidth limitation. The results show that the new algorithm significantly increases operational speed and effectively improves the real-time performance of image restoration, especially for high-resolution images.

  11. Research on Environmental Adjustment of Cloud Ranch Based on BP Neural Network PID Control

    NASA Astrophysics Data System (ADS)

    Ren, Jinzhi; Xiang, Wei; Zhao, Lin; Wu, Jianbo; Huang, Lianzhen; Tu, Qinggang; Zhao, Heming

    2018-01-01

    In order to let the intelligent ranch management mode gradually replace the traditional manual one, this paper proposes a pasture environment control system based on a cloud server and puts forward a PID control algorithm based on a BP neural network to better control temperature and humidity in the pasture environment. First, the temperature and humidity (the controlled objects) of the pasture are modeled to obtain the transfer function. Then the traditional PID control algorithm and the BP-neural-network-based PID algorithm are applied to the transfer function. The resulting step-tracking curves show that the PID controller based on the BP neural network is clearly superior in settling time, error, and other measures. This algorithm, which calculates reasonable control parameters for regulating temperature and humidity, can be put to good use in the cloud service platform.
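
A minimal sketch of the control law involved, assuming the standard incremental PID form and a toy first-order plant in place of the authors' transfer function. In the paper the gains (Kp, Ki, Kd) would be produced online by the BP neural network; fixed gains stand in for it here, and all numbers are illustrative.

```python
def pid_step(state, setpoint, measurement, gains):
    """One step of the incremental PID law:
    du = Kp*(e - e1) + Ki*e + Kd*(e - 2*e1 + e2)
    """
    kp, ki, kd = gains
    e = setpoint - measurement
    du = kp * (e - state["e1"]) + ki * e + kd * (e - 2 * state["e1"] + state["e2"])
    state["e2"], state["e1"] = state["e1"], e
    state["u"] += du
    return state["u"]

state = {"e1": 0.0, "e2": 0.0, "u": 0.0}
temp = 15.0                                  # initial barn temperature, deg C
for _ in range(50):
    u = pid_step(state, 22.0, temp, gains=(0.8, 0.3, 0.1))
    temp += 0.1 * (u - (temp - 15.0))        # toy heat balance: heater vs. losses
print(round(temp, 1))
```

The integral term drives the steady-state error toward zero; the neural network's role in the paper is to adapt the three gains as the plant conditions change.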

  12. Research on the method of information system risk state estimation based on clustering particle filter

    NASA Astrophysics Data System (ADS)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, with threat analysis at its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining them with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: by clustering all particles, each cluster centroid serves as a representative in subsequent operations, reducing the amount of computation. Empirical results indicate that the method reasonably captures the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
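
The efficiency step can be sketched as follows: cluster the particle set with k-means and carry forward only the cluster centroids as representatives. One-dimensional particles, two clusters, and quantile initialisation are illustrative assumptions, not the paper's configuration.

```python
import random

def kmeans_1d(points, k, iters=20):
    """Plain k-means on scalar particles, quantile-initialised for stability."""
    pts = sorted(points)
    centroids = [pts[(2 * i + 1) * len(pts) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:
            clusters[min(range(k), key=lambda i: abs(p - centroids[i]))].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

random.seed(1)
# Bimodal particle cloud standing in for a risk-state posterior
particles = ([random.gauss(0.2, 0.05) for _ in range(200)]
             + [random.gauss(0.8, 0.05) for _ in range(200)])
representatives = sorted(kmeans_1d(particles, k=2))
print([round(r, 2) for r in representatives])
```

Subsequent filter operations then run on 2 representatives instead of 400 particles, which is the source of the computational saving the abstract describes.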

  13. ICESat Science Investigator led Processing System (I-SIPS)

    NASA Astrophysics Data System (ADS)

    Bhardwaj, S.; Bay, J.; Brenner, A.; Dimarzio, J.; Hancock, D.; Sherman, M.

    2003-12-01

    The ICESat Science Investigator-led Processing System (I-SIPS) generates the GLAS standard data products. It consists of two main parts: the Scheduling and Data Management System (SDMS) and the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software. The system has been operational since the successful launch of ICESat. It ingests data from the GLAS instrument, generates GLAS data products, and distributes them to the GLAS Science Computing Facility (SCF), the Instrument Support Facility (ISF), and the National Snow and Ice Data Center (NSIDC) ECS DAAC. The SDMS is the planning, scheduling, and data management system that runs the GLAS Science Algorithm Software (GSAS). GSAS is based on the Algorithm Theoretical Basis Documents provided by the Science Team and is developed independently of SDMS. The SDMS provides the processing environment to plan jobs based on existing data and to control job flow, data distribution, and archiving. The SDMS design is based on a mission-independent architecture that imposes few constraints on the science code, thereby facilitating I-SIPS integration. I-SIPS currently works autonomously to ingest GLAS instrument data, distribute these data to the ISF, run the science processing algorithms to produce the GLAS standard products, reprocess data when new versions of the science algorithms are released, and distribute the products to the SCF, ISF, and NSIDC. I-SIPS has a proven performance record, having delivered data to the SCF within hours of initial instrument activation. The I-SIPS design philosophy gives this system high potential for reuse in other science missions.

  14. Algorithm for evaluating the effectiveness of a high-rise development project based on current yield

    NASA Astrophysics Data System (ADS)

    Soboleva, Elena

    2018-03-01

    The article addresses the operational evaluation of development project efficiency in high-rise construction under the current economic conditions in Russia. The author considers the following issues: problems of implementing development projects, the influence of the quality of operational evaluation of high-rise construction projects on overall efficiency, assessment of the influence of a project's external environment on the effectiveness of project activities under crisis conditions, and the quality of project management. The article proposes an algorithm and a methodological approach to quality management of developer project efficiency based on operational evaluation of current yield. The methodology for calculating the current efficiency of a development project for high-rise construction has been updated.

  15. Data Sufficiency Assessment and Pumping Test Design for Groundwater Prediction Using Decision Theory and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    McPhee, J.; William, Y. W.

    2005-12-01

    This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. The reliability requirements take into consideration the application of the model results in groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of a genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.
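
A minimal genetic-algorithm sketch of the outer (discrete) search, assuming a bit-string design encoding (which wells to pump) and a stand-in objective in place of the paper's reliability-based criterion; in the full methodology a gradient-based solver would refine the continuous variables of each candidate.

```python
import random

def fitness(design):
    """Stand-in objective: reward informative wells, penalise cost per well."""
    info = sum(i * bit for i, bit in enumerate(design, 1))
    cost = 2.0 * sum(design)
    return info - cost

def ga(n_bits=8, pop_size=20, gens=40, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.1:                # occasional bit-flip mutation
                child[rng.randrange(n_bits)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

Under this toy objective the optimum is to include exactly the wells whose marginal information exceeds their cost (bits 3 through 8, fitness 21); elitism guarantees the best design found is never lost between generations.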

  16. Virtual management of radiology examinations in the virtual radiology environment using common object request broker architecture services.

    PubMed

    Martinez, R; Rozenblit, J; Cook, J F; Chacko, A K; Timboe, H L

    1999-05-01

    In the Department of Defense (DoD), the US Army Medical Command is now embarking on an extremely exciting new project: creating a virtual radiology environment (VRE) for the management of radiology examinations. The business of radiology in the military is therefore being reengineered on several fronts by the VRE Project. In the VRE Project, a set of intelligent agent algorithms determines where examinations are to be routed for reading, based on a knowledge base of the entire VRE. The set of algorithms, called the Meta-Manager, is hierarchical and uses object-based communications between medical treatment facilities (MTFs) and medical centers that have digital imaging network picture archiving and communications systems (DIN-PACS) networks. The communications are based on the use of common object request broker architecture (CORBA) objects and services to send patient demographics and examination images from DIN-PACS networks in the MTFs to the DIN-PACS networks at the medical centers for diagnosis. The Meta-Manager is also responsible for updating the diagnosis at the originating MTF. CORBA services are used to perform secure message communications between DIN-PACS nodes in the VRE network. The Meta-Manager has a fail-safe architecture that allows the master Meta-Manager function to float to regional Meta-Manager sites in case of server failure. A prototype of the CORBA-based Meta-Manager is being developed by the University of Arizona's Computer Engineering Research Laboratory using the Unified Modeling Language (UML) as a design tool. The prototype will implement the main functions described in the Meta-Manager design specification. The results of this project are expected to reengineer the process of radiology in the military and have extensions to commercial radiology environments.

  17. Prediction based active ramp metering control strategy with mobility and safety assessment

    NASA Astrophysics Data System (ADS)

    Fang, Jie; Tu, Lili

    2018-04-01

    Ramp metering (RM) is one of the most direct and efficient motorway traffic flow management measures for improving traffic conditions. However, because earlier studies lacked traffic-condition prediction, the impact of the applied RM control on traffic flow dynamics was not quantitatively evaluated. In this study, an RM control algorithm is presented that adopts the Model Predictive Control (MPC) framework to predict and assess future traffic conditions, taking both the current traffic conditions and the RM-controlled future traffic states into consideration. The designed RM control algorithm aims at optimizing network mobility and safety performance. The algorithm is evaluated in a field-data-based simulation. Comparing the scenario controlled by the presented algorithm with the uncontrolled scenario showed that the proposed RM control algorithm can effectively relieve congestion in the traffic network with no significant compromise in safety.

  18. ALGOS: the development of a randomized controlled trial testing a case management algorithm designed to reduce suicide risk among suicide attempters

    PubMed Central

    2011-01-01

    Background Suicide attempts (SA) constitute a serious clinical problem. People who attempt suicide are at high risk of further repetition. However, no interventions have been shown to be effective in reducing repetition in this group of patients. Methods/Design Multicentre randomized controlled trial. We examine the effectiveness of the «ALGOS algorithm»: an intervention based on a decision tree of contact types which aims at reducing the incidence of repeated suicide attempts over 6 months. This case management algorithm combines the two intervention strategies that have shown a significant reduction in the number of SA repeaters: systematic telephone contact (ineffective in first-attempters) and the «crisis card» (effective only in first-attempters). Participants who are lost to contact and those refusing healthcare can then benefit from «short letters» or «postcards». Discussion The ALGOS algorithm is an easily reproducible and inexpensive intervention that will supply guidelines for the assessment and management of a population that sometimes struggles with healthcare compliance. Furthermore, it will target subgroups of these patients by providing specific interventions for optimizing the benefits of the case management strategy. Trial Registration The study was registered with the ClinicalTrials.gov Registry; number: NCT01123174. PMID:21194496

  19. The effect of a disease management algorithm and dedicated postacute coronary syndrome clinic on achievement of guideline compliance: results from the parkland acute coronary event treatment study.

    PubMed

    Yorio, Jeff; Viswanathan, Sundeep; See, Raphael; Uchal, Linda; McWhorter, Jo Ann; Spencer, Nali; Murphy, Sabina; Khera, Amit; de Lemos, James A; McGuire, Darren K

    2008-01-01

    The application of disease management algorithms by physician extenders has been shown to improve therapeutic adherence in selected populations. It is unknown whether this strategy would improve adherence to secondary prevention goals after acute coronary syndromes (ACSs) in a largely indigent county hospital setting. Patients admitted for ACS were randomized at the time of discharge to usual follow-up care versus the same care with the addition of a physician extender visit. Physician extender visits were conducted according to a treatment algorithm based on contemporary practice guidelines. Groups were compared using the primary end point of achievement of low-density lipoprotein treatment goals at 3 months after discharge and achievement of additional evidence-based practice goals. One hundred forty consecutive patients were randomized. A similar proportion of patients returned for study follow-up in both groups at 3 months (54 [79%]/68 in the usual care group vs 57 [79%]/72 in the intervention group; P = 0.97). Among those completing the 3-month visit, a low-density lipoprotein cholesterol level less than 100 mg/dL was achieved in 37 (69%) of the usual care patients compared with 35 (57%) of those in the intervention group (P = 0.43). There was no statistical difference in implementation of therapeutic lifestyle changes (smoking cessation, cardiac rehabilitation, or exercise) between groups. Prescription rates of evidence-based therapeutics at 3 months were similar in both groups. The implementation of a post-ACS clinic run by a physician extender applying a disease management algorithm did not measurably improve adherence to evidence-based secondary prevention treatment goals. Despite initially high rates of evidence-based treatment at discharge, adherence with follow-up appointments and sustained implementation of evidence-based therapies remains a significant challenge in this high-risk cohort.

  20. Nocardial scleritis: A case report and a suggested algorithm for disease management based on a literature review.

    PubMed

    Cunha, Laura Pires da; Juncal, Verena; Carvalhaes, Cecília Godoy; Leão, Sylvia Cardoso; Chimara, Erica; Freitas, Denise

    2018-06-01

    To report a case of nocardial scleritis and to propose a logical treatment algorithm based on a literature review. It is important to suspect a nocardial infection when evaluating anterior unilateral scleritis accompanied by multiple purulent or necrotic abscesses, especially in male patients with a history of chronic ocular pain and redness, trauma inflicted by organic materials, or recent ophthalmic surgery. A microbiological investigation is essential. In positive cases, a direct smear reveals weakly acid-fast organisms or Gram-positive, thin, beaded, branching filaments. The organism usually grows on blood agar and Löwenstein-Jensen medium. An infection can generally be fully resolved by debridement of necrotic areas and application of topical amikacin drops accompanied by systemic sulfamethoxazole-trimethoprim. Together with the case report described, we review data on a total of 43 eyes with nocardial scleritis. Our proposed algorithm may afford a useful understanding of this sight-threatening disease, facilitating easier and faster diagnosis and management.

  1. Optimizing urine drug testing for monitoring medication compliance in pain management.

    PubMed

    Melanson, Stacy E F; Ptolemy, Adam S; Wasan, Ajay D

    2013-12-01

    It can be challenging to successfully monitor medication compliance in pain management. Clinicians and laboratorians need to collaborate to optimize patient care and maximize operational efficiency. The test menu, assay cutoffs, and testing algorithms utilized in the urine drug testing panels should be periodically reviewed and tailored to the patient population to effectively assess compliance and avoid unnecessary testing and cost to the patient. Pain management and pathology collaborated on an important quality improvement initiative to optimize urine drug testing for monitoring medication compliance in pain management. We retrospectively reviewed 18 months of data from our pain management center. We gathered data on test volumes, positivity rates, and the frequency of false positive results. We also reviewed the clinical utility of our testing algorithms, assay cutoffs, and adulterant panel. In addition, the cost of each component was calculated. The positivity rates for ethanol and 3,4-methylenedioxymethamphetamine were <1%, so we eliminated these tests from our panel. We also lowered the screening cutoff for cocaine to meet the clinical needs of the pain management center. In addition, we changed our testing algorithm for 6-acetylmorphine, benzodiazepines, and methadone. For example, due to the high rate of false negative results using our immunoassay-based benzodiazepine screen, we removed the screening portion of the algorithm and now perform benzodiazepine confirmation up front in all specimens by liquid chromatography-tandem mass spectrometry. Conducting an interdisciplinary quality improvement project allowed us to optimize our testing panel for monitoring medication compliance in pain management and reduce cost. Wiley Periodicals, Inc.

  2. The role of the case manager in a disease management program.

    PubMed

    Huston, Carol J

    2002-01-01

    Disease management programs provide new opportunities and roles for case managers to provide population-based healthcare to the chronically ill. This article identifies common components of disease management programs and examines roles assumed by case managers in disease management programs such as baseline assessment, performing economic analyses of diseases and their respective associated resource utilization, developing and/or implementing care guidelines or algorithms, educational interventions, disease management program implementation, and outcomes assessment. Areas of expertise needed to be an effective case manager in a disease management program are also identified.

  3. The role of the case manager in a disease management program.

    PubMed

    Huston, C J

    2001-01-01

    Disease management programs provide new opportunities and roles for case managers to provide population-based healthcare to the chronically ill. This article identifies common components of disease management programs and examines roles assumed by case managers in disease management programs such as baseline assessment, performing economic analyses of diseases and their respective associated resource utilization, developing and/or implementing care guidelines or algorithms, educational interventions, disease management program implementation, and outcomes assessment. Areas of expertise needed to be an effective case manager in a disease management program are also identified.

  4. Disk storage management for LHCb based on Data Popularity estimator

    NASA Astrophysics Data System (ADS)

    Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-12-01

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
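    As a rough illustration of the idea described above (and not the paper's actual regression or time-series models), the sketch below forecasts per-dataset popularity with simple exponential smoothing and greedily fills a disk budget with the most popular datasets. All names, the smoothing model, and the budget-based policy are illustrative assumptions:

```python
def predict_popularity(weekly_accesses, alpha=0.5):
    """Exponentially smoothed forecast of next-period accesses (illustrative model)."""
    forecast = float(weekly_accesses[0])
    for n in weekly_accesses[1:]:
        forecast = alpha * n + (1.0 - alpha) * forecast
    return forecast

def placement_plan(datasets, disk_budget):
    """Greedily keep the most popular datasets on disk within a size budget;
    everything else remains on the tape archive only."""
    ranked = sorted(datasets, key=lambda d: predict_popularity(d["history"]), reverse=True)
    on_disk, used = [], 0
    for d in ranked:
        if used + d["size"] <= disk_budget:
            on_disk.append(d["name"])
            used += d["size"]
    return on_disk
```

    In the real system a loss function over the whole data distribution is minimized and replica counts are optimized per dataset; the greedy ranking here only conveys the popularity-to-placement idea.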

  5. Management algorithms for cervical cancer screening and precancer treatment for resource-limited settings.

    PubMed

    Basu, Partha; Meheus, Filip; Chami, Youssef; Hariprasad, Roopa; Zhao, Fanghui; Sankaranarayanan, Rengaswamy

    2017-07-01

    Management algorithms for screen-positive women in cervical cancer prevention programs have undergone substantial changes in recent years. The WHO strongly recommends human papillomavirus (HPV) testing for primary screening, if affordable, or if not, then visual inspection with acetic acid (VIA), and promotes treatment directly following screening through the screen-and-treat approach (one or two clinic visits). While VIA-positive women can be offered immediate ablative treatment based on certain eligibility criteria, HPV-positive women need to undergo subsequent VIA to determine their eligibility. Simpler ablative methods of treatment such as cryotherapy and thermal coagulation have been demonstrated to be effective and to have excellent safety profiles, and these have become integral parts of new management algorithms. The challenges faced by low-resource countries are many and include, from the management perspective, identifying an affordable point-of-care HPV detection test, minimizing over-treatment, and installing an effective information system to ensure high compliance to treatment and follow-up. © 2017 The Authors. International Journal of Gynecology & Obstetrics published by John Wiley & Sons Ltd on behalf of International Federation of Gynecology and Obstetrics.

  6. Agent-based traffic management and reinforcement learning in congested intersection network.

    DOT National Transportation Integrated Search

    2012-08-01

    This study evaluates the performance of traffic control systems based on reinforcement learning (RL), also called approximate dynamic programming (ADP). Two algorithms have been selected for testing: 1) Q-learning and 2) approximate dynamic programming...
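    For context on the first of the two methods named above, a minimal tabular Q-learning loop can be sketched as follows. This is a generic textbook sketch, not the study's signal controllers; the bounded episode length, the `step(s, a)` environment interface, and all parameters are illustrative assumptions:

```python
import random

def q_learning(n_states, n_actions, step, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning; `step(s, a) -> (reward, next_state)` is the environment."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = random.randrange(n_states)
        for _ in range(20):  # bounded episode length
            if random.random() < eps:                      # explore
                a = random.randrange(n_actions)
            else:                                          # exploit current estimate
                a = max(range(n_actions), key=lambda x: Q[s][x])
            r, s2 = step(s, a)
            # temporal-difference update toward reward plus discounted best next value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

    In an intersection setting the state would encode queue occupancies and the actions would be signal phases; here both are abstracted into the `step` callback.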

  7. The Diagnostic Challenge Competition: Probabilistic Techniques for Fault Diagnosis in Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Ricks, Brian W.; Mengshoel, Ole J.

    2009-01-01

    Reliable systems health management is an important research area of NASA. A health management system that can accurately and quickly diagnose faults in various on-board systems of a vehicle will play a key role in the success of current and future NASA missions. We introduce in this paper the ProDiagnose algorithm, a diagnostic algorithm that uses a probabilistic approach, accomplished with Bayesian Network models compiled to Arithmetic Circuits, to diagnose these systems. We describe the ProDiagnose algorithm, how it works, and the probabilistic models involved. We show by experimentation on two Electrical Power Systems based on the ADAPT testbed, used in the Diagnostic Challenge Competition (DX 09), that ProDiagnose can produce results with over 96% accuracy and less than 1 second mean diagnostic time.
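    The probabilistic core of such a diagnoser is Bayesian updating. The sketch below shows only a single-fault, single-sensor Bayes rule, a deliberately simplified assumption for illustration; ProDiagnose itself uses full Bayesian network models compiled to arithmetic circuits:

```python
def fault_posterior(prior_fault, p_alarm_given_fault, p_alarm_given_ok, alarm):
    """Posterior fault probability for one fault node seen through one noisy sensor."""
    if alarm:
        like_fault, like_ok = p_alarm_given_fault, p_alarm_given_ok
    else:
        like_fault, like_ok = 1.0 - p_alarm_given_fault, 1.0 - p_alarm_given_ok
    # Bayes rule: P(fault | evidence) = P(evidence | fault) P(fault) / P(evidence)
    num = like_fault * prior_fault
    return num / (num + like_ok * (1.0 - prior_fault))
```

    Even with a rare fault (1% prior), a reliable alarm raises the posterior substantially, which is the kind of inference the compiled network performs jointly over many components.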

  8. Knowing 'something is not right' is beyond intuition: development of a clinical algorithm to enhance surveillance and assist nurses to organise and communicate clinical findings.

    PubMed

    Brier, Jessica; Carolyn, Moalem; Haverly, Marsha; Januario, Mary Ellen; Padula, Cynthia; Tal, Ahuva; Triosh, Henia

    2015-03-01

    To develop a clinical algorithm to guide nurses' critical thinking through systematic surveillance, assessment, actions required and communication strategies. To achieve this, an international, multiphase project was initiated. Patients receive hospital care postoperatively because they require the skilled surveillance of nurses. Effective assessment of postoperative patients is essential for early detection of clinical deterioration and optimal care management. Despite the significant amount of time devoted to surveillance activities, there is lack of evidence that nurses use a consistent, systematic approach in surveillance, management and communication, potentially leading to less optimal outcomes. Several explanations for the lack of consistency have been suggested in the literature. Mixed methods approach. Retrospective chart review; semi-structured interviews conducted with expert nurses (n = 10); algorithm development. Themes developed from the semi-structured interviews, including (1) complete, systematic assessment; (2) something is not right; (3) validating with others; (4) influencing factors; and (5) frustration with lack of response when communicating findings, were used as the basis for development of the Surveillance Algorithm for Post-Surgical Patients. The algorithm proved beneficial based on limited use in clinical settings. Further work is needed to fully test it in education and practice. The Surveillance Algorithm for Post-Surgical Patients represents the approach of expert nurses, and serves to guide less expert nurses' observations, critical thinking, actions and communication. Based on this approach, the algorithm assists nurses to develop skills promoting early detection, intervention and communication in cases of patient deterioration. © 2014 John Wiley & Sons Ltd.

  9. Self-Organized Link State Aware Routing for Multiple Mobile Agents in Wireless Network

    NASA Astrophysics Data System (ADS)

    Oda, Akihiro; Nishi, Hiroaki

    Recently, the importance of data-sharing structures in autonomous distributed networks has been increasing, and wireless sensor networks are used for managing distributed data. This type of distributed network requires effective information-exchange methods for data sharing. To reduce the traffic of broadcast messages, reducing the amount of redundant information is indispensable, and QoS-sensitive routing algorithms for reducing packet loss in mobile ad-hoc networks have been frequently discussed. The topology of a wireless network is likely to change frequently according to the movement of mobile nodes, radio disturbance, or fading due to continuous changes in the environment. Therefore, a packet routing algorithm should guarantee QoS by using quality indicators of the wireless network. In this paper, a novel information-exchange algorithm based on a hash function and a Boolean operation is proposed. This algorithm achieves efficient information exchange by reducing the overhead of broadcast messages, can guarantee QoS in a wireless network environment, and can be applied to routing in a mobile ad-hoc network. In the proposed routing algorithm, a routing table is constructed by using the received signal strength indicator (RSSI), and neighborhood information is periodically broadcast based on this table. The proposed hash-based management of routing entries, keyed by an extended MAC address, eliminates the overhead of message flooding. An analysis of hash-value collisions determines the minimum required hash-value length, and on this mathematical basis an optimum hash function can be given. Simulations are carried out to evaluate the effectiveness of the proposed algorithm and to validate the theory in a general wireless-network routing setting.
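    The hash-length analysis mentioned above can be illustrated with a birthday-bound calculation; the function names and the 1% collision target are assumptions for illustration, not the paper's exact analysis:

```python
def collision_probability(n_nodes, hash_bits):
    """Birthday-problem probability that at least two of n_nodes share a hash value."""
    space = 2 ** hash_bits
    # exact product form: P(no collision) = prod_{i=0}^{n-1} (space - i) / space
    p_unique = 1.0
    for i in range(n_nodes):
        p_unique *= (space - i) / space
    return 1.0 - p_unique

def min_hash_bits(n_nodes, max_collision_prob):
    """Smallest hash length (in bits) keeping the collision probability under target."""
    bits = 1
    while collision_probability(n_nodes, bits) > max_collision_prob:
        bits += 1
    return bits
```

    For example, keeping the chance of any two of 100 nodes colliding below 1% already requires a 19-bit hash value.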

  20. Granulocyte-colony stimulating factor in the prevention of postoperative infectious complications and sub-optimal recovery from operation in patients with colorectal cancer and increased preoperative risk (ASA 3 and 4). Protocol of a controlled clinical trial developed by consensus of an international study group. Part three: individual patient, complication algorithm and quality management.

    PubMed

    Stinner, B; Bauhofer, A; Lorenz, W; Rothmund, M; Plaul, U; Torossian, A; Celik, I; Sitter, H; Koller, M; Black, A; Duda, D; Encke, A; Greger, B; van Goor, H; Hanisch, E; Hesterberg, R; Klose, K J; Lacaine, F; Lorijn, R H; Margolis, C; Neugebauer, E; Nyström, P O; Reemst, P H; Schein, M; Solovera, J

    2001-05-01

    Presentation of a new type of study protocol for evaluating the effectiveness of an immune modifier (rhG-CSF, filgrastim) in preventing postoperative infectious complications and sub-optimal recovery from operation in patients with colorectal cancer and increased preoperative risk (ASA 3 and 4). A randomised, placebo-controlled, double-blinded, single-centre study is performed at a university hospital (n = 40 patients for each group). This part presents the course of the individual patient, a complication algorithm for the management of anastomotic leakage, and quality management. Part three of the protocol comprises three major sections: (1) the course of the individual patient, presented in a comprehensive graphic display covering the perioperative period, hospital stay and post-discharge outcome; (2) a centre-based clinical practice guideline for the management of the most important postoperative complication, anastomotic leakage, including evidence-based support for each step of the algorithm; and (3) data management, ethics and organisational structure. Future studies with immune modifiers will also fail if they are not better structured (reduction of variance) to achieve uniform patient management in a complex clinical scenario. This new type of single-centre trial aims to reduce the gap between animal experiments and clinical trials or, if it fails, at least to demonstrate new ways of explaining the failures.

  11. Survey of PRT Vehicle Management Algorithms

    DOT National Transportation Integrated Search

    1974-01-01

    The document summarizes the results of a literature survey of state of the art vehicle management algorithms applicable to Personal Rapid Transit Systems(PRT). The surveyed vehicle management algorithms are organized into a set of five major componen...

  12. Distributed autonomous systems: resource management, planning, and control algorithms

    NASA Astrophysics Data System (ADS)

    Smith, James F., III; Nguyen, ThanhVu H.

    2005-05-01

    Distributed autonomous systems, i.e., systems that have separated distributed components, each of which exhibits some degree of autonomy, are increasingly providing solutions to naval and other DoD problems. Recently developed control, planning and resource allocation algorithms for two types of distributed autonomous systems will be discussed. The first distributed autonomous system (DAS) to be discussed consists of a collection of unmanned aerial vehicles (UAVs) that are under fuzzy logic control. The UAVs fly and conduct meteorological sampling in a coordinated fashion determined by their fuzzy logic controllers to determine the atmospheric index of refraction. Once in flight, no human intervention is required. A fuzzy planning algorithm determines the optimal trajectory, sampling rate and pattern for the UAVs and an interferometer platform while taking into account risk, reliability, priority for sampling in certain regions, fuel limitations, mission cost, and related uncertainties. The real-time fuzzy control algorithm running on each UAV gives the UAV limited autonomy, allowing it to change course immediately without consulting any commander, request other UAVs to help it, alter its sampling pattern and rate when observing interesting phenomena, or terminate the mission and return to base. These algorithms will be compared to a resource manager (RM) developed for another DAS problem related to electronic attack (EA). This RM is based on fuzzy logic and optimized by evolutionary algorithms. It allows a group of dissimilar platforms to use EA resources distributed throughout the group. For both DAS types, significant theoretical and simulation results will be presented.

  13. A novel medical information management and decision model for uncertain demand optimization.

    PubMed

    Bi, Ya

    2015-01-01

    Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost, but under uncertain demand it is difficult to make accurate decisions on procurement volume. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics is clearly better than using general random distribution functions. The objective is to establish a novel medical information management and decision model for uncertain-demand optimization. A novel optimal management and decision model under uncertain demand is presented, based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm. The model can effectively reduce medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence effectively reduces the computational complexity of the model. The new model can therefore be used for accurate decisions on procurement volume under uncertain demand.
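    A minimal particle swarm optimizer for choosing a procurement volume that minimizes an inventory-cost function might be sketched as follows. This is a generic textbook PSO, not the paper's comprehensively improved variant; the quadratic stand-in cost and all parameter values are illustrative assumptions:

```python
import random

def pso(cost, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D particle swarm optimizer minimizing `cost` over [lo, hi]."""
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]   # positions
    vs = [0.0] * n_particles                                    # velocities
    pbest = xs[:]                                               # personal bests
    gbest = min(xs, key=cost)                                   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))             # keep inside bounds
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i]
            if cost(xs[i]) < cost(gbest):
                gbest = xs[i]
    return gbest
```

    In the paper's setting, `cost` would combine holding and shortage costs evaluated against the fuzzy demand model rather than a fixed point estimate.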

  14. An optical water type framework for selecting and blending retrievals from bio-optical algorithms in lakes and coastal waters.

    PubMed

    Moore, Timothy S; Dowell, Mark D; Bradt, Shane; Verdu, Antonio Ruiz

    2014-03-05

    Bio-optical models are based on relationships between the spectral remote sensing reflectance and optical properties of in-water constituents. The wavelength range where this information can be exploited changes depending on the water characteristics. In low chlorophyll-a waters, the blue/green region of the spectrum is more sensitive to changes in chlorophyll-a concentration, whereas the red/NIR region becomes more important in turbid and/or eutrophic waters. In this work we present an approach to manage the shift from blue/green ratios to red/NIR-based chlorophyll-a algorithms for optically complex waters. Based on a combined in situ data set of coastal and inland waters, measures of overall algorithm uncertainty were roughly equal for two chlorophyll-a algorithms (the standard NASA OC4 algorithm based on blue/green bands and a MERIS 3-band algorithm based on red/NIR bands), with RMS errors of 0.416 and 0.437 in log chlorophyll-a units, respectively. However, it is clear that each algorithm performs better over different chlorophyll-a ranges. When a blending approach based on an optical water type (OWT) classification is used, the overall RMS error is reduced to 0.320. Bias and relative error were also reduced when evaluating the blended chlorophyll-a product compared with either single-algorithm product. As a demonstration for ocean color applications, the algorithm blending approach was applied to MERIS imagery over Lake Erie. We also examined the use of this approach in several coastal marine environments and examined the long-term frequency of the OWTs in MODIS-Aqua imagery over Lake Erie.
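    The blending idea reduces, per pixel, to a membership-weighted average of the two retrievals. The sketch below is an illustrative assumption (a single scalar membership in a "turbid" water type), not the paper's full optical water type classifier:

```python
def blend_chlorophyll(chl_bluegreen, chl_rednir, turbid_membership):
    """Blend two chlorophyll-a retrievals by a pixel's optical-water-type membership.
    turbid_membership in [0, 1]: 0 -> pure blue/green retrieval (clear water),
    1 -> pure red/NIR retrieval (turbid/eutrophic water)."""
    w = max(0.0, min(1.0, turbid_membership))  # clamp membership to [0, 1]
    return (1.0 - w) * chl_bluegreen + w * chl_rednir
```

    The smooth weighting avoids the discontinuities a hard algorithm switch would introduce at water-type boundaries.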

  15. Multinational evidence-based recommendations for pain management by pharmacotherapy in inflammatory arthritis: integrating systematic literature research and expert opinion of a broad panel of rheumatologists in the 3e Initiative

    PubMed Central

    Colebatch, Alexandra N.; Buchbinder, Rachelle; Edwards, Christopher J.; Adams, Karen; Englbrecht, Matthias; Hazlewood, Glen; Marks, Jonathan L.; Radner, Helga; Ramiro, Sofia; Richards, Bethan L.; Tarner, Ingo H.; Aletaha, Daniel; Bombardier, Claire; Landewé, Robert B.; Müller-Ladner, Ulf; Bijlsma, Johannes W. J.; Branco, Jaime C.; Bykerk, Vivian P.; da Rocha Castelar Pinheiro, Geraldo; Catrina, Anca I.; Hannonen, Pekka; Kiely, Patrick; Leeb, Burkhard; Lie, Elisabeth; Martinez-Osuna, Píndaro; Montecucco, Carlomaurizio; Østergaard, Mikkel; Westhovens, Rene; Zochling, Jane; van der Heijde, Désirée

    2012-01-01

    Objective. To develop evidence-based recommendations for pain management by pharmacotherapy in patients with inflammatory arthritis (IA). Methods. A total of 453 rheumatologists from 17 countries participated in the 2010 3e (Evidence, Expertise, Exchange) Initiative. Using a formal voting process, 89 rheumatologists representing all 17 countries selected 10 clinical questions regarding the use of pain medications in IA. Bibliographic fellows undertook a systematic literature review for each question, using MEDLINE, EMBASE, Cochrane CENTRAL and 2008–09 European League Against Rheumatism (EULAR)/ACR abstracts. Relevant studies were retrieved for data extraction and quality assessment. Rheumatologists from each country used this evidence to develop a set of national recommendations. Multinational recommendations were then formulated and assessed for agreement and the potential impact on clinical practice. Results. A total of 49 242 references were identified, from which 167 studies were included in the systematic reviews. One clinical question regarding different comorbidities was divided into two separate reviews, resulting in 11 recommendations in total. Oxford levels of evidence were applied to each recommendation. The recommendations related to the efficacy and safety of various analgesic medications, pain measurement scales and pain management in the pre-conception period, pregnancy and lactation. Finally, an algorithm for the pharmacological management of pain in IA was developed. Twenty per cent of rheumatologists reported that the algorithm would change their practice, and 75% felt the algorithm was in accordance with their current practice. Conclusions. Eleven evidence-based recommendations on the management of pain by pharmacotherapy in IA were developed. They are supported by a large panel of rheumatologists from 17 countries, thus enhancing their utility in clinical practice. PMID:22447886

  16. Russian guidelines for the management of COPD: algorithm of pharmacologic treatment

    PubMed Central

    Aisanov, Zaurbek; Avdeev, Sergey; Arkhipov, Vladimir; Belevskiy, Andrey; Chuchalin, Alexander; Leshchenko, Igor; Ovcharenko, Svetlana; Shmelev, Evgeny; Miravitlles, Marc

    2018-01-01

    The high prevalence of COPD together with its high level of misdiagnosis and late diagnosis dictate the necessity for the development and implementation of clinical practice guidelines (CPGs) in order to improve the management of this disease. High-quality, evidence-based international CPGs need to be adapted to the particular situation of each country or region. A new version of the Russian Respiratory Society guidelines released at the end of 2016 was based on the proposal by Global Initiative for Obstructive Lung Disease but adapted to the characteristics of the Russian health system and included an algorithm of pharmacologic treatment of COPD. The proposed algorithm had to comply with the requirements of the Russian Ministry of Health to be included into the unified electronic rubricator, which required a balance between the level of information and the simplicity of the graphic design. This was achieved by: exclusion of the initial diagnostic process, grouping together the common pharmacologic and nonpharmacologic measures for all patients, and the decision not to use the letters A–D for simplicity and clarity. At all stages of the treatment algorithm, efficacy and safety have to be carefully assessed. Escalation and de-escalation is possible in the case of lack of or insufficient efficacy or safety issues. Bronchodilators should not be discontinued except in the case of significant side effects. At the same time, inhaled corticosteroid (ICS) withdrawal is not represented in the algorithm, because it was agreed that there is insufficient evidence to establish clear criteria for ICSs discontinuation. Finally, based on the Global Initiative for Obstructive Lung Disease statement, the proposed algorithm reflects and summarizes different approaches to the pharmacological treatment of COPD taking into account the reality of health care in the Russian Federation. PMID:29386887

  17. Russian guidelines for the management of COPD: algorithm of pharmacologic treatment.

    PubMed

    Aisanov, Zaurbek; Avdeev, Sergey; Arkhipov, Vladimir; Belevskiy, Andrey; Chuchalin, Alexander; Leshchenko, Igor; Ovcharenko, Svetlana; Shmelev, Evgeny; Miravitlles, Marc

    2018-01-01

    The high prevalence of COPD together with its high level of misdiagnosis and late diagnosis dictate the necessity for the development and implementation of clinical practice guidelines (CPGs) in order to improve the management of this disease. High-quality, evidence-based international CPGs need to be adapted to the particular situation of each country or region. A new version of the Russian Respiratory Society guidelines released at the end of 2016 was based on the proposal by Global Initiative for Obstructive Lung Disease but adapted to the characteristics of the Russian health system and included an algorithm of pharmacologic treatment of COPD. The proposed algorithm had to comply with the requirements of the Russian Ministry of Health to be included into the unified electronic rubricator, which required a balance between the level of information and the simplicity of the graphic design. This was achieved by: exclusion of the initial diagnostic process, grouping together the common pharmacologic and nonpharmacologic measures for all patients, and the decision not to use the letters A-D for simplicity and clarity. At all stages of the treatment algorithm, efficacy and safety have to be carefully assessed. Escalation and de-escalation is possible in the case of lack of or insufficient efficacy or safety issues. Bronchodilators should not be discontinued except in the case of significant side effects. At the same time, inhaled corticosteroid (ICS) withdrawal is not represented in the algorithm, because it was agreed that there is insufficient evidence to establish clear criteria for ICSs discontinuation. Finally, based on the Global Initiative for Obstructive Lung Disease statement, the proposed algorithm reflects and summarizes different approaches to the pharmacological treatment of COPD taking into account the reality of health care in the Russian Federation.

  18. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.

  19. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.
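    A generic bit-string genetic algorithm of the kind underlying these two records can be sketched as follows. The simple bit-count fitness stand-in and all parameters are illustrative assumptions, far simpler than the Autonomous Operations Planner's maneuver pattern templates and conflict-prediction fitness:

```python
import random

def genetic_resolve(fitness, genome_len, pop=40, gens=60, pmut=0.1):
    """Elitist one-point-crossover GA over bit strings; higher fitness is better."""
    population = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]            # elitism: keep the best half
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            # per-bit mutation with probability pmut
            children.append([g ^ 1 if random.random() < pmut else g for g in child])
        population = parents + children
    return max(population, key=fitness)
```

    In the actual tool, a genome would encode the choice of maneuver template plus its parameters, and fitness would score conflict-free trajectories with minimal route deviation.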

  20. A method of distributed avionics data processing based on SVM classifier

    NASA Astrophysics Data System (ADS)

    Guo, Hangyu; Wang, Jinyan; Kang, Minyang; Xu, Guojing

    2018-03-01

    In the context of system-level combat, and to solve the problem of managing and analyzing the massive heterogeneous data of a multi-platform avionics system, this paper proposes a management solution called the avionics "resource cloud", based on big data technology, and designs an aided-decision classifier based on the SVM algorithm. We designed an experiment with an STK simulation; the results show that this method has high accuracy and broad application prospects.
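    As an illustration of the classifier component, a linear SVM can be trained with Pegasos-style sub-gradient descent. This pure-Python sketch is an assumption for illustration, not the paper's implementation; labels are in {-1, +1} and all parameters are illustrative:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style sub-gradient training of a linear SVM (hinge loss + L2)."""
    d = len(X[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        for i in random.sample(range(len(X)), len(X)):   # shuffled pass over the data
            t += 1
            eta = 1.0 / (lam * t)                        # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]     # regularization shrink
            if margin < 1:                               # hinge-loss sub-gradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

    A production classifier for avionics data would of course use a vetted library and richer features; the point here is only the maximum-margin decision rule.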

  1. Model-based approach for cyber-physical attack detection in water distribution systems.

    PubMed

    Housh, Mashor; Ohar, Ziv

    2018-08-01

    Modern Water Distribution Systems (WDSs) are often controlled by Supervisory Control and Data Acquisition (SCADA) systems and Programmable Logic Controllers (PLCs) which manage their operation and maintain a reliable water supply. As such, and with the cyber layer becoming a central component of WDS operations, these systems are at a greater risk of being subjected to cyberattacks. This paper offers a model-based methodology based on a detailed hydraulic understanding of WDSs combined with an anomaly detection algorithm for the identification of complex cyberattacks that cannot be fully identified by hydraulically based rules alone. The results show that the proposed algorithm is capable of achieving the best-known performance when tested on the data published in the BATtle of the Attack Detection ALgorithms (BATADAL) competition (http://www.batadal.net). Copyright © 2018. Published by Elsevier Ltd.
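The paper's detection combines a hydraulic model with an anomaly detector. A minimal residual-threshold sketch of that idea (not the BATADAL-winning algorithm itself; the signal, noise level, and threshold are illustrative assumptions) flags time steps where SCADA observations diverge from the model's prediction:

```python
import numpy as np

def detect_anomalies(observed, predicted, k=3.0):
    """Flag time steps where the residual between observations and the
    hydraulic model's prediction deviates more than k standard deviations."""
    residual = observed - predicted
    threshold = k * residual.std()
    return np.abs(residual - residual.mean()) > threshold

# Simulated tank levels: model prediction plus sensor noise, with a
# block of spoofed readings injected as a stand-in for a cyberattack.
rng = np.random.default_rng(1)
predicted = np.sin(np.linspace(0, 6, 200))
observed = predicted + rng.normal(0, 0.02, 200)
observed[150:160] += 0.5          # spoofed readings
flags = detect_anomalies(observed, predicted)
```

A production detector would estimate the residual statistics from attack-free training data rather than from the monitored window itself.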

  2. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    NASA Astrophysics Data System (ADS)

    Statovci, Driton; Nordström, Tomas; Nilsson, Rickard

    2006-12-01

    We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.
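The NRIA extends iterative water-filling, whose core is the classic single-user water-filling step. A minimal sketch of that step (assuming known per-subchannel noise-to-gain ratios and a total power budget; not the NRIA itself), with the water level found by bisection:

```python
def water_fill(inv_gains, budget, tol=1e-9):
    """Allocate power p_i = max(0, mu - inv_gains[i]) so that sum(p) equals
    the budget, finding the water level mu by bisection."""
    lo, hi = 0.0, max(inv_gains) + budget
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if sum(max(0.0, mu - g) for g in inv_gains) > budget:
            hi = mu      # water level too high: total power exceeds budget
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - g) for g in inv_gains]

# Three subchannels; the worst one (ratio 1.0) gets no power here.
p = water_fill([0.1, 0.5, 1.0], budget=1.0)
```

Subchannels whose noise-to-gain ratio lies above the water level receive zero power, which is the characteristic shape of the solution.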

  3. Health management system for rocket engines

    NASA Technical Reports Server (NTRS)

    Nemeth, Edward

    1990-01-01

    The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.

  4. Pretest probability of a normal echocardiography: validation of a simple and practical algorithm for routine use.

    PubMed

    Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard

    2014-02-01

Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to examine than those with heart abnormalities. A reliable method for assessing the pretest probability of a normal TTE may optimize the management of requests. To establish and validate, based on requests for examinations, a simple algorithm for defining the pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated with normality. Low pretest probability of a normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age >70 years. In the prospective phase, the prevalences of normality were 72% and 25% in the high (n=167) and low (n=241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than that of abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P=0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize the management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
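The classification rule stated in the abstract is simple enough to write down directly. A hypothetical encoding (parameter names are illustrative, not from the paper):

```python
def pretest_probability_normal(cardiac_history, history_uncertain=False, age=None):
    """Return 'low' or 'high' pretest probability of a normal TTE, per the
    stated rule: low if known cardiac history, or if the history is
    uncertain and age > 70 years; high otherwise."""
    if cardiac_history:
        return "low"
    if history_uncertain and age is not None and age > 70:
        return "low"
    return "high"
```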

  5. [Clinical study using activity-based costing to assess cost-effectiveness of a wound management system utilizing modern dressings in comparison with traditional wound care].

    PubMed

    Ohura, Takehiko; Sanada, Hiromi; Mino, Yoshio

    2004-01-01

In recent years, the concept of cost-effectiveness, including medical delivery and health service fee systems, has become widespread in Japanese health care. In the field of pressure ulcer management, the recent introduction of penalty subtraction in the care fee system emphasizes the need for prevention and cost-effective care of pressure ulcers. Previous cost-effectiveness research on pressure ulcer management tended to focus only on "hardware" costs such as those for pharmaceuticals and medical supplies, while neglecting other cost aspects, particularly the cost of labor. Thus, cost-effectiveness in pressure ulcer care has not yet been fully established. To provide true cost-effectiveness data, a comparative prospective study was initiated in patients with stage II and III pressure ulcers. In particular, considering the potential impact of the pressure-reduction mattress on clinical outcome, the same type of pressure-reduction mattress was used in all cases in the study. The cost analysis method used was Activity-Based Costing, which measures material and labor cost aspects on a daily basis. A reduction in the Pressure Sore Status Tool (PSST) score was used to measure clinical effectiveness. Patients were divided into three groups based on the treatment method and on the use of a consistent algorithm of wound care: 1. MC/A group, modern dressings with a treatment algorithm (control cohort). 2. TC/A group, traditional care (ointment and gauze) with a treatment algorithm. 3. TC/NA group, traditional care (ointment and gauze) without a treatment algorithm. The results revealed that MC/A is more cost-effective than both TC/A and TC/NA. This suggests that appropriate utilization of modern dressing materials and a pressure ulcer care algorithm would contribute to reduced health care costs, improved clinical results, and, ultimately, greater cost-effectiveness.

  6. Antineoplastic agents extravasation from peripheral intravenous line in children: a simple strategy for a safer nursing care.

    PubMed

    Chanes, Daniella Cristina; da Luz Gonçalves Pedreira, Mavilde; de Gutiérrez, Maria Gaby Rivero

    2012-02-01

The infusion of antineoplastic agents through peripheral lines may lead to several adverse events, such as extravasation, which is one of the most severe acute reactions to this sort of treatment. Extravasation prevention and management must be part of safe, evidence-based nursing care. For this reason, two algorithms were developed with the purpose of guiding nursing care for children who undergo chemotherapy through a peripheral line. The objectives of this study were to determine the content validity of both algorithms with pediatric oncology nurses in Brazil and the United States of America, and to verify the agreement between the evaluations of both groups. A descriptive validation study was carried out using the Delphi Technique, which has the following steps: development of the data collection instrument, application to the specialists, data analysis, review of the algorithms, re-evaluation by the specialists, final data analysis and content validity determination. The data analysis was descriptive and based on a specialist agreement consensus of 80% or higher in every step of the algorithms. The process showed that agreement with both instruments ranged from 92.8% to 99.0%. The algorithms are valid for application in nursing care with the main purpose of preventing and managing antineoplastic agent extravasation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. A mobile agent-based moving objects indexing algorithm in location based service

    NASA Astrophysics Data System (ADS)

    Fang, Zhixiang; Li, Qingquan; Xu, Hong

    2006-10-01

This paper extends the advantages of location-based services, specifically their ability to manage and index the positions of moving objects. With this objective in mind, a mobile agent-based moving-object indexing algorithm is proposed to efficiently process indexing requests and to adapt itself to the limitations of the location-based service environment. The prominent feature of this structure is that it views a moving object's behavior as the mobile agent's span: a unique mapping between the geographical position of a moving object and the span point of its mobile agent is built to maintain the close relationship between them, and serves as a significant clue for the mobile agent-based index to track moving objects.

  8. Bayesian inference and decision theory - A framework for decision making in natural resource management

    USGS Publications Warehouse

    Dorazio, R.M.; Johnson, F.A.

    2003-01-01

    Bayesian inference and decision theory may be used in the solution of relatively complex problems of natural resource management, owing to recent advances in statistical theory and computing. In particular, Markov chain Monte Carlo algorithms provide a computational framework for fitting models of adequate complexity and for evaluating the expected consequences of alternative management actions. We illustrate these features using an example based on management of waterfowl habitat.
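The decision-theoretic step described above can be sketched in miniature: draw samples from a posterior for each management action and pick the action with the highest expected utility. All numbers and the utility function below are illustrative assumptions, not from the waterfowl example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Posterior for a survival probability under each action, from hypothetical
# monitoring counts: Beta(successes + 1, failures + 1) with a uniform prior.
posterior_a = rng.beta(30 + 1, 20 + 1, size=10_000)   # action A: 30/50
posterior_b = rng.beta(18 + 1, 7 + 1, size=10_000)    # action B: 18/25

cost = {"A": 1.0, "B": 3.0}  # illustrative management costs

def expected_utility(samples, cost, value_per_unit=10.0):
    """Monte Carlo estimate of E[utility] = value * survival - cost."""
    return float(np.mean(value_per_unit * samples) - cost)

# Choose the action maximizing posterior expected utility.
best = max(["A", "B"], key=lambda a: expected_utility(
    {"A": posterior_a, "B": posterior_b}[a], cost[a]))
```

In real applications the posterior samples would come from an MCMC fit of a model of adequate complexity, as the abstract notes, rather than a conjugate Beta update.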

  9. Managing Network Partitions in Structured P2P Networks

    NASA Astrophysics Data System (ADS)

    Shafaat, Tallat M.; Ghodsi, Ali; Haridi, Seif

Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions; the problem of partitions and mergers is therefore closely related to fault tolerance and self-management in large-scale systems, making resilience to network partitions a crucial requirement for building any structured peer-to-peer system. Yet the problem has hardly been studied in the context of structured peer-to-peer systems. Structured overlays have mainly been studied under churn (frequent joins/failures), which as a side effect solves the problem of network partitions, since a partition resembles a massive node failure; the crucial aspect of network mergers, however, has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of structured overlays, are intrinsically ill-suited for merging rings. In this chapter, we motivate the problem of network partitions and mergers in structured overlays. We discuss how a structured overlay can automatically detect a network partition and a merger, and we present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution under dynamic conditions, showing how it remains resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when it falsely detects a merger, it quickly terminates and does not clutter the network with many messages. The algorithm is flexible in that the tradeoff between message complexity and time complexity can be adjusted by a parameter.
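The end state of a ring merger can be illustrated in miniature. The toy below is centralized and says nothing about the chapter's distributed, churn-resilient algorithm; it only shows what "merging two Chord-like rings" means structurally: union the node identifiers and recompute each node's successor on the combined ring:

```python
def merge_rings(ring_a, ring_b):
    """Toy merger of two Chord-like identifier rings: union the node IDs and
    recompute each node's successor on the combined ring (wrapping around)."""
    ids = sorted(set(ring_a) | set(ring_b))
    return {node: ids[(i + 1) % len(ids)] for i, node in enumerate(ids)}

# Two rings that diverged during a partition, sharing node 40.
succ = merge_rings([2, 10, 40], [5, 25, 40])
```

The distributed algorithm must reach this same successor structure through local message exchange only, which is the hard part the chapter addresses.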

  10. Technologies for network-centric C4ISR

    NASA Astrophysics Data System (ADS)

    Dunkelberger, Kirk A.

    2003-07-01

Three technologies form the heart of any network-centric command, control, communication, intelligence, surveillance, and reconnaissance (C4ISR) system: distributed processing, reconfigurable networking, and distributed resource management. Distributed processing, enabled by automated federation, mobile code, intelligent process allocation, dynamic multiprocessing groups, checkpointing, and other capabilities, creates a virtual peer-to-peer computing network across the force. Reconfigurable networking, consisting of content-based information exchange, dynamic ad-hoc routing, information operations (perception management) and other component technologies, forms the interconnect fabric for fault-tolerant interprocessor and node communication. Distributed resource management, which provides the means for distributed cooperative sensor management, foe sensor utilization, opportunistic collection, symbiotic inductive/deductive reasoning and other applications, provides the canonical algorithms for network-centric enterprises and warfare. This paper introduces these three core technologies and briefly discusses a sampling of their component technologies and their individual contributions to network-centric enterprises and warfare. Based on the implied requirements, two new algorithms are defined and characterized which provide critical building blocks for network centricity: distributed asynchronous auctioning and predictive dynamic source routing. The first provides a reliable, efficient, and effective approach to near-optimal assignment problems; the algorithm has been demonstrated to be a viable implementation for ad-hoc command and control, object/sensor pairing, and weapon/target assignment. The second is founded on traditional dynamic source routing (from mobile ad-hoc networking), but leverages the results of ad-hoc command and control (from the contributed auctioning algorithm) to achieve significant increases in connection reliability through forward prediction.
Emphasis is placed on the advantages gained from the closed-loop interaction of the multiple technologies in the network-centric application environment.
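The distributed asynchronous auctioning mentioned above builds on the classical auction algorithm for assignment problems (Bertsekas). A compact, centralized, synchronous sketch of that classical algorithm follows; it is not the paper's distributed variant, and the benefit matrix is invented for illustration:

```python
def auction_assign(benefit, eps=1e-3):
    """Forward auction for the assignment problem: each unassigned bidder i
    bids on its best object j at current prices, raising the price by
    (best value - second-best value + eps). With integer benefits and
    n*eps < 1 the final assignment is optimal."""
    n = len(benefit)
    prices = [0.0] * n
    owner = [None] * n                 # owner[j] = bidder holding object j
    assigned = [None] * n              # assigned[i] = object held by bidder i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [benefit[i][j] - prices[j] for j in range(n)]
        j = max(range(n), key=values.__getitem__)
        second = max(v for k, v in enumerate(values) if k != j) if n > 1 else values[j]
        prices[j] += values[j] - second + eps
        if owner[j] is not None:       # evict the previous owner of object j
            assigned[owner[j]] = None
            unassigned.append(owner[j])
        owner[j], assigned[i] = i, j
    return assigned

benefit = [[10, 5, 8], [7, 9, 6], [4, 8, 3]]   # illustrative weapon/target payoffs
assignment = auction_assign(benefit)
```

The algorithm's appeal for network-centric use is that each bid depends only on local values and current prices, which is what makes a distributed, asynchronous version feasible.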

  11. Dynamic airspace configuration algorithms for next generation air transportation system

    NASA Astrophysics Data System (ADS)

    Wei, Jian

The National Airspace System (NAS) is under great pressure to safely and efficiently handle today's record-high air traffic volume, and will face an even greater challenge keeping pace with the steady increase of future air travel demand, which is projected to reach two to three times the current level by 2025. The inefficiency of traffic flow management initiatives causes severe airspace congestion and frequent flight delays, which cost billions in economic losses every year. To address the increasingly severe airspace congestion and delays, the Next Generation Air Transportation System (NextGen) has been proposed to transform the current static and rigid radar-based system into a dynamic and flexible satellite-based system. New operational concepts such as Dynamic Airspace Configuration (DAC) are under development to provide the flexibility required to mitigate demand-capacity imbalances and increase the throughput of the entire NAS. In this dissertation, we address the DAC problem in the en route and terminal airspace under the framework of NextGen. We develop a series of algorithms to facilitate the implementation of innovative concepts relevant to DAC in both the en route and terminal airspace, as well as a performance evaluation framework for comprehensive benefit analyses of future sector design algorithms. First, we develop a graph-based sectorization algorithm for DAC in the en route airspace, which models the underlying air route network with a weighted graph, converts the sectorization problem into a graph partition problem, partitions the weighted graph with an iterative spectral bipartition method, and constructs the sectors from the partitioned graph. The algorithm uses a graph model to accurately capture the complex traffic patterns of real flights, and generates sectors with high efficiency while evenly distributing the workload among them.
We further improve the robustness and efficiency of the graph-based DAC algorithm by incorporating the Multilevel Graph Partitioning (MGP) method into the graph model, and develop an MGP-based sectorization algorithm for DAC in the en route airspace. In a comprehensive benefit analysis, the performance of the proposed algorithms is tested in numerical simulations with Enhanced Traffic Management System (ETMS) data. Simulation results demonstrate that the algorithmically generated sectorizations outperform the current sectorizations in different sectors for different time periods. Second, based on our experience with DAC in the en route airspace, we further study the sectorization problem for DAC in the terminal airspace. The differences between the en route and terminal airspace are identified, and their influence on terminal sectorization is analyzed. After adjusting the graph model to better capture the unique characteristics of the terminal airspace and the requirements of terminal sectorization, we develop a graph-based geometric sectorization algorithm for DAC in the terminal airspace. Moreover, the graph-based model is combined with the region-based sector design method to better handle the complicated geometric and operational constraints in the terminal sectorization problem. In the benefit analysis, we identify the contributing factors to terminal controller workload, define evaluation metrics, and develop a benefit analysis framework for terminal sectorization evaluation. With the evaluation framework developed, we demonstrate the improvements over the current sectorizations with real traffic data collected from several major international airports in the U.S., and conduct a detailed analysis of the potential benefits of dynamic reconfiguration in the terminal airspace.
Finally, in addition to the research on the macroscopic behavior of a large number of aircraft, we also study the dynamical behavior of individual aircraft from the perspective of traffic flow management. We formulate the mode-confusion problem as a hybrid estimation problem, and develop a state estimation algorithm for linear hybrid systems with continuous-state-dependent transitions based on sparse observations. We also develop an estimated-time-of-arrival prediction algorithm based on the state-dependent-transition hybrid estimation algorithm, whose performance is demonstrated with simulations of the landing procedure following the Continuous Descent Approach (CDA) profile.
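The spectral bipartition step at the core of the graph-based sectorization can be illustrated on a small graph (a sketch of the generic technique, not the dissertation's full MGP pipeline): split the nodes by the sign of the Fiedler vector of the graph Laplacian.

```python
import numpy as np

def spectral_bipartition(adjacency):
    """Split a graph in two by the sign of the Fiedler vector (the
    eigenvector of the Laplacian with the second-smallest eigenvalue)."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A       # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler >= 0

# Two triangles joined by a single edge: the natural cut is 0-2 vs 3-5.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
part = spectral_bipartition(A)
```

Applied iteratively with traffic-weighted edges, this is the mechanism by which such methods balance workload while cutting few busy routes.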

  12. A Distributed Wireless Camera System for the Management of Parking Spaces.

    PubMed

    Vítek, Stanislav; Melničuk, Petr

    2017-12-28

The importance of detecting parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine the occupancy of parking spaces based on information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient occupancy-detection algorithm based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at the rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. The reliability of the implemented algorithm is evaluated with three different test sets, which altogether contain over 700,000 samples of parking spaces.
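The core of a HOG descriptor is a histogram of gradient orientations weighted by gradient magnitude. A simplified sketch of that core (no cell grid, block normalization, or SVM stage, all of which the full pipeline would add):

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """Simplified core of a HOG descriptor: gradient magnitudes accumulated
    into unsigned-orientation bins over a grayscale image patch."""
    gy, gx = np.gradient(patch.astype(float))       # per-pixel gradients
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)         # orientation in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())       # magnitude-weighted vote
    return hist

# A patch of vertical stripes: all gradients are horizontal, so all the
# magnitude should land in the first (near-zero-orientation) bin.
patch = np.tile([0, 0, 1, 1, 0, 0, 1, 1], (8, 1)) * 255.0
hist = orientation_histogram(patch)
```

In the paper's setting, descriptors like this, computed per parking-space region, feed the SVM that outputs occupied/vacant.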

  13. Perioperative management of endocrine insufficiency after total pancreatectomy for neoplasia.

    PubMed

    Maker, Ajay V; Sheikh, Raashid; Bhagia, Vinita

    2017-09-01

Indications for total pancreatectomy (TP) have increased, including for diffuse main duct intrapapillary mucinous neoplasms of the pancreas and malignancy; therefore, the need persists for surgeons to develop appropriate endocrine post-operative management strategies. The brittle diabetes after TP differs from type 1/2 diabetes in that patients have an absolute deficiency of both insulin and functional glucagon. This makes glucose management challenging, complicates recovery, and predisposes to hospital readmissions. This article aims to define the disease, describe the cause of its occurrence, review the anatomy of the endocrine pancreas, and explain how this condition differs from diabetes mellitus in the setting of post-operative management. The morbidity and mortality of post-TP endocrine insufficiency and practical treatment strategies are systematically reviewed from the literature. Finally, an evidence-based treatment algorithm is created for the practicing pancreatic surgeon and their care team of endocrinologists to aid in managing these complex patients. A PubMed, Science Citation Index/Social Sciences Citation Index, and Cochrane Evidence-Based Medicine database search was undertaken, along with an extensive backward search of the references of published articles, to identify studies evaluating endocrine morbidity and treatment after TP and to establish an evidence-based treatment strategy. Indications for TP and the etiology of pancreatogenic diabetes are reviewed. After TP, ~80% of patients develop hypoglycemic episodes and 40% experience severe hypoglycemia, resulting in 0-8% mortality and 25-45% morbidity. Referral to a nutritionist and endocrinologist for patient education before surgery, followed by surgical reevaluation to determine whether the patient has the appropriate understanding, support, and resources preoperatively, has significantly reduced morbidity and mortality.
The use of modern recombinant long-acting insulin analogues, continuous subcutaneous insulin infusion, and glucagon rescue therapy has greatly improved management in the modern era and constitute the current standard of care. A simple immediate post-operative algorithm was constructed. Successful perioperative surgical management of total pancreatectomy and resulting pancreatogenic diabetes is critical to achieve acceptable post-operative outcomes, and we review the pertinent literature and provide a simple, evidence-based algorithm for immediate post-resection glycemic control.

  14. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. 
Risk reduction is addressed by working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and detection and responses that can be tested in VMET and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET followed by section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. 
Its details are then further presented in section IV followed by Section V presenting integration, test status, and state analysis. Finally, section VI addresses the summary and forward directions followed by the appendices presenting relevant information on terminology and documentation.

  15. Adherence to a simplified management algorithm reduces morbidity and mortality after penetrating colon injuries: a 15-year experience.

    PubMed

    Sharpe, John P; Magnotti, Louis J; Weinberg, Jordan A; Parks, Nancy A; Maish, George O; Shahan, Charles P; Fabian, Timothy C; Croce, Martin A

    2012-04-01

    Our previous experience with colon injuries suggested that operative decisions based on a defined algorithm improve outcomes. The purpose of this study was to evaluate the validity of this algorithm in the face of an increased incidence of destructive injuries observed in recent years. Consecutive patients with full-thickness penetrating colon injuries over an 8-year period were evaluated. Per algorithm, patients with nondestructive injuries underwent primary repair. Those with destructive wounds underwent resection plus anastomosis in the absence of comorbidities or large pre- or intraoperative transfusion requirements (more than 6 units packed RBCs); otherwise they were diverted. Outcomes from the current study (CS group) were compared with those from the previous study (PS group). There were 252 patients who had full-thickness penetrating colon injuries: 150 (60%) patients had nondestructive colon wounds treated with primary repair and 102 patients (40%) had destructive wounds (CS). Demographics and intraoperative transfusions were similar between CS and PS groups. Of the 102 patients with destructive injuries, 75% underwent resection plus anastomosis and 25% underwent diversion. Despite more destructive injuries managed in the CS group (41% vs 27%), abscess rate (18% vs 27%) and colon-related mortality (1% vs 5%) were lower in the CS. Suture line failure was similar in CS compared with PS (5% vs 7%). Adherence to the algorithm was >90% in the CS (similar to PS). Despite an increase in the incidence of destructive colon injuries, our management algorithm remains valid. Destructive injuries associated with pre- or intraoperative transfusion requirements of more than 6 units packed RBCs and/or significant comorbidities are best managed with diversion. By managing the majority of other destructive injuries with resection plus anastomosis, acceptably low morbidity and mortality can be achieved. Copyright © 2012 American College of Surgeons. 
Published by Elsevier Inc. All rights reserved.

  16. Electronic Health Management

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saha, Sankalita; Goebel, Kai

    2011-01-01

Accelerated aging methodologies for electrolytic components have been designed and accelerated aging experiments have been carried out. The methodology is based on imposing electrical and/or thermal overstresses via electrical power cycling in order to mimic real-world operating behavior. Data are collected in situ and offline in order to periodically characterize the devices' electrical performance as they age. The data generated through these experiments are meant to provide capability for the validation of prognostic algorithms (both model-based and data-driven). Furthermore, the data allow validation of physics-based and empirically based degradation models for this type of capacitor. A first set of models and algorithms has been designed and tested on the data.

  17. Managing the Sick Child in the Era of Declining Malaria Transmission: Development of ALMANACH, an Electronic Algorithm for Appropriate Use of Antimicrobials.

    PubMed

    Rambaud-Althaus, Clotilde; Shao, Amani Flexson; Kahama-Maro, Judith; Genton, Blaise; d'Acremont, Valérie

    2015-01-01

    To review the available knowledge on epidemiology and diagnoses of acute infections in children aged 2 to 59 months in the primary care setting and to develop an electronic algorithm for the Integrated Management of Childhood Illness that achieves optimal clinical outcomes and rational use of medicines. A structured literature review in Medline, Embase and the Cochrane Database of Systematic Reviews (CDSR) looked for available estimates of disease prevalence in outpatients aged 2-59 months, and for available evidence on i) the accuracy of clinical predictors and ii) the performance of point-of-care tests for the targeted diseases. A new algorithm for the management of childhood illness (ALMANACH) was designed based on the evidence retrieved and the results of a study on etiologies of fever in Tanzanian outpatient children. The major changes in ALMANACH compared to IMCI (2008 version) are the following: i) assessment of 10 danger signs; ii) classification of non-severe children into febrile and non-febrile illness, the latter receiving no antibiotics; iii) classification of pneumonia based on a respiratory rate threshold of 50 assessed twice for febrile children aged 12-59 months; iv) a malaria rapid diagnostic test performed for all febrile children; and, in the absence of an identified source of fever at the end of the assessment, v) a urine dipstick performed for febrile children <2 years to consider urinary tract infection; vi) classification of 'possible typhoid' for febrile children >2 years with abdominal tenderness; and lastly vii) classification of 'likely viral infection' in case of negative results. This smartphone-run algorithm based on new evidence and two point-of-care tests should improve the quality of care of children under 5 years and lead to more rational use of antimicrobials.
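    The branching logic listed as items i)-vii) above can be sketched as a classification function. This is a simplified illustration of the described decision sequence, not the published ALMANACH implementation; the function name, argument encoding and return labels are hypothetical.

```python
def almanach_classify(age_months, febrile, danger_signs, resp_rate_high_twice,
                      malaria_rdt_pos, urine_dipstick_pos, abdominal_tenderness):
    """Sketch of the ALMANACH branching described in the abstract (ages 2-59 months)."""
    if danger_signs:                       # i) any of the 10 danger signs
        return "severe illness"
    if not febrile:                        # ii) non-febrile illness: no antibiotics
        return "non-febrile illness: no antibiotics"
    if resp_rate_high_twice:               # iii) threshold of 50, assessed twice (12-59 months)
        return "pneumonia"
    if malaria_rdt_pos:                    # iv) malaria rapid diagnostic test
        return "malaria"
    # No identified source of fever at the end of the assessment:
    if age_months < 24 and urine_dipstick_pos:     # v) urine dipstick, <2 years
        return "urinary tract infection"
    if age_months >= 24 and abdominal_tenderness:  # vi) >2 years with abdominal tenderness
        return "possible typhoid"
    return "likely viral infection"                # vii) all tests negative
```

    For example, a febrile 3-year-old with no danger signs, normal respiratory rate, a negative malaria test and abdominal tenderness would be classified as 'possible typhoid'.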

  18. Managing the Sick Child in the Era of Declining Malaria Transmission: Development of ALMANACH, an Electronic Algorithm for Appropriate Use of Antimicrobials

    PubMed Central

    Rambaud-Althaus, Clotilde; Shao, Amani Flexson; Genton, Blaise; d’Acremont, Valérie

    2015-01-01

    Objective To review the available knowledge on epidemiology and diagnoses of acute infections in children aged 2 to 59 months in the primary care setting and to develop an electronic algorithm for the Integrated Management of Childhood Illness that achieves optimal clinical outcomes and rational use of medicines. Methods A structured literature review in Medline, Embase and the Cochrane Database of Systematic Reviews (CDSR) looked for available estimates of disease prevalence in outpatients aged 2-59 months, and for available evidence on i) the accuracy of clinical predictors and ii) the performance of point-of-care tests for the targeted diseases. A new algorithm for the management of childhood illness (ALMANACH) was designed based on the evidence retrieved and the results of a study on etiologies of fever in Tanzanian outpatient children. Findings The major changes in ALMANACH compared to IMCI (2008 version) are the following: i) assessment of 10 danger signs; ii) classification of non-severe children into febrile and non-febrile illness, the latter receiving no antibiotics; iii) classification of pneumonia based on a respiratory rate threshold of 50 assessed twice for febrile children aged 12-59 months; iv) a malaria rapid diagnostic test performed for all febrile children; and, in the absence of an identified source of fever at the end of the assessment, v) a urine dipstick performed for febrile children <2 years to consider urinary tract infection; vi) classification of 'possible typhoid' for febrile children >2 years with abdominal tenderness; and lastly vii) classification of 'likely viral infection' in case of negative results. Conclusion This smartphone-run algorithm based on new evidence and two point-of-care tests should improve the quality of care of children under 5 years and lead to more rational use of antimicrobials. PMID:26161753

  19. Energy management and cooperation in microgrids

    NASA Astrophysics Data System (ADS)

    Rahbar, Katayoun

    Microgrids are key components of future smart power grids, integrating distributed renewable energy generators to serve the load demand locally and efficiently. However, the random and intermittent characteristics of renewable energy generation may hinder the reliable operation of microgrids. This thesis is thus devoted to investigating new strategies for microgrids to optimally manage their energy consumption, energy storage system (ESS) and cooperation in real time so as to achieve reliable and cost-effective operation. The thesis starts with a single microgrid system. The optimal energy scheduling and ESS management policy is derived to minimize the microgrid's cost of drawing conventional energy from the main grid under both the off-line and online setups, where the renewable energy generation/load demand is assumed to be non-causally and causally known at the microgrid, respectively. The proposed online algorithm is designed based on the optimal off-line solution and works under arbitrary (even unknown) realizations of future renewable energy generation/load demand. It is therefore more practically applicable than solutions based on conventional techniques such as dynamic programming and stochastic programming, which require prior knowledge of renewable energy generation and load demand realizations/distributions. Next, for a group of microgrids that cooperate in energy management, we study efficient methods for sharing energy among them in both fully and partially cooperative scenarios, where the microgrids have common interests or are self-interested, respectively. For fully cooperative energy management, the off-line optimization problem is first formulated and optimally solved, and a distributed algorithm is proposed to minimize the total (sum) energy cost of the microgrids.
Inspired by the results obtained from the off-line optimization, efficient online algorithms are proposed for the real-time energy management, which are of low complexity and work given arbitrary realizations of renewable energy generation/load demand. On the other hand, for self-interested microgrids, the partially cooperative energy management is formulated and a distributed algorithm is proposed to optimize the energy cooperation such that energy costs of individual microgrids reduce simultaneously over the case without energy cooperation while limited information is shared among the microgrids and the central controller.
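    To make the decision structure of real-time ESS management concrete, the following is a naive greedy baseline, not the thesis's optimized online algorithm: at each step, renewable generation serves demand first, the battery covers any deficit, and the remainder is drawn from the main grid; surplus renewable energy charges the battery. All names and the single-step interface are illustrative assumptions.

```python
def ess_step(demand, renewable, soc, capacity, max_rate):
    """One time step of a greedy ESS policy.

    demand, renewable: energy demanded / generated this step
    soc: battery state of charge; capacity: battery capacity
    max_rate: maximum charge/discharge per step
    Returns (new state of charge, energy drawn from the main grid).
    """
    surplus = renewable - demand
    if surplus >= 0:
        # Surplus renewable energy: charge the battery, draw nothing from the grid
        charge = min(surplus, capacity - soc, max_rate)
        return soc + charge, 0.0
    # Deficit: discharge the battery first, then fall back to the grid
    deficit = -surplus
    discharge = min(deficit, soc, max_rate)
    return soc - discharge, deficit - discharge
```

    Such a myopic rule ignores future generation/load, which is exactly why the thesis designs its online algorithm from the structure of the optimal off-line solution instead.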

  20. Autonomous sensor manager agents (ASMA)

    NASA Astrophysics Data System (ADS)

    Osadciw, Lisa A.

    2004-04-01

    Autonomous sensor manager agents are presented as an algorithm to perform sensor management within a multisensor fusion network. The design of the hybrid ant system/particle swarm agents is described in detail with some insight into their performance. Although the algorithm is designed for the general sensor management problem, a simulation example involving two radar systems is presented. Algorithmic parameters are determined by the size of the region covered by the sensor network, the number of sensors, and the number of parameters to be selected. With straightforward modifications, this algorithm can be adapted for most sensor management problems.

  1. Design and implementation of PAVEMON: A GIS web-based pavement monitoring system based on large amounts of heterogeneous sensors data

    NASA Astrophysics Data System (ADS)

    Shahini Shamsabadi, Salar

    A web-based PAVEment MONitoring system, PAVEMON, is a GIS-oriented platform for accommodating, representing, and leveraging data from a multi-modal mobile sensor system. This sensor system consists of acoustic, optical, electromagnetic, and GPS sensors and is capable of producing as much as 1 terabyte of data per day. Multi-channel raw sensor data (microphone, accelerometer, tire pressure sensor, video) and processed results (road profile, crack density, international roughness index, micro texture depth, etc.) are outputs of this sensor system. By correlating the sensor measurements and positioning data collected in tight time synchronization, PAVEMON attaches a spatial component to all the datasets. These spatially indexed outputs are placed into an Oracle database which integrates seamlessly with PAVEMON's web-based system. The web-based system of PAVEMON consists of two major modules: 1) a GIS module for visualizing and spatially analyzing pavement condition information layers, and 2) a decision-support module for managing maintenance and repair (M&R) activities and predicting future budget needs. PAVEMON weaves together sensor data with third-party climate and traffic information from the National Oceanic and Atmospheric Administration (NOAA) and Long Term Pavement Performance (LTPP) databases for an organized, data-driven approach to pavement management activities. PAVEMON deals with heterogeneous and redundant observations by fusing them into jointly derived, higher-confidence results. A prominent example of the fusion algorithms developed within PAVEMON is a data fusion algorithm used for estimating overall pavement condition in terms of ASTM's Pavement Condition Index (PCI). PAVEMON predicts PCI by undertaking a statistical fusion approach and selecting a subset of all the sensor measurements.
Other fusion algorithms include noise-removal algorithms to remove false negatives in the sensor data in addition to fusion algorithms developed for identifying features on the road. PAVEMON offers an ideal research and monitoring platform for rapid, intelligent and comprehensive evaluation of tomorrow's transportation infrastructure based on up-to-date data from heterogeneous sensor systems.

  2. Test results of flight guidance for fuel conservative descents in a time-based metered air traffic environment. [terminal configured vehicle

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Person, L. H., Jr.

    1981-01-01

    NASA developed, implemented, and flight tested a flight management algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control. This algorithm provides a 3D path with time control (4D) for the TCV B-737 airplane to make an idle-thrust, clean-configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with consideration given to gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms are described and flight test results are presented.

  3. Prediction of insemination outcomes in Holstein dairy cattle using alternative machine learning algorithms.

    PubMed

    Shahinfar, Saleh; Page, David; Guenther, Jerry; Cabrera, Victor; Fricke, Paul; Weigel, Kent

    2014-02-01

    When making the decision about whether or not to breed a given cow, knowledge about the expected outcome would have an economic impact on profitability of the breeding program and net income of the farm. The outcome of each breeding can be affected by many management and physiological features that vary between farms and interact with each other. Hence, the ability of machine learning algorithms to accommodate complex relationships in the data and missing values for explanatory variables makes these algorithms well suited for investigation of reproduction performance in dairy cattle. The objective of this study was to develop a user-friendly and intuitive on-farm tool to help farmers make reproduction management decisions. Several different machine learning algorithms were applied to predict the insemination outcomes of individual cows based on phenotypic and genotypic data. Data from 26 dairy farms in the Alta Genetics (Watertown, WI) Advantage Progeny Testing Program were used, representing a 10-yr period from 2000 to 2010. Health, reproduction, and production data were extracted from on-farm dairy management software, and estimated breeding values were downloaded from the US Department of Agriculture Agricultural Research Service Animal Improvement Programs Laboratory (Beltsville, MD) database. The edited data set consisted of 129,245 breeding records from primiparous Holstein cows and 195,128 breeding records from multiparous Holstein cows. Each data point in the final data set included 23 and 25 explanatory variables and 1 binary outcome for primiparous and multiparous cows, respectively. The best-performing algorithm achieved classification performance of 0.756 ± 0.005 and 0.736 ± 0.005 for primiparous and multiparous cows, respectively. The naïve Bayes algorithm, Bayesian network, and decision tree algorithms showed somewhat poorer classification performance.
An information-based variable selection procedure identified herd average conception rate, incidence of ketosis, number of previous (failed) inseminations, days in milk at breeding, and mastitis as the most effective explanatory variables in predicting pregnancy outcome. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration

    NASA Astrophysics Data System (ADS)

    Keshavarz, Kasra; Alizadeh, Hossein

    2017-04-01

    Water resource models are powerful tools to support the water management decision-making process and are developed to deal with a broad range of issues including land use and climate change impact analysis, water allocation, systems design and operation, waste load control and allocation, etc. These models are divided into two categories, simulation and optimization models, and their calibration has been addressed extensively in the literature: considerable effort in recent decades has produced two main categories of auto-calibration methods, namely uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms that combine the capabilities of both types, such as SUFI-2, have been developed, this paper proposes a new auto-calibration algorithm which is capable of both finding optimal parameter values with respect to multiple objectives, like optimization-based algorithms, and providing interval estimations of parameters, like uncertainty-based algorithms. The algorithm is developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 proposes a routine to find the best point and interval estimation of parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst then has to select the point and interval estimations of parameters which are non-dominated with respect to both uncertainty measures.
    Based on the described properties of SUFI-2, two important questions arise, and answering them motivated this research: given that the final selection in SUFI-2 is based on these two measures, yet SUFI-2 has no multi-objective optimization mechanism, are the final estimations Pareto-optimal? And can systematic methods be applied to select the final estimations? To address these questions, a new auto-calibration algorithm was proposed in which the uncertainty measures are treated as two objectives, and non-dominated interval estimations of parameters are found by coupling Monte Carlo simulation with Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate the parameters of a water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous research using WEAP software in order to analyze the impacts of different water resources management strategies, including dam construction, increasing cultivation area, utilization of more efficient irrigation technologies, changing crop patterns, etc. Comparing the Pareto frontier resulting from the proposed auto-calibration algorithm with the SUFI-2 results revealed that the new algorithm leads to a better and also continuous Pareto frontier, even though it is more computationally expensive. Finally, the Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimation from the Pareto frontier.
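    The non-dominated selection over the two uncertainty measures can be sketched as a standard Pareto filter, assuming the p-factor (coverage of observations by the 95PPU) is to be maximized and the r-factor (width of the uncertainty band) minimized. This is a generic illustration of Pareto dominance, not the paper's MOPSO implementation.

```python
def dominates(a, b):
    """a and b are (p_factor, r_factor) pairs.
    a dominates b if it is at least as good in both objectives
    (higher p, lower r) and strictly better in at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(candidates):
    """Return the non-dominated candidate estimations, preserving input order."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]
```

    For instance, a candidate with p-factor 0.85 and r-factor 1.3 is dominated by one with p-factor 0.9 and r-factor 1.2 and would be discarded, whereas candidates trading coverage against band width both survive.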

  5. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring.

    PubMed

    Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de

    2017-11-05

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving power efficiency while ensuring the accuracy of the sampled data. The developed algorithm is evaluated using two distinct key parameters, dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved over three months of continuous water quality monitoring. Compared with a traditional adaptive sampling algorithm (ASA) on the same dataset, DDASA achieves roughly the same Normalized Mean Error (NME) while saving a further 5.31% of battery energy.
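    The core idea of data-driven adaptive sampling, sample faster when the monitored parameter changes quickly and slower when it is stable, can be sketched as below. This is a hypothetical rule in the spirit of DDASA, not the published formula; all names and thresholds are assumptions.

```python
def next_interval(history, base_interval, min_interval, max_interval, threshold):
    """Choose the next sampling interval from recent measurements
    (e.g. dissolved oxygen or turbidity readings).

    Halves the interval when the latest change exceeds `threshold`
    (parameter varying: sample faster), doubles it otherwise
    (parameter stable: sample slower and save battery energy)."""
    if len(history) < 2:
        return base_interval
    change = abs(history[-1] - history[-2])
    if change > threshold:
        return max(min_interval, base_interval / 2)
    return min(max_interval, base_interval * 2)
```

    Clamping to a minimum and maximum interval keeps the node responsive during events while bounding worst-case energy use during quiet periods.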

  6. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring

    PubMed Central

    Shu, Tongxin; Xia, Min; Chen, Jiahong; de Silva, Clarence

    2017-01-01

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving power efficiency while ensuring the accuracy of the sampled data. The developed algorithm is evaluated using two distinct key parameters, dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved over three months of continuous water quality monitoring. Compared with a traditional adaptive sampling algorithm (ASA) on the same dataset, DDASA achieves roughly the same Normalized Mean Error (NME) while saving a further 5.31% of battery energy. PMID:29113087

  7. Cost-effectiveness of the non-laboratory based Framingham algorithm in primary prevention of cardiovascular disease: A simulated analysis of a cohort of African American adults.

    PubMed

    Kariuki, Jacob K; Gona, Philimon; Leveille, Suzanne G; Stuart-Shor, Eileen M; Hayman, Laura L; Cromwell, Jerry

    2018-06-01

    The non-lab Framingham algorithm, which substitutes body mass index for lipids in the laboratory-based (lab-based) Framingham algorithm, has been validated among African Americans (AAs). However, its cost-effectiveness and economic tradeoffs have not been evaluated. This study examines the incremental cost-effectiveness ratio (ICER) of two cardiovascular disease (CVD) prevention programs guided by the non-lab versus the lab-based Framingham algorithm. We simulated the World Health Organization CVD prevention guidelines on a cohort of 2690 AA participants in the Atherosclerosis Risk in Communities (ARIC) cohort. Costs were estimated using Medicare fee schedules (diagnostic tests, drugs and visits), Bureau of Labor Statistics data (RN wages), and estimates for managing incident CVD events. Outcomes were defined as true positive cases detected at a data-driven treatment threshold. Both algorithms had the best balance of sensitivity/specificity at the moderate risk threshold (>10% risk). Over 12 years, 82% and 77% of 401 incident CVD events were accurately predicted via the non-lab and lab-based Framingham algorithms, respectively. There were 20 fewer false negative cases in the non-lab approach, translating into over $900,000 in savings over 12 years. The ICER was -$57,153 for every extra CVD event prevented when using the non-lab algorithm. The approach guided by the non-lab Framingham strategy dominated the lab-based approach with respect to both costs and predictive ability. Consequently, the non-lab Framingham algorithm could potentially provide a highly effective screening tool at lower cost to address the high burden of CVD, especially among AAs and in resource-constrained settings where lab tests are unavailable. Copyright © 2017 Elsevier Inc. All rights reserved.
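    The ICER reported above is the standard ratio of incremental cost to incremental effect; a minimal sketch with hypothetical numbers (not the study's cost inputs) is shown below. A negative ICER together with higher effectiveness means the new strategy is dominant, as the abstract concludes for the non-lab algorithm.

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (here, per additional CVD event correctly detected)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical illustration: a cheaper strategy that detects 20 more events
# yields a negative ICER, i.e. it dominates the comparator.
example = icer(cost_new=100_000, effect_new=320, cost_old=110_000, effect_old=300)
```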

  8. An optical water type framework for selecting and blending retrievals from bio-optical algorithms in lakes and coastal waters

    PubMed Central

    Moore, Timothy S.; Dowell, Mark D.; Bradt, Shane; Verdu, Antonio Ruiz

    2014-01-01

    Bio-optical models are based on relationships between the spectral remote sensing reflectance and the optical properties of in-water constituents. The wavelength range where this information can be exploited changes depending on the water characteristics. In low chlorophyll-a waters, the blue/green region of the spectrum is more sensitive to changes in chlorophyll-a concentration, whereas the red/NIR region becomes more important in turbid and/or eutrophic waters. In this work we present an approach to manage the shift from blue/green ratios to red/NIR-based chlorophyll-a algorithms for optically complex waters. Based on a combined in situ data set of coastal and inland waters, measures of overall algorithm uncertainty were roughly equal for two chlorophyll-a algorithms—the standard NASA OC4 algorithm based on blue/green bands and a MERIS 3-band algorithm based on red/NIR bands—with RMS errors of 0.416 and 0.437 in log chlorophyll-a units, respectively. However, it is clear that each algorithm performs better at different chlorophyll-a ranges. When a blending approach based on an optical water type (OWT) classification is used, the overall RMS error was reduced to 0.320. Bias and relative error were also reduced when evaluating the blended chlorophyll-a product compared to either of the single-algorithm products. As a demonstration for ocean color applications, the algorithm blending approach was applied to MERIS imagery over Lake Erie. We also examined the use of this approach in several coastal marine environments, and examined the long-term frequency of the OWTs by applying the classification to MODIS-Aqua imagery over Lake Erie. PMID:24839311
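    A blending scheme of this kind can be sketched as a membership-weighted average of the per-algorithm retrievals. This is a generic illustration, assuming fuzzy class memberships as weights; the paper's actual OWT definitions and weighting details are not reproduced here.

```python
def blend_chla(memberships, estimates):
    """Blend chlorophyll-a retrievals from several algorithms.

    memberships: optical water type -> membership weight for this pixel
    estimates: optical water type -> chl-a retrieval from the algorithm
               suited to that water type (e.g. blue/green vs red/NIR based)
    Returns the membership-weighted average retrieval."""
    total = sum(memberships.values())
    return sum(memberships[owt] * estimates[owt] for owt in memberships) / total
```

    For a pixel that is mostly "clear" with a small turbid-water membership, the blended value stays close to the blue/green retrieval while smoothly incorporating the red/NIR retrieval as the water type shifts.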

  9. Reducing patient mortality, length of stay and readmissions through machine learning-based sepsis prediction in the emergency department, intensive care unit and hospital floor units

    PubMed Central

    McCoy, Andrea

    2017-01-01

    Introduction Sepsis management is a challenge for hospitals nationwide, as severe sepsis carries high mortality rates and costs the US healthcare system billions of dollars each year. It has been shown that early intervention for patients with severe sepsis and septic shock is associated with higher rates of survival. The Cape Regional Medical Center (CRMC) aimed to improve sepsis-related patient outcomes through a revised sepsis management approach. Methods In collaboration with Dascena, CRMC formed a quality improvement team to implement a machine learning-based sepsis prediction algorithm to identify patients with sepsis earlier. Previously, CRMC assessed all patients for sepsis using twice-daily systemic inflammatory response syndrome screenings, but desired improvements. The quality improvement team worked to implement a machine learning-based algorithm, collect and incorporate feedback, and tailor the system to current hospital workflow. Results Relative to the pre-implementation period, the post-implementation period sepsis-related in-hospital mortality rate decreased by 60.24%, sepsis-related hospital length of stay decreased by 9.55% and sepsis-related 30-day readmission rate decreased by 50.14%. Conclusion The machine learning-based sepsis prediction algorithm improved patient outcomes at CRMC. PMID:29450295

  10. Dynamic virtual machine allocation policy in cloud computing complying with service level agreement using CloudSim

    NASA Astrophysics Data System (ADS)

    Aneri, Parikh; Sumathy, S.

    2017-11-01

    Cloud computing provides services over the internet, delivering application resources and data to users on demand. It is built on a consumer-provider model: the cloud provider supplies resources which consumers access in order to build applications according to their needs. A cloud data center is a pool of shared resources for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and applications are free to choose their own configuration. On one hand there is a huge number of resources, and on the other hand the system has to serve a huge number of requests effectively. Therefore, the resource allocation policy and scheduling policy play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm, which provides dynamic load balancing together with a monitor component. The monitor component helps to increase cloud resource utilization by observing the algorithm's state and altering it using artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
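    The Hungarian algorithm solves the minimal-cost assignment problem, here, assigning requests to virtual machines. For illustration only, the sketch below finds the same optimum by brute force over permutations (O(n!), usable only for tiny instances); a real deployment would use a proper O(n³) Hungarian implementation. The cost matrix and names are hypothetical.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustive minimal-cost assignment of requests to virtual machines.

    cost[i][j]: cost of placing request i on VM j (square matrix).
    Returns (minimal total cost, tuple mapping request index -> VM index).
    Produces the same optimum the Hungarian algorithm would, far more slowly."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best is None or total < best[0]:
            best = (total, perm)
    return best
```

    For example, with costs [[4, 1, 3], [2, 0, 5], [3, 2, 2]] the optimal assignment places request 0 on VM 1, request 1 on VM 0 and request 2 on VM 2, for a total cost of 5.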

  11. Towards an unsupervised device for the diagnosis of childhood pneumonia in low resource settings: automatic segmentation of respiratory sounds.

    PubMed

    Sola, J; Braun, F; Muntane, E; Verjus, C; Bertschi, M; Hugon, F; Manzano, S; Benissa, M; Gervaix, A

    2016-08-01

    Pneumonia remains the worldwide leading cause of death in children under the age of five, with 1.4 million deaths every year. Unfortunately, in low-resource settings, very limited diagnostic support aids are available to point-of-care practitioners. The current UNICEF/WHO case management algorithm relies on the use of a chronometer to manually count breath rates in pediatric patients: there is thus a major need for more sophisticated tools to diagnose pneumonia that increase the sensitivity and specificity of breath-rate-based algorithms. These tools should be low cost and adapted to practitioners with limited training. In this work, a novel concept of an unsupervised tool for the diagnosis of childhood pneumonia is presented. The concept relies on the automated analysis of respiratory sounds as recorded by a point-of-care electronic stethoscope. By identifying the presence of auscultation sounds at different chest locations, this diagnostic tool is intended to estimate a pneumonia likelihood score. After presenting the overall architecture of an algorithm to estimate pneumonia scores, the importance of a robust unsupervised method to identify the inspiratory and expiratory phases of a respiratory cycle is highlighted. Based on data from an ongoing study involving pediatric pneumonia patients, a first algorithm to segment respiratory sounds is suggested. The unsupervised algorithm relies on a Mel-frequency filter bank, a two-step Gaussian Mixture Model (GMM) description of the data, and a final Hidden Markov Model (HMM) interpretation of inspiratory-expiratory sequences. Finally, illustrative results on the first recruited patients are provided. The presented algorithm opens the door to a new family of unsupervised respiratory sound analyzers that could improve future versions of case management algorithms for the diagnosis of pneumonia in low-resource settings.

  12. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of its generation and analysis. In particular, the increase in the DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may exceed available storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers

    PubMed Central

    Shin, Kwang Cheol; Park, Seung Bo; Jo, Geun Sik

    2009-01-01

    In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, often installed on carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial settings. Because Mobile RFID readers are continuously moving, they can interfere with each other when attempting to read tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without manual parameters, adopting a dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments in a simulated Mobile RFID reader environment, we show that the proposed method improves the number of successful transmissions by about 228% on average compared with Colorwave, a representative TDMA based anti-collision algorithm. PMID:22399942

  15. An algorithmic approach for the treatment of severe uncontrolled asthma

    PubMed Central

    Zervas, Eleftherios; Samitas, Konstantinos; Papaioannou, Andriana I.; Bakakos, Petros; Loukides, Stelios; Gaga, Mina

    2018-01-01

    A small subgroup of patients with asthma suffers from severe disease that is either partially controlled or uncontrolled despite intensive, guideline-based treatment. These patients have significantly impaired quality of life and although they constitute <5% of all asthma patients, they are responsible for more than half of asthma-related healthcare costs. Here, we review a definition for severe asthma and present all therapeutic options currently available for these severe asthma patients. Moreover, we suggest a specific algorithmic treatment approach for the management of severe, difficult-to-treat asthma based on specific phenotype characteristics and biomarkers. The diagnosis and management of severe asthma requires specialised experience, time and effort to comprehend the needs and expectations of each individual patient and incorporate those as well as his/her specific phenotype characteristics into the management planning. Although some new treatment options are currently available for these patients, there is still a need for further research into severe asthma and yet more treatment options. PMID:29531957

  16. Are prehospital airway management resources compatible with difficult airway algorithms? A nationwide cross-sectional study of helicopter emergency medical services in Japan.

    PubMed

    Ono, Yuko; Shinohara, Kazuaki; Goto, Aya; Yano, Tetsuhiro; Sato, Lubna; Miyazaki, Hiroyuki; Shimada, Jiro; Tase, Choichiro

    2016-04-01

    Immediate access to the equipment required for difficult airway management (DAM) is vital. However, in Japan, data are scarce regarding the availability of DAM resources in prehospital settings. The purpose of this study was to determine whether Japanese helicopter emergency medical services (HEMS) are adequately equipped to comply with the DAM algorithms of Japanese and American professional anesthesiology societies. This nationwide cross-sectional study was conducted in May 2015. Base hospitals of HEMS were mailed a questionnaire about their airway management equipment and back-up personnel. Outcome measures were (1) call for help, (2) supraglottic airway device (SGA) insertion, (3) verification of tube placement using capnometry, and (4) the establishment of surgical airways, all of which have been endorsed in various airway management guidelines. The criteria defining feasibility were the availability of (1) more than one physician, (2) an SGA, (3) capnometry, and (4) a surgical airway device in the prehospital setting. Of the 45 HEMS base hospitals questioned, 42 (93.3%) returned completed questionnaires. A surgical airway could be established by all HEMS. However, in the prehospital setting, back-up assistance was available in 14.3%, an SGA in 16.7%, and capnometry in 66.7%. No HEMS was capable of all four steps. In Japan, compliance with standard airway management algorithms in prehospital settings remains difficult because of the limited availability of alternative ventilation equipment and back-up personnel. Prehospital health care providers need to consider the risks and benefits of performing endotracheal intubation in environments not conducive to the success of this procedure.

  17. Design and implementation of intelligent electronic warfare decision making algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Hsin-Hsien; Chen, Chang-Kuo; Hsueh, Chi-Shun

    2017-05-01

    The density of electromagnetic signals and the requirement for timely responses have grown rapidly in modern electronic warfare. Although jammers are a limited resource, sound tactical decisions make it possible to achieve the best electronic warfare efficiency. This paper proposes an intelligent electronic warfare decision support system. In this work, we develop a novel hybrid algorithm, Digital Pheromone Particle Swarm Optimization, based on Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and the Shuffled Frog Leaping Algorithm (SFLA). We use PSO to solve the problem and combine it with the pheromone concept from ACO to accumulate more useful information in the spatial solving process and speed up finding the optimal solution. The proposed algorithm finds the optimal solution in reasonable computation time by using the matrix conversion method of SFLA. The results indicated that jammer allocation was more effective. The system based on the hybrid algorithm provides electronic warfare commanders with critical information to assist them in effectively managing the complex electromagnetic battlefield.
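The hybrid builds on the canonical PSO velocity and position update; a minimal plain-PSO loop (a sketch only — the paper's contribution layers ACO-style digital pheromones and SFLA matrix shuffling on top of this basic loop, which is omitted here) looks like:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Plain PSO minimizing f over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal bests
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g] # global best
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest_f
```

In the paper's hybrid, pheromone deposits accumulated on promising regions would bias these velocity updates, much as ant trails bias path choice in ACO.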

  18. Principles of managing Vancouver type B periprosthetic fractures around cemented polished tapered femoral stems.

    PubMed

    Quah, Conal; Porteous, Matthew; Stephen, Arthur

    2017-05-01

    The management of periprosthetic fractures around total hip replacements is a complex and challenging problem. Getting it right first time is an important factor in reducing the morbidity, mortality and financial burden associated with these injuries. Understanding and applying the basic principles of fracture management helps increase the chance of successful treatment. Based on these principles, we suggest a treatment algorithm for managing periprosthetic fractures around polished tapered femoral stems.

  19. Committee-Based Active Learning for Surrogate-Assisted Particle Swarm Optimization of Expensive Problems.

    PubMed

    Wang, Handing; Jin, Yaochu; Doherty, John

    2017-09-01

    Function evaluations (FEs) of many real-world optimization problems are time or resource consuming, posing a serious challenge to the application of evolutionary algorithms (EAs) to solve these problems. To address this challenge, research on surrogate-assisted EAs has attracted increasing attention from both academia and industry over the past decades. However, most existing surrogate-assisted EAs (SAEAs) either still require thousands of expensive FEs to obtain acceptable solutions, or are only applied to very low-dimensional problems. In this paper, a novel surrogate-assisted particle swarm optimization (PSO) inspired by committee-based active learning (CAL) is proposed. In the proposed algorithm, a global model management strategy inspired by CAL is developed, which searches for the best and most uncertain solutions according to a surrogate ensemble using a PSO algorithm and evaluates these solutions using the expensive objective function. In addition, a local surrogate model is built around the best solution obtained so far. Then, a PSO algorithm searches on the local surrogate to find its optimum and evaluates it. The evolutionary search using the global model management strategy switches to the local search once no further improvement can be observed, and vice versa. This iterative search process continues until the computational budget is exhausted. Experimental results comparing the proposed algorithm with a few state-of-the-art SAEAs on benchmark problems with up to 30 decision variables as well as an airfoil design problem demonstrate that the proposed algorithm is able to achieve better or competitive solutions with a limited budget of hundreds of exact FEs.
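The committee-based "most uncertain" query can be sketched as ensemble disagreement: the variance of the surrogate members' predictions at a candidate point (a minimal sketch of the CAL idea only; the paper's ensemble construction, PSO search, and global/local switching are far richer):

```python
import statistics

def ensemble_uncertainty(models, x):
    """Committee disagreement: population variance of the surrogate
    ensemble's predictions at candidate x."""
    return statistics.pvariance([m(x) for m in models])

def most_uncertain(models, candidates):
    """Pick the candidate the committee disagrees on most -- the point
    worth spending an expensive exact evaluation on."""
    return max(candidates, key=lambda x: ensemble_uncertainty(models, x))
```

Evaluating the most uncertain point (alongside the predicted best) both improves the surrogate where it is weakest and guards against the ensemble's shared blind spots.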

  20. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
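Benchmarking frameworks of this kind score each DA against the set of injected faults; two of the simplest such metrics, detection rate and isolation precision over sets of component names, can be sketched as follows (the metric names and set-based formulation are illustrative, not the ADAPT framework's exact definitions):

```python
def diagnosis_metrics(injected, diagnosed):
    """Score one diagnostic run: detection rate = share of injected faults
    the DA reported; precision = share of its reports that were real."""
    true_positives = len(injected & diagnosed)
    detection = true_positives / len(injected) if injected else 1.0
    precision = true_positives / len(diagnosed) if diagnosed else 1.0
    return detection, precision
```

Aggregating such per-run scores across a common fault catalog is what makes results from ten different DAs directly comparable on the same testbed.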

  1. Global Sensor Management: Military Asset Allocation

    DTIC Science & Technology

    2009-10-06

    solution (referred to as moves). A similar approach has been suggested by Zweben et al. (1993), who use a local search base metaheuristic , specifically...trapped in a local optimum. Hansen and Mladenovic (1998) describe the concept of variable neighborhood local search algorithms , and describe an...Mataric and G.S. Sukhatme (2002). “An incremental deployment algorithm for mobile robot teams,” Proceedings of the 2002 IEEE/RSJ Intl. Conference on

  2. Using laser altimetry-based segmentation to refine automated tree identification in managed forests of the Black Hills, South Dakota

    Treesearch

    Eric Rowell; Carl Selelstad; Lee Vierling; Lloyd Queen; Wayne Sheppard

    2006-01-01

    The success of a local maximum (LM) tree detection algorithm for detecting individual trees from lidar data depends on stand conditions that are often highly variable. A laser height variance and percent canopy cover (PCC) classification is used to segment the landscape by stand condition prior to stem detection. We test the performance of the LM algorithm using canopy...

  3. Computer algorithms and applications used to assist the evaluation and treatment of adolescent idiopathic scoliosis: a review of published articles 2000-2009.

    PubMed

    Phan, Philippe; Mezghani, Neila; Aubin, Carl-Éric; de Guise, Jacques A; Labelle, Hubert

    2011-07-01

    Adolescent idiopathic scoliosis (AIS) is a complex spinal deformity whose assessment and treatment present many challenges. Computer applications have been developed to assist clinicians. A literature review on computer applications used in AIS evaluation and treatment has been undertaken. The algorithms used, their accuracy and their clinical usability were analyzed. Computer applications have been used to create new classifications for AIS based on 2D and 3D features, assess scoliosis severity or risk of progression, and assist bracing and surgical treatment. It was found that classification accuracy could be improved using computer algorithms, that AIS patient follow-up and screening could be performed using surface topography (thereby limiting radiation exposure), and that bracing and surgical treatment could be optimized using simulations. Yet few computer applications are routinely used in clinics. With the development of 3D imaging and databases, huge amounts of clinical and geometrical data need to be taken into consideration when researching and managing AIS. Computer applications based on advanced algorithms will be able to handle tasks that could not otherwise be done, which could improve the management of AIS patients. Clinically oriented applications, and evidence that they can improve current care, will be required for their integration into the clinical setting.

  4. Symbolic discrete event system specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.; Chi, Sungdo

    1992-01-01

    Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.

  5. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  7. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). 
For addressing fault management (FM) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software compounded with potential human errors throughout the development and regression testing lifecycle. Risk reduction is addressed by the M&FM group but in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. 
In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses to be tested in VMET to ensure reliable failure detection, and confirm responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - the ARINC 653-partitioned operating system, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by FSW. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure their effectiveness and performance in the exterior FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in Section IV, followed by Section V, presenting integration, test status, and state analysis.
Finally, section VI addresses the summary and forward directions followed by the appendices presenting relevant information on terminology and documentation.

  8. The application of Firefly algorithm in an Adaptive Emergency Evacuation Centre Management (AEECM) for dynamic relocation of flood victims

    NASA Astrophysics Data System (ADS)

    ChePa, Noraziah; Hashim, Nor Laily; Yusof, Yuhanis; Hussain, Azham

    2016-08-01

    A flood evacuation centre is a temporary location or area to which people are relocated from a disaster, particularly flood, as a rescue or precautionary measure. Gazetted evacuation centres are normally located in secure places with little chance of being inundated by flood. However, due to extreme flooding, several evacuation centres in Kelantan were unexpectedly inundated. Currently, no study has proposed a decision support aid for reallocating the victims and resources of an evacuation centre when the situation worsens. Therefore, this study proposes a decision aid model to be utilized in realizing an adaptive emergency evacuation centre management system. The study comprises two main phases: development of the algorithm and models, and development of a web-based and mobile app. The proposed model uses the Firefly multi-objective optimization algorithm to create an optimal schedule for the relocation of victims and resources of an evacuation centre. The proposed decision aid model and the adaptive system can be applied to support the National Security Council's response mechanisms for handling level II (state-level) disaster management, especially in providing better management of flood evacuation centres.
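The firefly metaheuristic the model relies on moves dimmer (worse) solutions toward brighter (better) ones with distance-attenuated attractiveness; a minimal single-objective sketch (the paper uses a multi-objective variant over relocation schedules, not this toy continuous form) is:

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=100, seed=0):
    """Minimal firefly algorithm: each firefly is pulled toward every
    brighter one, with attraction decaying with squared distance, plus
    a small random jitter for exploration."""
    rng = random.Random(seed)
    beta0, gamma, alpha = 1.0, 1.0, 0.1   # base attraction, decay, jitter
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    cost = [f(p) for p in x]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:     # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(x[i], x[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        x[i][d] += (beta * (x[j][d] - x[i][d])
                                    + alpha * (rng.random() - 0.5))
                    cost[i] = f(x[i])
    return min(cost)
```

A multi-objective version, as in the paper, would replace the single cost comparison with a dominance check over objectives such as travel distance and centre capacity.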

  9. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
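A minimal continuous harmony search loop (a sketch of the metaheuristic only — the paper couples it to the EPANET 2.0 hydraulic simulator with constraint-violation penalties in the objective, which is omitted here) can be written as:

```python
import random

def harmony_search(f, dim, hms=10, iters=300, hmcr=0.9, par=0.3, bw=0.1, seed=0):
    """Minimal harmony search: each new 'harmony' takes every coordinate
    from memory with rate hmcr (optionally pitch-adjusted with rate par,
    bandwidth bw) or at random; it replaces the worst stored harmony
    whenever it improves on it."""
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    costs = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # memory consideration
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:         # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                              # random improvisation
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        c = f(new)
        worst = max(range(hms), key=lambda i: costs[i])
        if c < costs[worst]:
            memory[worst], costs[worst] = new, c
    return min(costs)
```

In the valve-setting application, each coordinate would be one pressure reducing valve's setting, and `f` would be leakage plus penalties returned from a hydraulic simulation rather than an analytic function.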

  10. A Cross-Layer User Centric Vertical Handover Decision Approach Based on MIH Local Triggers

    NASA Astrophysics Data System (ADS)

    Rehan, Maaz; Yousaf, Muhammad; Qayyum, Amir; Malik, Shahzad

    Vertical handover decision algorithms that are based on user preferences and coupled with Media Independent Handover (MIH) local triggers have not been explored much in the literature. We have developed a comprehensive cross-layer solution, called the Vertical Handover Decision (VHOD) approach, which consists of three parts: a mechanism for collecting and storing user preferences, the Vertical Handover Decision (VHOD) algorithm, and the MIH Function (MIHF). The MIHF triggers the VHOD algorithm, which operates on user preferences to issue handover commands to the mobility management protocol. The VHOD algorithm is an MIH User and therefore needs to subscribe to events and configure thresholds for receiving triggers from the MIHF. In this regard, we have performed experiments in WLAN to suggest thresholds for the Link Going Down trigger. We have also critically evaluated the handover decision process, proposed a just-in-time interface activation technique, compared our proposed approach with prominent user-centric approaches, and analyzed our approach from different aspects.
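At its core, a user-preference handover decision of this kind reduces to a weighted utility over normalized link attributes; the attribute names and weights below are illustrative assumptions, not the paper's parameters:

```python
def handover_score(network, prefs):
    """Weighted utility of one candidate network; attributes are assumed
    normalized to [0, 1], larger meaning better."""
    return sum(weight * network[attr] for attr, weight in prefs.items())

def choose_network(candidates, prefs):
    """Hand over to the candidate with the highest utility."""
    return max(candidates, key=lambda name: handover_score(candidates[name], prefs))
```

In an MIH setting, an event such as Link Going Down would trigger this evaluation, and the winner would be passed to the mobility management protocol as the handover target.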

  11. GOClonto: an ontological clustering approach for conceptualizing PubMed abstracts.

    PubMed

    Zheng, Hai-Tao; Borchert, Charles; Kim, Hong-Gee

    2010-02-01

    Concurrent with progress in the biomedical sciences, an overwhelming amount of textual knowledge is accumulating in the biomedical literature. PubMed is the most comprehensive database collecting and managing biomedical literature. To help researchers easily understand collections of PubMed abstracts, numerous clustering methods have been proposed to group similar abstracts based on their shared features. However, most of these methods do not explore the semantic relationships among groupings of documents, which could help better illuminate the groupings of PubMed abstracts. To address this issue, we proposed an ontological clustering method called GOClonto for conceptualizing PubMed abstracts. GOClonto uses latent semantic analysis (LSA) and the gene ontology (GO) to identify key gene-related concepts and their relationships, as well as to allocate PubMed abstracts based on these key gene-related concepts. Based on two PubMed abstract collections, the experimental results show that GOClonto is able to identify key gene-related concepts and outperforms the STC (suffix tree clustering) algorithm, the Lingo algorithm, the Fuzzy Ants algorithm, and the clustering-based TRS (tolerance rough set) algorithm. Moreover, the two ontologies generated by GOClonto show significant informative conceptual structures.
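Grouping abstracts by shared features rests on a document-similarity measure; a deliberately simplified bag-of-words sketch (plain cosine similarity with greedy grouping — GOClonto itself works in an LSA-reduced space with Gene Ontology concepts, not on raw term counts) is:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_abstracts(docs, threshold=0.3):
    """Greedy single-pass clustering: join the first cluster whose
    representative document is similar enough, else start a new one."""
    vecs = [Counter(d.lower().split()) for d in docs]
    clusters = []
    for i, v in enumerate(vecs):
        for c in clusters:
            if cosine(v, vecs[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

Replacing raw term vectors with LSA-projected vectors, and labeling clusters with GO concepts, is what turns this kind of grouping into the ontological clustering the paper describes.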

  12. HRSSA - Efficient hybrid stochastic simulation for spatially homogeneous biochemical reaction networks

    NASA Astrophysics Data System (ADS)

    Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong

    2016-07-01

    This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating the performance and accuracy of HRSSA against other state-of-the-art algorithms.
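The exact simulation that RSSA and HRSSA accelerate draws an exponential waiting time from the total propensity and picks the next reaction in proportion to its propensity; a minimal Gillespie direct-method step (a generic sketch of the classic algorithm, not RSSA's rejection-based variant with propensity bounds) is:

```python
import math
import random

def ssa_step(state, reactions, t, rng):
    """One direct-method Gillespie step. `reactions` is a list of
    (propensity_fn, update_fn) pairs; returns (new_state, new_time),
    or None once every propensity is zero."""
    props = [p(state) for p, _ in reactions]
    total = sum(props)
    if total <= 0:
        return None
    t += -math.log(1.0 - rng.random()) / total  # exponential waiting time
    pick = rng.random() * total                 # reaction chosen ~ propensity
    acc = 0.0
    for a, (_, update) in zip(props, reactions):
        acc += a
        if pick < acc:
            return update(state), t
    return reactions[-1][1](state), t           # guard against rounding
```

For a single decay reaction A -> 0 with propensity 0.5·A, repeatedly calling `ssa_step` drives the count to zero through stochastic jumps; RSSA's propensity bounds avoid recomputing `props` at every such step, which is where its speedup comes from.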

  13. Clinically oriented device programming in bradycardia patients: part 2 (atrioventricular blocks and neurally mediated syncope). Proposals from AIAC (Italian Association of Arrhythmology and Cardiac Pacing).

    PubMed

    Palmisano, Pietro; Ziacchi, Matteo; Biffi, Mauro; Ricci, Renato P; Landolina, Maurizio; Zoni-Berisso, Massimo; Occhetta, Eraldo; Maglia, Giampiero; Botto, Gianluca; Padeletti, Luigi; Boriani, Giuseppe

    2018-04-01

    The purpose of this two-part consensus document is to provide specific suggestions (based on an extensive literature review) on appropriate pacemaker settings in relation to patients' clinical features. In part 2, criteria for pacemaker choice and programming in atrioventricular blocks and neurally mediated syncope are proposed. Atrioventricular blocks can be paroxysmal or persistent, isolated or associated with sinus node disease. Neurally mediated syncope can be related to carotid sinus syndrome or cardioinhibitory vasovagal syncope. In sinus rhythm with persistent atrioventricular block, we considered appropriate the activation of mode-switch algorithms and algorithms for auto-adaptive management of the ventricular pacing output. If the atrioventricular block is paroxysmal, in addition to the algorithms mentioned above, algorithms to maximize intrinsic atrioventricular conduction should be activated. When sinus node disease is associated with atrioventricular block, the activation of the rate-responsive function in patients with chronotropic incompetence is appropriate. In permanent atrial fibrillation with atrioventricular block, algorithms for auto-adaptive management of the ventricular pacing output should be activated. If the atrioventricular block is persistent, the activation of the rate-responsive function is appropriate. In carotid sinus syndrome, adequate rate hysteresis should be programmed. In vasovagal syncope, specialized sensing and pacing algorithms designed for reflex syncope prevention should be activated.

  14. Interactive algorithms for teaching and learning acute medicine in the network of medical faculties MEFANET.

    PubMed

    Schwarz, Daniel; Štourač, Petr; Komenda, Martin; Harazim, Hana; Kosinová, Martina; Gregor, Jakub; Hůlek, Richard; Smékalová, Olga; Křikava, Ivo; Štoudek, Roman; Dušek, Ladislav

    2013-07-08

    Medical Faculties Network (MEFANET) has established itself as the authority for setting standards for medical educators in the Czech Republic and Slovakia, 2 independent countries with similar languages that once comprised a federation and that still retain the same curricular structure for medical education. One of the basic goals of the network is to advance medical teaching and learning with the use of modern information and communication technologies. We present the education portal AKUTNE.CZ as an important part of the MEFANET's content. Our focus is primarily on simulation-based tools for teaching and learning acute medicine issues. Three fundamental elements of the MEFANET e-publishing system are described: (1) medical disciplines linker, (2) authentication/authorization framework, and (3) multidimensional quality assessment. A new set of tools for technology-enhanced learning has been introduced recently: Sandbox (works in progress), WikiLectures (collaborative content authoring), Moodle-MEFANET (central learning management system), and Serious Games (virtual casuistics and interactive algorithms). The latest development in MEFANET is designed for indexing metadata about simulation-based learning objects, also known as electronic virtual patients or virtual clinical cases. The simulations assume the form of interactive algorithms for teaching and learning acute medicine. An anonymous questionnaire of 10 items was used to explore students' attitudes and interests in using the interactive algorithms as part of their medical or health care studies. Data collection was conducted over 10 days in February 2013. In total, 25 interactive algorithms in the Czech and English languages have been developed and published on the AKUTNE.CZ education portal to allow the users to test and improve their knowledge and skills in the field of acute medicine. In the feedback survey, 62 of the 460 students addressed (13.5%) completed the online questionnaire.
Positive attitudes toward the interactive algorithms outnumbered negative trends. The peer-reviewed algorithms were used for conducting problem-based learning sessions in general medicine (first aid, anesthesiology and pain management, emergency medicine) and in nursing (emergency medicine for midwives, obstetric analgesia, and anesthesia for midwifes). The feedback from the survey suggests that the students found the interactive algorithms as effective learning tools, facilitating enhanced knowledge in the field of acute medicine. The interactive algorithms, as a software platform, are open to academic use worldwide. The existing algorithms, in the form of simulation-based learning objects, can be incorporated into any educational website (subject to the approval of the authors).

  15. Interactive Algorithms for Teaching and Learning Acute Medicine in the Network of Medical Faculties MEFANET

    PubMed Central

    Štourač, Petr; Komenda, Martin; Harazim, Hana; Kosinová, Martina; Gregor, Jakub; Hůlek, Richard; Smékalová, Olga; Křikava, Ivo; Štoudek, Roman; Dušek, Ladislav

    2013-01-01

    Background Medical Faculties Network (MEFANET) has established itself as the authority for setting standards for medical educators in the Czech Republic and Slovakia, 2 independent countries with similar languages that once comprised a federation and that still retain the same curricular structure for medical education. One of the basic goals of the network is to advance medical teaching and learning with the use of modern information and communication technologies. Objective We present the education portal AKUTNE.CZ as an important part of the MEFANET's content. Our focus is primarily on simulation-based tools for teaching and learning acute medicine issues. Methods Three fundamental elements of the MEFANET e-publishing system are described: (1) medical disciplines linker, (2) authentication/authorization framework, and (3) multidimensional quality assessment. A new set of tools for technology-enhanced learning has recently been introduced: Sandbox (works in progress), WikiLectures (collaborative content authoring), Moodle-MEFANET (central learning management system), and Serious Games (virtual casuistics and interactive algorithms). The latest development in MEFANET is designed for indexing metadata about simulation-based learning objects, also known as electronic virtual patients or virtual clinical cases. The simulations take the form of interactive algorithms for teaching and learning acute medicine. An anonymous 10-item questionnaire was used to explore students' attitudes toward and interest in using the interactive algorithms as part of their medical or health care studies. Data collection was conducted over 10 days in February 2013. Results In total, 25 interactive algorithms in the Czech and English languages have been developed and published on the AKUTNE.CZ education portal to allow users to test and improve their knowledge and skills in the field of acute medicine. In the feedback survey, 62 of the 460 students addressed (13.5%) completed the online questionnaire. Positive attitudes toward the interactive algorithms outnumbered negative trends. Conclusions The peer-reviewed algorithms were used for conducting problem-based learning sessions in general medicine (first aid, anesthesiology and pain management, emergency medicine) and in nursing (emergency medicine for midwives, obstetric analgesia, and anesthesia for midwives). The survey feedback suggests that the students found the interactive algorithms to be effective learning tools, facilitating enhanced knowledge in the field of acute medicine. The interactive algorithms, as a software platform, are open to academic use worldwide. The existing algorithms, in the form of simulation-based learning objects, can be incorporated into any educational website (subject to the approval of the authors). PMID:23835586

  16. Case finding with incomplete administrative data: observations on playing with less than a full deck.

    PubMed

    Holmes, Ann M; Ackermann, Ronald T; Katz, Barry P; Downs, Stephen M; Inui, Thomas S

    2010-12-01

    Capacity constraints and efficiency considerations require that disease management programs identify the patients most likely to benefit from intervention. Predictive modeling with available administrative data has been used as a strategy to match patients with appropriate interventions. Administrative data, however, can be plagued by problems of incompleteness and delays in processing. In this article, we examine how these problems affect the usefulness of administrative data for identifying suitable candidates for disease management, and we evaluate various proposed solutions. We build prospective models using regression analysis and evaluate the resulting stratification algorithms using R² statistics, areas under receiver operating characteristic curves, and cost concentration ratios. We find that delays in the receipt of data reduce the effectiveness of the stratification algorithm, but the degree of compromise depends on what proportion of the population is targeted for intervention. Surprisingly, we find that supplementing partial data with a longer panel of more outdated data produces algorithms that are inferior to algorithms based on a shorter window of more recent data. Demographic data add little to algorithms that include prior claims data, and are an inadequate substitute when claims data are unavailable. Supplementing demographic data with additional information on self-reported health status improves stratification performance only slightly, and only when disease management is targeted to the highest-risk patients. We conclude that the extra costs associated with surveying patients for health status information or retrieving older claims data cannot be justified, given the lack of evidence that either improves the effectiveness of the stratification algorithm.
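
As a concrete illustration of the evaluation metrics named in the abstract, the sketch below computes an area under the receiver operating characteristic curve and a cost concentration ratio for a risk score. It is a minimal stand-in, not the article's regression models; the function names and toy inputs are our own.

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise rank comparison.

    Probability that a randomly chosen positive case is scored
    higher than a randomly chosen negative case (ties count 0.5).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative cases")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def cost_concentration(scores, costs, top_fraction=0.1):
    """Share of total cost incurred by the top-scored fraction of patients."""
    ranked = sorted(zip(scores, costs), reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return sum(c for _, c in ranked[:k]) / sum(c for _, c in ranked)
```

A stratification algorithm that ranks high-cost patients near the top yields a high AUC and a cost concentration ratio close to 1 for small top fractions.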

  17. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT: EVALUATION OF THE XP-SWMM STORMWATER WASTEWATER MANAGEMENT MODEL, VERSION 8.2, 2000, FROM XP SOFTWARE, INC.

    EPA Science Inventory

    XP-SWMM is a commercial software package used throughout the United States and around the world for simulation of storm, sanitary and combined sewer systems. It was designed based on the EPA Storm Water Management Model (EPA SWMM), but has enhancements and additional algorithms f...

  18. Management of anaphylaxis in an austere or operational environment.

    PubMed

    Ellis, B Craig; Brown, Simon G A

    2014-01-01

    We present a case report of a Special Operations Soldier who developed anaphylaxis as a consequence of a bee sting, resulting in compromise of the operation. We review the current literature as it relates to the pathophysiology of the disease process, its diagnosis, and its management. An evidence-based field treatment algorithm is suggested.

  19. Artificial Intelligence-Based Models for the Optimal and Sustainable Use of Groundwater in Coastal Aquifers

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Datta, Bithin

    2011-07-01

    Overexploitation of coastal aquifers results in saltwater intrusion. Once saltwater intrusion occurs, remediating the contaminated aquifers involves huge costs and long-term remediation measures. Hence, it is important to have strategies for the sustainable use of coastal aquifers. This study develops a methodology for the optimal management of aquifers prone to saltwater intrusion. A linked simulation-optimization management strategy is developed. The methodology uses genetic-programming-based models to simulate the aquifer processes; these are then linked to a multi-objective genetic algorithm to obtain optimal management strategies in terms of groundwater extraction from potential well locations in the aquifer.
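
The linked simulation-optimization idea can be illustrated with a toy sketch: a cheap surrogate stands in for the aquifer simulation, and a simple elitist genetic algorithm searches for extraction rates that maximize total pumping while penalizing predicted salinity above a limit. The paper uses a genetic-programming surrogate and a multi-objective GA; the scalarized fitness, well weights, and limits below are illustrative assumptions.

```python
import random

def surrogate_salinity(q):
    # Hypothetical stand-in for the genetic-programming surrogate:
    # salinity rises with total extraction, weighted toward coastal wells.
    weights = [1.0, 1.5, 2.0]           # wells closer to the coast weigh more
    return sum(w * qi for w, qi in zip(weights, q)) / 100.0

def fitness(q, limit=1.0, penalty=1e3):
    # Reward total extraction; heavily penalize predicted intrusion.
    s = surrogate_salinity(q)
    return sum(q) - penalty * max(0.0, s - limit)

def optimize(pop_size=40, gens=60, q_max=40.0, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, q_max) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 + rng.gauss(0, 1.0) for x, y in zip(a, b)]
            children.append([min(q_max, max(0.0, x)) for x in child])
        pop = parents + children
    return max(pop, key=fitness)
```

The returned extraction schedule pumps as much as possible while keeping the surrogate's predicted salinity near the allowed limit.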

  20. High-performance sparse matrix-matrix products on Intel KNL and multicore architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagasaka, Y; Matsuoka, S; Azad, A

    Sparse matrix-matrix multiplication (SpGEMM) is a computational primitive that is widely used in areas ranging from traditional numerical applications to recent big data analysis and machine learning. Although many SpGEMM algorithms have been proposed, hardware-specific optimizations for multi- and many-core processors are lacking, and a detailed analysis of their performance under various use cases and matrices is not available. We first identify and mitigate multiple bottlenecks with memory management and thread scheduling on Intel Xeon Phi (Knights Landing or KNL). Specifically targeting multi- and many-core processors, we develop a hash-table-based algorithm and optimize a heap-based shared-memory SpGEMM algorithm. We examine their performance together with other publicly available codes. Unlike earlier studies, our evaluation also includes use cases that are representative of real graph algorithms, such as multi-source breadth-first search and triangle counting. Our hash-table- and heap-based algorithms show significant speedups over existing libraries in the majority of cases, while different algorithms dominate in other scenarios depending on matrix size, sparsity, compression factor, and operation type. We distill these in-depth evaluation results into a recipe for choosing the best SpGEMM algorithm for a target scenario. A critical finding is that hash-table-based SpGEMM gets a significant performance boost if the nonzeros are not required to be sorted within each row of the output matrix.
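
The hash-table-based formulation can be sketched in a few lines: for each output row, partial products are accumulated in a hash table (Gustavson's row-wise scheme), which is exactly the structure that benefits when sorted output rows are not required. This is an illustrative Python sketch, not the authors' optimized KNL code; the dict-of-dicts sparse format is an assumption for brevity.

```python
def spgemm_hash(A, B):
    """Row-wise sparse matrix product C = A @ B (Gustavson's algorithm).

    A and B are dicts mapping row index -> {col index: value}; a hash
    table accumulates the partial products for each output row, so no
    per-row sort is needed unless sorted output columns are required.
    """
    C = {}
    for i, row in A.items():
        acc = {}                     # hash-table accumulator for row i
        for k, a_ik in row.items():
            for j, b_kj in B.get(k, {}).items():
                acc[j] = acc.get(j, 0.0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C
```

In an optimized implementation each thread owns a private accumulator sized to an upper bound of the row's nonzeros; the sort (if needed) happens only once per output row.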

  1. A new algorithm for agile satellite-based acquisition operations

    NASA Astrophysics Data System (ADS)

    Bunkheila, Federico; Ortore, Emiliano; Circi, Christian

    2016-06-01

    Taking advantage of the high manoeuvrability and accurate pointing of so-called agile satellites, an algorithm that allows efficient management of optical acquisition operations is described. The algorithm can be subdivided into two parts: in the first, it performs a geometric classification of the areas of interest and partitions them into stripes that develop along the optimal scan directions; in the second, it computes the succession of time windows in which the acquisition of the areas of interest is feasible, taking into consideration the potential restrictions associated with these operations and with the geometric and stereoscopic constraints. The results and performance of the proposed algorithm have been determined and discussed for the case of Periodic Sun-Synchronous Orbits.

  2. A Brokering Protocol for Agent-Based Grid Resource Discovery

    NASA Astrophysics Data System (ADS)

    Kang, Jaeyong; Sim, Kwang Mong

    Resource discovery is one of the basic and key aspects of grid resource management; it aims at finding suitable resources to satisfy the requirements of users' applications. This paper introduces an agent-based brokering protocol that connects users and providers in grid environments. In particular, it focuses on the problem of connecting users and providers. A connection algorithm that matches requests from users with advertisements from providers, based on multiple pre-specified criteria, is devised and implemented. The connection algorithm consists of four stages: selection, evaluation, filtering, and recommendation. A series of experiments was carried out executing the protocol, and favorable results were obtained.
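
The four-stage connection algorithm (selection, evaluation, filtering, recommendation) might look like the following sketch. The criteria names, weighting scheme, and thresholds are illustrative assumptions, not the protocol's actual parameters.

```python
def broker_match(request, providers, weights, threshold=0.3, top_k=3):
    """Toy four-stage matching of a user request against provider adverts."""
    # 1. Selection: keep providers meeting every hard requirement.
    selected = [p for p in providers
                if all(p["caps"].get(k, 0) >= v for k, v in request.items())]
    # 2. Evaluation: weighted score over pre-specified criteria.
    def score(p):
        return sum(w * p["caps"].get(k, 0) for k, w in weights.items())
    scored = [(score(p), p["name"]) for p in selected]
    # 3. Filtering: drop candidates scoring below the threshold.
    filtered = [sn for sn in scored if sn[0] >= threshold]
    # 4. Recommendation: return the top-k names by score.
    return [name for _s, name in sorted(filtered, reverse=True)[:top_k]]
```

A broker agent would run this on each incoming request and return the recommended providers to the user agent.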

  3. Development of an algorithm for the management of cervical lymphadenopathy in children: consensus of the Italian Society of Preventive and Social Pediatrics, jointly with the Italian Society of Pediatric Infectious Diseases and the Italian Society of Pediatric Otorhinolaryngology.

    PubMed

    Chiappini, Elena; Camaioni, Angelo; Benazzo, Marco; Biondi, Andrea; Bottero, Sergio; De Masi, Salvatore; Di Mauro, Giuseppe; Doria, Mattia; Esposito, Susanna; Felisati, Giovanni; Felisati, Dino; Festini, Filippo; Gaini, Renato Maria; Galli, Luisa; Gambini, Claudio; Gianelli, Umberto; Landi, Massimo; Lucioni, Marco; Mansi, Nicola; Mazzantini, Rachele; Marchisio, Paola; Marseglia, Gian Luigi; Miniello, Vito Leonardo; Nicola, Marta; Novelli, Andrea; Paulli, Marco; Picca, Marina; Pillon, Marta; Pisani, Paolo; Pipolo, Carlotta; Principi, Nicola; Sardi, Iacopo; Succo, Giovanni; Tomà, Paolo; Tortoli, Enrico; Tucci, Filippo; Varricchio, Attilio; de Martino, Maurizio; Italian Guideline Panel For Management Of Cervical Lymphadenopathy In Children

    2015-01-01

    Cervical lymphadenopathy is a common finding in children, caused by a wide spectrum of disorders. On the basis of a complete history and physical examination, paediatricians have to select, from the vast majority of children with a benign self-limiting condition, those at risk for other, more complex diseases requiring laboratory tests, imaging and, finally, tissue sampling. At the same time, they should avoid expensive and invasive examinations when unnecessary. The Italian Society of Preventive and Social Pediatrics, jointly with the Italian Society of Pediatric Infectious Diseases, the Italian Society of Pediatric Otorhinolaryngology, and other scientific societies, issued a National Consensus document, based on the most recent literature findings, including an algorithm for the management of cervical lymphadenopathy in children. The Consensus Conference method was used, following the Italian National Plan Guidelines. Relevant publications in English were identified through a systematic review of MEDLINE and the Cochrane Database of Systematic Reviews from their inception through March 21, 2014. Based on the literature results, an algorithm was developed covering several possible clinical scenarios. Situations requiring a watchful waiting strategy, those requiring empiric antibiotic therapy, and those necessitating a prompt diagnostic workup, considering the risk for a severe underlying disease, have been identified. The present algorithm is a practical tool for the management of pediatric cervical lymphadenopathy in hospital and ambulatory settings. A multidisciplinary approach is paramount. Further studies are required for its validation in the clinical field.

  4. ATAMM enhancement and multiprocessing performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.

    1994-01-01

    The algorithm to architecture mapping model (ATAMM) is a Petri-net-based model which provides a strategy for periodic execution of a class of real-time algorithms on a multicomputer dataflow architecture. The execution of large-grained, decision-free algorithms on homogeneous processing elements is studied. The ATAMM provides an analytical basis for calculating performance bounds on throughput characteristics. Extension of the ATAMM as a strategy for cyclo-static scheduling provides for a truly distributed ATAMM multicomputer operating system. An ATAMM testbed consisting of a centralized graph manager and three processors is described, using embedded firmware on 68HC11 microcontrollers.

  5. Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture

    NASA Astrophysics Data System (ADS)

    Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea

    2014-05-01

    Nowadays, power consumption is a very important factor because of its associated costs and environmental sustainability problems. Automatic load control based on power consumption and use cycles represents the optimal solution to cost restraint. The purpose of these systems is to modulate the electricity demand, avoiding unorganized operation of the loads, by using intelligent techniques to manage them based on real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption based on the stipulated contract terms. The proposed algorithm uses two main notions: priority-driven loads and smart scheduling loads. Priority-driven loads can be turned off (put on standby) according to a priority policy established by the user if consumption exceeds a defined threshold; smart scheduling loads, by contrast, are scheduled so that their life cycle (LC) is never interrupted, safeguarding the devices' functions and allowing the user to operate the devices freely without the risk of exceeding the power threshold. Using these two notions and taking user requirements into account, the algorithm manages load activation and deactivation, allowing loads to complete their operation cycle without exceeding the consumption threshold, in an off-peak time range according to the electricity fare. This logic is inspired by industrial lean manufacturing, whose focus is to minimize any kind of power waste by optimizing the available resources.
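
A minimal version of the priority-driven part of such a scheduler is sketched below: when total demand exceeds the contractual threshold, interruptible loads are placed on standby in priority order, while smart-scheduling loads are modeled simply as non-interruptible so their life cycle is never broken. Field names, priorities, and wattages are illustrative assumptions.

```python
def enforce_threshold(loads, threshold):
    """Stand by lowest-priority interruptible loads until demand fits.

    Each load is (name, watts, priority, interruptible); lower priority
    number means shed first. Non-interruptible (smart-scheduling) loads
    are never touched, so their operation cycle completes intact.
    """
    active = sorted(loads, key=lambda l: l[2])   # shed low priority first
    total = sum(l[1] for l in active)
    standby = []
    for load in active:
        if total <= threshold:
            break
        name, watts, _prio, interruptible = load
        if interruptible:
            standby.append(name)
            total -= watts
    return standby, total
```

In the full algorithm, standby loads would be reactivated in an off-peak time range once the threshold allows.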

  6. Open abdomen management: A review of its history and a proposed management algorithm

    PubMed Central

    Kreis, Barbara Elize; de Mol van Otterloo, Johan Coenraad Alexander; Kreis, Robert Walter

    2013-01-01

    In this review we look into the historical development of open abdomen management. Over 70 years, its indications have expanded from intra-abdominal sepsis to damage control surgery and abdominal compartment syndrome. Appropriate temporary abdominal closure techniques are essential to realizing the potential advantages of open abdomen management. Here, we discuss the different techniques and provide a new treatment strategy, based on available evidence, to facilitate more consistent decision making and further research on this complicated surgical topic. PMID:23823991

  7. The Texas Children's Medication Algorithm Project: Report of the Texas Consensus Conference Panel on Medication Treatment of Childhood Attention-Deficit/Hyperactivity Disorder. Part II: Tactics. Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Pliszka, S R; Greenhill, L L; Crismon, M L; Sedillo, A; Carlson, C; Conners, C K; McCracken, J T; Swanson, J M; Hughes, C W; Llana, M E; Lopez, M; Toprac, M G

    2000-07-01

    Expert consensus methodology was used to develop a medication treatment algorithm for attention-deficit/hyperactivity disorder (ADHD). The algorithm broadly outlined the choice of medication for ADHD and some of its most common comorbid conditions. Specific tactical recommendations were developed with regard to medication dosage, assessment of drug response, management of side effects, and long-term medication management. The consensus conference of academic clinicians and researchers, practicing clinicians, administrators, consumers, and families developed evidence-based tactics for the pharmacotherapy of childhood ADHD and its common comorbid disorders. The panel discussed specifics of treatment of ADHD and its comorbid conditions with stimulants, antidepressants, mood stabilizers, alpha-agonists, and (when appropriate) antipsychotics. Specific tactics for the use of each of the above agents are outlined. The tactics are designed to be practical for implementation in the public mental health sector, but they may have utility in many practice settings, including the private practice environment. Tactics for psychopharmacological management of ADHD can be developed with consensus.

  8. Land use mapping from CBERS-2 images with open source tools by applying different classification algorithms

    NASA Astrophysics Data System (ADS)

    Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.

    2016-02-01

    Land cover classification is often based on different characteristics between classes, but with great homogeneity within each of them. This cover is obtained through field work or by means of processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative for performing this task. However, in some developing countries, and particularly in Casacoima municipality in Venezuela, there is a lack of geographic information systems due to the lack of updated information and the high cost of software licenses. This research proposes a low-cost methodology to develop thematic mapping of local land use and coverage types in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised classification was applied per pixel and per region using different classification algorithms, which were compared among themselves. Per-pixel classification was based on the Maxver (maximum likelihood) and Euclidean distance (minimum distance) algorithms, while per-region classification was based on the Bhattacharya algorithm. Satisfactory results were obtained from per-region classification, with an overall reliability of 83.93% and a kappa index of 0.81. The Maxver algorithm showed a reliability of 73.36% and a kappa index of 0.69, while Euclidean distance obtained 67.17% and 0.61 for reliability and kappa index, respectively. The proposed methodology proved very useful for cartographic processing and updating, which in turn supports the development of management and land use plans. Hence, open source tools showed themselves to be an economically viable alternative not only for forestry organizations, but for the general public, allowing them to develop projects in economically depressed and/or environmentally threatened areas.
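
The minimum-distance (Euclidean) rule used in the per-pixel comparison can be written compactly: each pixel's band vector is assigned to the class whose training mean is nearest. The band values and class means below are illustrative, not taken from the CBERS-2 data.

```python
def minimum_distance_classify(pixel, class_means):
    """Assign a pixel (band vector) to the class with the nearest mean.

    This is the Euclidean-distance rule the abstract compares against
    maximum likelihood; class means would come from training regions.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_means, key=lambda c: dist2(pixel, class_means[c]))
```

Maximum likelihood differs in that it also uses the per-class covariance, which is why the two methods can produce different reliability figures on the same scene.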

  9. SHAMROCK: A Synthesizable High Assurance Cryptography and Key Management Coprocessor

    DTIC Science & Technology

    2016-11-01

    and excluding devices from a communicating group as they become trusted, or untrusted. An example of using rekeying to dynamically adjust group...algorithms, such as the Elliptic Curve Digital Signature Algorithm (ECDSA), work by computing a cryptographic hash of a message using, for example, the...material is based upon work supported by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. FA8721-05-C

  10. Activity Recognition for Personal Time Management

    NASA Astrophysics Data System (ADS)

    Prekopcsák, Zoltán; Soha, Sugárka; Henk, Tamás; Gáspár-Papanek, Csaba

    We describe an accelerometer-based activity recognition system for mobile phones with a special focus on personal time management. We compare several data mining algorithms for the automatic recognition task in single-user and multiuser scenarios, and improve accuracy with heuristics and advanced data mining methods. The results show that daily activities can be recognized with high accuracy and that integration with the RescueTime software can give good insights for personal time management.
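
A tiny sketch of such a recognition pipeline: windows of accelerometer magnitude are reduced to mean/standard-deviation features and labeled by the nearest class centroid. This is an illustrative stand-in for the data mining algorithms the paper compares; the centroid values and labels are made up.

```python
def window_features(window):
    """Mean and standard deviation of accelerometer magnitude in a window."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, var ** 0.5)

def classify(window, centroids):
    """Nearest-centroid label over the window's feature vector."""
    f = window_features(window)
    return min(centroids, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(f, centroids[c])))
```

In practice more features (frequency-domain energy, axis correlations) and stronger classifiers improve accuracy, especially in the multiuser scenario.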

  11. Knowledge Based Engineering for Spatial Database Management and Use

    NASA Technical Reports Server (NTRS)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  12. A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar

    2015-01-01

    In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.

  13. Prosthetic joint infection development of an evidence-based diagnostic algorithm.

    PubMed

    Mühlhofer, Heinrich M L; Pohlig, Florian; Kanz, Karl-Georg; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; Kelch, Sarah; Harrasser, Norbert; von Eisenhart-Rothe, Rüdiger; Schauwecker, Johannes

    2017-03-09

    Increasing rates of prosthetic joint infection (PJI) have presented challenges for general practitioners, orthopedic surgeons and the health care system in recent years. The diagnosis of PJI is complex; multiple diagnostic tools are used in the attempt to correctly diagnose PJI. Evidence-based algorithms can help to identify PJI using standardized diagnostic steps. We reviewed relevant publications between 1990 and 2015 using a systematic literature search in MEDLINE and PubMed. The selected search results were then classified into levels of evidence. The keywords were prosthetic joint infection, biofilm, diagnosis, sonication, antibiotic treatment, implant-associated infection, Staph. aureus, rifampicin, implant retention, PCR, MALDI-TOF, serology, synovial fluid, C-reactive protein level, total hip arthroplasty (THA), total knee arthroplasty (TKA) and combinations of these terms. From an initial 768 publications, 156 were stringently reviewed. Publications with class I-III recommendations (EAST) were considered. We developed an algorithm that displays the complex diagnosis of PJI as a clear and logically structured process according to ISO 5807. The evidence-based standardized algorithm combines modern clinical requirements and evidence-based treatment principles. The algorithm provides a detailed, transparent standard operating procedure (SOP) for diagnosing PJI. Thus, consistently high, examiner-independent process quality is assured to meet the demands of modern quality management in PJI diagnosis.

  14. Integrated Traffic Flow Management Decision Making

    NASA Technical Reports Server (NTRS)

    Grabbe, Shon R.; Sridhar, Banavar; Mukherjee, Avijit

    2009-01-01

    A generalized approach is proposed to support integrated traffic flow management decision making studies at both the U.S. national and regional levels. It can consider tradeoffs between alternative optimization and heuristic-based models, strategic versus tactical flight controls, and system versus fleet preferences. Preliminary testing was accomplished by implementing thirteen unique traffic flow management models, which included all of the key components of the system, and conducting 85 six-hour fast-time simulation experiments. These experiments considered variations in the strategic planning look-ahead times, the replanning intervals, and the types of traffic flow management control strategies. Initial testing indicates that longer strategic planning look-ahead times and replanning intervals result in steadily decreasing levels of sector congestion for a fixed delay level, provided that accurate estimates of the air traffic demand, airport capacities and airspace capacities are available. In general, the distribution of delays among the users was most equitable when scheduling flights using a heuristic scheduling algorithm, such as ration-by-distance, and least equitable when using scheduling algorithms that took into account the number of seats aboard each flight. Though the scheduling algorithms were effective at alleviating sector congestion, the tactical rerouting algorithm was the primary control for avoiding en route weather hazards. Finally, the modeled levels of sector congestion, the number of weather incursions, and the total system delays were found to be in fair agreement with the values operationally observed on both good and bad weather days.
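
The ration-by-distance heuristic mentioned above can be sketched as a greedy slot assignment: flights farther from the constrained resource receive the earlier available slots. This is an illustrative reduction of the idea, not the simulation's actual scheduler; the data shapes are assumptions.

```python
def ration_by_distance(flights, slots):
    """Assign scarce arrival slots by distance, farthest flight first.

    flights: list of (flight_id, distance_to_constrained_area);
    slots: list of available slot times. Flights beyond the available
    slots are left unassigned (zip truncates).
    """
    ordered = sorted(flights, key=lambda f: f[1], reverse=True)
    return {fid: slot for (fid, _d), slot in zip(ordered, sorted(slots))}
```

Because the rule ignores aircraft size, it tends to spread delay more evenly across users than seat-weighted schedulers, consistent with the equity observation in the abstract.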

  15. Managed traffic evacuation using distributed sensor processing

    NASA Astrophysics Data System (ADS)

    Ramuhalli, Pradeep; Biswas, Subir

    2005-05-01

    This paper presents an integrated sensor network and distributed event processing architecture for managed in-building traffic evacuation during natural and human-caused disasters, including earthquakes, fire and biological/chemical terrorist attacks. The proposed wireless sensor network protocols and distributed event processing mechanisms offer a new distributed paradigm for improving the reliability of building evacuation and disaster management. The networking component of the system is constructed from distributed wireless sensors that measure environmental parameters such as temperature and humidity and detect unusual events such as smoke, structural failures, vibration, and biological/chemical or nuclear agents. Distributed event processing algorithms are executed by these sensor nodes to detect the propagation pattern of the disaster and to measure the concentration and activity of human traffic in different parts of the building. Based on this information, dynamic evacuation decisions are taken to maximize evacuation speed and minimize unwanted incidents such as human exposure to harmful agents and stampedes near exits. A set of audio-visual indicators and actuators aids the automated evacuation process. In this paper we develop integrated protocols and algorithms, and their simulation models, for the proposed sensor networking and distributed event processing framework. A powerful concept behind the proposed distributed event processing algorithms is the efficient harnessing of the individually low, but collectively massive, processing abilities of the sensor nodes. Results obtained through simulation are used for a detailed characterization of the proposed evacuation management system and its associated algorithmic components.

  16. Least mean square fourth based microgrid state estimation algorithm using the internet of things technology.

    PubMed

    Rana, Md Masud

    2017-01-01

    This paper proposes an innovative internet of things (IoT) based communication framework for monitoring a microgrid under conditions of packet dropouts in measurements. First, the microgrid incorporating renewable distributed energy resources is represented by a state-space model. An IoT-embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system over the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations.
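
The least mean square fourth (LMF) update replaces the squared-error gradient of ordinary LMS with a cubed error term, since it minimizes the fourth power of the error. The scalar sketch below estimates a parameter w in y = w·x; the toy data setup is our own, not the paper's microgrid model.

```python
import random

def lmf_estimate(samples, mu=0.01, w0=0.0):
    """Least mean square fourth (LMF): w <- w + mu * x * e^3.

    Scalar toy: estimate w in y = w*x from (x, y) pairs; the cubed
    error is the gradient of the fourth-power error cost.
    """
    w = w0
    for x, y in samples:
        e = y - w * x
        w += mu * x * e ** 3
    return w

# Noiseless toy data: y = 2x, inputs drawn from [0.5, 1.0].
rng = random.Random(0)
true_w = 2.0
data = [(x, true_w * x) for x in (rng.uniform(0.5, 1.0) for _ in range(50000))]
est = lmf_estimate(data)
```

LMF converges slowly once the error is small (the update scales with e³), which is why it is usually paired with careful step-size selection in practice.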

  17. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    NASA Technical Reports Server (NTRS)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation and management and to the role of prognostics in decision-making. A distinction between two interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when using prognostics to make critical decisions.

  18. Architecting Integrated System Health Management for Airworthiness

    DTIC Science & Technology

    2013-09-01

    aircraft safety and reliability through condition-based maintenance [Miller et al., 1991]. With the same motivation, Integrated System Health Management...diagnostics and prognostics algorithms. 2.2.2 Health and Usage Monitoring System (HUMS) in Helicopters Increased demand for improved operational safety ...offshore shuttle helicopters traversing the petrol installations in the North Sea, and increased demand for improved operational safety and reduced

  19. Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks

    PubMed Central

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution, however, leads to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance; this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of fault nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a simple random sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789
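
    The two ideas — probe stations drawn by simple random sampling, and a probing frequency that adapts to observed faults — can be sketched as follows. The concrete adjusting rule and its bounds are hypothetical stand-ins, not the paper's exact rule:

    ```python
    import random

    def select_probe_stations(nodes, k, seed=None):
        """Simple random sampling: every node is equally likely to serve as a
        probe station this round, spreading the energy cost across the network."""
        rng = random.Random(seed)
        return rng.sample(nodes, k)

    def adjust_probe_interval(interval, faults_found, lo=1.0, hi=60.0):
        """Hypothetical adjusting rule: probe twice as often after a fault,
        back off geometrically while the network looks healthy."""
        interval = interval / 2 if faults_found else interval * 1.5
        return max(lo, min(hi, interval))

    nodes = [f"n{i}" for i in range(100)]
    stations = select_probe_stations(nodes, 5, seed=42)
    print(stations)
    print(adjust_probe_interval(10.0, faults_found=False))  # -> 15.0
    print(adjust_probe_interval(10.0, faults_found=True))   # -> 5.0
    ```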

  1. A Distributed Wireless Camera System for the Management of Parking Spaces

    PubMed Central

    Melničuk, Petr

    2017-01-01

    The importance of detecting parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine the occupancy of a parking space based on information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient occupancy detection algorithm based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at a rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. The reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces. PMID:29283371
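
    A minimal, pure-Python sketch of the HOG half of such a pipeline (the SVM classification stage is omitted) may clarify how orientation histograms summarize a parking-space image patch. The single-cell layout, bin count, and normalisation here are illustrative choices, not the paper's configuration:

    ```python
    import math

    def hog_cell(gray, bins=9):
        """Toy histogram of oriented gradients for one cell: central differences,
        unsigned orientation binned over [0, 180), votes weighted by magnitude."""
        h = [0.0] * bins
        rows, cols = len(gray), len(gray[0])
        for y in range(1, rows - 1):
            for x in range(1, cols - 1):
                gx = gray[y][x + 1] - gray[y][x - 1]
                gy = gray[y + 1][x] - gray[y - 1][x]
                mag = math.hypot(gx, gy)
                ang = math.degrees(math.atan2(gy, gx)) % 180.0
                h[int(ang / 180.0 * bins) % bins] += mag
        total = sum(h) or 1.0
        return [v / total for v in h]        # L1-normalised descriptor

    # A vertical edge: gradients are horizontal, so the 0-degree bin dominates.
    img = [[0, 0, 9, 9]] * 4
    desc = hog_cell(img)
    print(desc.index(max(desc)))  # -> 0
    ```

    A real detector concatenates many such cell histograms over the image window and feeds the vector to a trained SVM.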

  2. HRSSA – Efficient hybrid stochastic simulation for spatially homogeneous biochemical reaction networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchetti, Luca, E-mail: marchetti@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; University of Trento, Department of Mathematics

    This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating the performance and accuracy of HRSSA against other state-of-the-art algorithms.
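
    The rejection step that lets RSSA-style algorithms avoid recomputing exact propensities can be sketched as follows; a candidate reaction is drawn from cheap upper bounds and accepted with probability exact/bound. The reactions, rates, and bounds below are toy assumptions, not HRSSA itself:

    ```python
    import random

    def rssa_select(state, rates, bounds, rng):
        """Rejection-based selection step (after RSSA): draw a candidate reaction
        in proportion to its propensity upper bound, then accept it with
        probability exact/bound, so exact propensities are only evaluated
        inside the acceptance test."""
        total = sum(bounds)
        while True:
            r = rng.random() * total
            j = 0
            while r > bounds[j]:          # linear search over the bound table
                r -= bounds[j]
                j += 1
            if rng.random() * bounds[j] <= rates[j](state):
                return j                  # accepted: fire reaction j

    rng = random.Random(0)
    state = {"A": 10}
    rates = [lambda s: 0.5 * s["A"], lambda s: 2.0]     # exact propensities
    bounds = [6.0, 2.0]                                  # upper bounds (A <= 12)
    counts = [0, 0]
    for _ in range(2000):
        counts[rssa_select(state, rates, bounds, rng)] += 1
    print(counts)  # reaction firings split roughly 5:2, matching the exact rates
    ```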

  3. Is It Ethical for Patents to Be Issued for the Computer Algorithms that Affect Course Management Systems for Distance Learning?

    ERIC Educational Resources Information Center

    Moreau, Nancy

    2008-01-01

    This article discusses the impact of patents for computer algorithms in course management systems. Referring to historical documents and court cases, the positive and negative aspects of software patents are presented. The key argument is the accessibility to algorithms comprising a course management software program such as Blackboard. The…

  4. Development and implementation of clinical algorithms in occupational health practice.

    PubMed

    Ghafur, Imran; Lalloo, Drushca; Macdonald, Ewan B; Menon, Manju

    2013-12-01

    Occupational health (OH) practice is framed by legal, ethical, and regulatory requirements. Integrating this information into daily practice can be a difficult task. We devised evidence-based framework standards of good practice that would aid clinical management, and assessed their impact. The clinical algorithm was the method deemed most appropriate to our needs. Using "the first OH consultation" as an example, the development, implementation, and evaluation of an algorithm is described. The first OH consultation algorithm was developed. Evaluation demonstrated an overall improvement in recording of information, specifically consent, recreational drug history, function, and review arrangements. Clinical algorithms can be a method for assimilating and succinctly presenting the various facets of OH practice, for use by all OH clinicians as a practical guide and as a way of improving quality in clinical record-keeping.

  5. Study on Adaptive Parameter Determination of Cluster Analysis in Urban Management Cases

    NASA Astrophysics Data System (ADS)

    Fu, J. Y.; Jing, C. F.; Du, M. Y.; Fu, Y. L.; Dai, P. P.

    2017-09-01

    The fine management of cities is an important way to realize the smart city. Data mining using spatial clustering analysis of urban management cases can be used in the evaluation of urban public facility deployment, can support policy decisions, and also provides technical support for the fine management of the city. Aiming at the problem that the density-based DBSCAN algorithm cannot determine its parameters adaptively, this paper proposes an optimization method for adaptive parameter determination based on spatial analysis. First, Ripley's K function is analysed for the data set to determine the neighbourhood radius Eps adaptively, setting the maximum aggregation scale as the range of data clustering. Then, each point's highest-frequency neighbour count within Eps is calculated using a K-D tree and set as the clustering density, realizing the adaptive determination of the global parameter MinPts. The R language was used to implement this process and accomplish precise clustering of typical urban management cases. Experimental results based on typical urban management cases in the XiCheng district of Beijing show that the DBSCAN clustering algorithm presented in this paper takes full account of the spatial and statistical characteristics of the data, which show an obvious clustering feature, and achieves better applicability and high quality. The results of the study are helpful not only for the formulation of urban management policies and the allocation of urban management supervisors in the XiCheng district of Beijing, but also for other cities and related fields.
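
    For readers unfamiliar with the underlying algorithm, a minimal DBSCAN in plain Python shows exactly where the two parameters Eps and MinPts enter; the adaptive estimation of those parameters via Ripley's K and a K-D tree is not reproduced here, and the toy points are illustrative:

    ```python
    def dbscan(points, eps, min_pts):
        """Plain density-based clustering (DBSCAN): a point with at least
        min_pts neighbours within eps seeds a cluster that grows through its
        density-reachable neighbours; everything else is labelled noise (-1)."""
        def neighbours(i):
            px, py = points[i]
            return [j for j, (qx, qy) in enumerate(points)
                    if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

        labels = [None] * len(points)
        cluster = -1
        for i in range(len(points)):
            if labels[i] is not None:
                continue
            seeds = neighbours(i)
            if len(seeds) < min_pts:
                labels[i] = -1                 # provisionally noise
                continue
            cluster += 1
            labels[i] = cluster
            queue = list(seeds)
            while queue:
                j = queue.pop()
                if labels[j] == -1:
                    labels[j] = cluster        # border point reclaimed from noise
                if labels[j] is not None:
                    continue
                labels[j] = cluster
                nj = neighbours(j)
                if len(nj) >= min_pts:         # core point: keep expanding
                    queue.extend(nj)
        return labels

    pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (30, 0)]
    print(dbscan(pts, eps=1.5, min_pts=3))  # -> [0, 0, 0, 1, 1, 1, -1]
    ```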

  6. [Clinical practice guidelines and knowledge management in healthcare].

    PubMed

    Ollenschläger, Günter

    2013-10-01

    Clinical practice guidelines are key tools for the translation of scientific evidence into everyday patient care. Guidelines can therefore act as cornerstones of evidence-based knowledge management in healthcare, if they are trustworthy and their recommendations are not biased by the authors' conflicts of interest. Good medical guidelines should be disseminated by means of virtual (digital/electronic) health libraries, together with implementation tools in context, such as guideline-based algorithms, checklists, patient information, and so forth. The article presents evidence-based medical knowledge management using the German experience as an example. It discusses future steps in establishing evidence-based health care by combining patient data, evidence from medical science, and patient care routine with feedback systems for healthcare providers.

  7. Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Fu, Yuli; Yang, Junjie

    2016-07-01

    Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on a two-stage power pricing model, the power price is associated with the traffic data efficiently received at the meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform fixed-parameter (sensing time and transmission time) algorithms, and the power cost is reduced efficiently.
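
    The sensing-time/transmission-time trade-off can be illustrated with a toy model in which detection quality saturates with sensing time while the remaining transmission share of the frame shrinks. The functional form and every constant below are assumptions for illustration only, not the paper's model:

    ```python
    import math

    def throughput(t_s, frame=0.1, k=80.0, c0=6.0):
        """Toy objective: longer sensing raises detection quality (saturating
        exponential term) but leaves less of the frame for transmission."""
        detect = 1.0 - math.exp(-k * t_s)        # probability of sensing correctly
        return c0 * detect * (frame - t_s) / frame

    def best_sensing_time(frame=0.1, steps=1000):
        """Grid search for the sensing time that maximises expected throughput."""
        return max((throughput(i * frame / steps, frame), i * frame / steps)
                   for i in range(1, steps))

    rate, t_s = best_sensing_time()
    print(0.0 < t_s < 0.1 and rate > 0.0)  # -> True: an interior optimum exists
    ```

    The joint optimisation in the paper is richer (it also prices interference and opportunity loss), but the shape of the trade-off is the same.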

  8. Reaching consensus on the physiotherapeutic management of patients following upper abdominal surgery: a pragmatic approach to interpret equivocal evidence.

    PubMed

    Hanekom, Susan D; Brooks, Dina; Denehy, Linda; Fagevik-Olsén, Monika; Hardcastle, Timothy C; Manie, Shamila; Louw, Quinette

    2012-02-06

    Postoperative pulmonary complications remain the most significant cause of morbidity following open upper abdominal surgery despite advances in perioperative care. However, due to poor-quality primary research, uncertainty surrounding the value of prophylactic physiotherapy intervention in the management of patients following abdominal surgery persists. The Delphi process has been proposed as a pragmatic methodology to guide clinical practice when evidence is equivocal. The objective was to develop a clinical management algorithm for the postoperative management of abdominal surgery patients. Eleven draft algorithm statements extracted from the extant literature by the primary research team were verified and rated by scientist clinicians (n=5) in an electronic three-round Delphi process. Algorithm statements which reached the a priori defined consensus level (semi-interquartile range (SIQR) < 0.5) were collated into the algorithm. The five panelists allocated to the abdominal surgery Delphi panel were from Australia, Canada, Sweden, and South Africa. The 11 draft algorithm statements were edited and 5 additional statements were formulated. The panel reached consensus on the rating of all statements. Four statements were rated essential. An expert Delphi panel thus interpreted the equivocal evidence for the physiotherapeutic management of patients following upper abdominal surgery, and through a process of consensus a clinical management algorithm was formulated. This algorithm can now be used by clinicians to guide clinical practice in this population.
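
    The consensus rule (SIQR < 0.5) is easy to compute directly. A small sketch using Python's statistics module, with hypothetical panel ratings on a 5-point scale:

    ```python
    from statistics import quantiles

    def siqr(ratings):
        """Semi-interquartile range: half the spread between the first and
        third quartiles of the panelists' ratings."""
        q1, _, q3 = quantiles(ratings, n=4)   # default 'exclusive' method
        return (q3 - q1) / 2

    def consensus(ratings, threshold=0.5):
        """A statement enters the algorithm when SIQR < 0.5 (the a priori rule)."""
        return siqr(ratings) < threshold

    print(consensus([4, 4, 5, 4, 4]))  # -> True: the panel agrees
    print(consensus([1, 3, 5, 2, 4]))  # -> False: ratings too dispersed
    ```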

  9. Reduced Transfusion During OLT by POC Coagulation Management and TEG Functional Fibrinogen: A Retrospective Observational Study.

    PubMed

    De Pietri, Lesley; Ragusa, Francesca; Deleuterio, Annalisa; Begliomini, Bruno; Serra, Valentina

    2016-01-01

    Patients undergoing orthotopic liver transplantation are at high risk of bleeding complications. Several authors have shown that thromboelastography (TEG)-based coagulation management and the administration of fibrinogen concentrate reduce the need for blood transfusion. We conducted a single-center, retrospective cohort observational study (Modena Polyclinic, Italy) on 386 consecutive patients undergoing liver transplantation. We assessed the impact on resource consumption and patient survival of introducing a new TEG-based transfusion algorithm, which added the functional fibrinogen thromboelastography test and a maximum-amplitude transfusion cutoff (7 mm) to guide fibrinogen administration (2012-2014, n = 118), compared with the purely TEG-based algorithm previously used (2005-2011, n = 268). After 2012, there was a significant decrease in the use of homologous blood (1502 ± 1376 vs 794 ± 717 mL, P < 0.001), fresh frozen plasma (537 ± 798 vs 98 ± 375 mL, P < 0.001), and platelets (158 ± 280 vs 75 ± 148 mL, P < 0.005), whereas the use of fibrinogen increased (0.1 ± 0.5 vs 1.4 ± 1.8 g, P < 0.001). There were no significant differences in 30-day and 6-month survival between the 2 groups. The implementation of a new coagulation management method, featuring the addition of the functional fibrinogen thromboelastography test to the TEG test according to an algorithm that provides for the administration of fibrinogen, has helped reduce the need for transfusion in patients undergoing liver transplantation with no impact on their survival.

  10. The Scatter Search Based Algorithm to Revenue Management Problem in Broadcasting Companies

    NASA Astrophysics Data System (ADS)

    Pishdad, Arezoo; Sharifyazdi, Mehdi; Karimpour, Reza

    2009-09-01

    The problem addressed in this paper, which is faced by broadcasting companies, is how to benefit from a limited advertising space. The problem arises from the stochastic behavior of customers (advertisers) in different fare classes. To address this issue we propose a constrained nonlinear multi-period mathematical model which incorporates cancellation and overbooking. The objective function is to maximize the total expected revenue, and our numerical method does so by determining the sales limits for each class of customer, yielding the revenue management control policy. Scheduling the advertising spots in breaks is another area of concern, and we consider it as a constraint in our model. In this paper an algorithm based on scatter search is developed to acquire a good feasible solution. This method uses simulation over customer arrivals in a continuous finite time horizon [0, T]. Several sensitivity analyses are conducted in the computational results to demonstrate the effectiveness of the proposed method. They also provide insight into the better results obtained by the revenue management control policy compared with a "no sales limit" policy in which earlier demand is served first.

  11. QoE collaborative evaluation method based on fuzzy clustering heuristic algorithm.

    PubMed

    Bao, Ying; Lei, Weimin; Zhang, Wei; Zhan, Yuzhuo

    2016-01-01

    At present, realizing or improving the quality of experience (QoE) is a major goal for network media transmission services, and QoE evaluation is the basis for adjusting the transmission control mechanism. Therefore, a QoE collaborative evaluation method based on a fuzzy clustering heuristic algorithm is proposed in this paper, which concentrates on service score calculation at the server side. The server side collects network transmission quality of service (QoS) parameters, node location data, and user expectation values from client feedback information. It then manages the historical data in a database through a "big data" processing mode and predicts user scores according to heuristic rules. On this basis, it completes the fuzzy clustering analysis and generates the service QoE score and a management message, which are finally fed back to clients. This paper mainly discusses service evaluation generative rules, heuristic evaluation rules, and fuzzy clustering analysis methods, and presents the service-based QoE evaluation process. Simulation experiments have verified the effectiveness of the QoE collaborative evaluation method based on fuzzy clustering heuristic rules.

  12. Supporting reputation based trust management enhancing security layer for cloud service models

    NASA Astrophysics Data System (ADS)

    Karthiga, R.; Vanitha, M.; Sumaiya Thaseen, I.; Mangaiyarkarasi, R.

    2017-11-01

    In the existing system, trust between cloud providers and consumers is inadequate for establishing the service level agreement, although consumer feedback is a good basis for assessing the overall reliability of cloud services. Investigators have recognized that trust can be managed and security provided based on feedback collected from participants. In this work, a face recognition system helps to identify the user effectively: an image comparison algorithm captures the user's face at registration time and stores it in a database, and that original image is then compared with the sample image already stored in the database. If the two images match, the user is identified effectively. When confidential data are subcontracted to the cloud, data holders become worried about the confidentiality of their data in the cloud. Encrypting the data before subcontracting has been regarded as an important means of keeping user data private from the cloud server, so in order to keep the data secure we use the AES algorithm. Symmetric-key algorithms use a shared key, and keeping data secret requires keeping this key secret, so only a user with the private key can decrypt the data.

  13. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    NASA Astrophysics Data System (ADS)

    Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy

    2018-03-01

    In this paper, models and an algorithm are developed for forming the optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model represents the optimization procedure as a nonlinear discrete programming problem, which consists in minimizing the execution time of a set of interrelated works carried out by a limited number of partially interchangeable performers while limiting the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization management methodologies for high-rise construction projects.

  14. Use of Management Pathways or Algorithms in Children With Chronic Cough: Systematic Reviews.

    PubMed

    Chang, Anne B; Oppenheimer, John J; Weinberger, Miles; Weir, Kelly; Rubin, Bruce K; Irwin, Richard S

    2016-01-01

    Use of appropriate cough pathways or algorithms may reduce the morbidity of chronic cough, lead to earlier diagnosis of chronic underlying illness, and reduce unnecessary costs and medications. We undertook three systematic reviews to examine three related key questions (KQ): In children aged ≤14 years with chronic cough (> 4 weeks' duration), KQ1, do cough management protocols (or algorithms) improve clinical outcomes? KQ2, should the cough management or testing algorithm differ depending on the duration and/or severity? KQ3, should the cough management or testing algorithm differ depending on the associated characteristics of the cough and clinical history? We used the CHEST expert cough panel's protocol. Two authors screened searches and selected and extracted data. Only systematic reviews, randomized controlled trials (RCTs), and cohort studies published in English were included. Data were presented in Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowcharts and summarized in tables. Nine studies were included in KQ1 (RCT = 1; cohort studies = 7) and eight in KQ3 (RCT = 2; cohort = 6), but none in KQ2. There is high-quality evidence that in children aged ≤14 years with chronic cough (> 4 weeks' duration), the use of cough management protocols (or algorithms) improves clinical outcomes, and that the cough management or testing algorithm should differ depending on the associated characteristics of the cough and clinical history. It remains uncertain whether the management or testing algorithm should depend on the duration or severity of chronic cough. Pending new data, chronic cough in children should be defined as > 4 weeks' duration, and children should be systematically evaluated with treatment targeted to the underlying cause irrespective of the cough severity. Copyright © 2016 American College of Chest Physicians. All rights reserved.

  15. Theory and experiments in model-based space system anomaly management

    NASA Astrophysics Data System (ADS)

    Kitts, Christopher Adam

    This research program consists of an experimental study of model-based reasoning methods for detecting, diagnosing and resolving anomalies that occur when operating a comprehensive space system. Using a first principles approach, several extensions were made to the existing field of model-based fault detection and diagnosis in order to develop a general theory of model-based anomaly management. Based on this theory, a suite of algorithms were developed and computationally implemented in order to detect, diagnose and identify resolutions for anomalous conditions occurring within an engineering system. The theory and software suite were experimentally verified and validated in the context of a simple but comprehensive, student-developed, end-to-end space system, which was developed specifically to support such demonstrations. This space system consisted of the Sapphire microsatellite which was launched in 2001, several geographically distributed and Internet-enabled communication ground stations, and a centralized mission control complex located in the Space Technology Center in the NASA Ames Research Park. Results of both ground-based and on-board experiments demonstrate the speed, accuracy, and value of the algorithms compared to human operators, and they highlight future improvements required to mature this technology.

  16. System impairment compensation in coherent optical communications by using a bio-inspired detector based on artificial neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Danshi; Zhang, Min; Li, Ze; Song, Chuang; Fu, Meixia; Li, Jin; Chen, Xue

    2017-09-01

    A bio-inspired detector based on the artificial neural network (ANN) and genetic algorithm is proposed in the context of a coherent optical transmission system. The ANN is designed to mitigate impairments in a 16-quadrature amplitude modulation (16-QAM) system, including linear impairments (Gaussian white noise, laser phase noise, and in-phase/quadrature component imbalance) and a nonlinear impairment (nonlinear phase). Without prior information or heuristic assumptions, the ANN, functioning as a machine learning algorithm, can learn and capture the characteristics of impairments from observed data. Numerical simulations were performed, and dispersion-shifted, dispersion-managed, and dispersion-unmanaged fiber links were investigated. The launch power dynamic range and maximum transmission distance for the bio-inspired method were 2.7 dBm and 240 km greater, respectively, than those of the maximum likelihood estimation algorithm. Moreover, the linewidth tolerance of the bio-inspired technique was 170 kHz greater than that of the k-means method, demonstrating its usability for digital signal processing in coherent systems.

  17. TinyOS-based quality of service management in wireless sensor networks

    USGS Publications Warehouse

    Peterson, N.; Anusuya-Rangappa, L.; Shirazi, B.A.; Huang, R.; Song, W.-Z.; Miceli, M.; McBride, D.; Hurson, A.; LaHusen, R.

    2009-01-01

    Previously, the cost and extremely limited capabilities of sensors prohibited quality of service (QoS) implementations in wireless sensor networks. With advances in technology, sensors are becoming significantly less expensive, and the increases in computational and storage capabilities are opening the door for new, sophisticated algorithms to be implemented. Newer sensor network applications require higher data rates with more stringent priority requirements. We introduce a dynamic scheduling algorithm to improve bandwidth for high-priority data in sensor networks, called Tiny-DWFQ. Our Tiny-Dynamic Weighted Fair Queuing scheduling algorithm allows dynamic QoS for prioritized communications by continually adjusting the treatment of communication packets according to their priorities and the current level of network congestion. For performance evaluation, we tested Tiny-DWFQ, Tiny-WFQ (the traditional WFQ algorithm implemented in TinyOS), and FIFO queues on an Imote2-based wireless sensor network and report their throughput and packet loss. Our results show that Tiny-DWFQ performs better in all test cases. © 2009 IEEE.
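
    A minimal weighted fair queuing sketch illustrates the virtual-finish-time idea behind WFQ-style schedulers such as Tiny-WFQ. This simplified version omits the system virtual clock and the congestion-driven weight adjustment that makes Tiny-DWFQ dynamic; flow names and weights are illustrative:

    ```python
    import heapq

    class TinyWFQ:
        """WFQ sketch: each packet gets a virtual finish time of the flow's
        previous finish time plus size/weight; dequeuing by smallest finish
        time serves high-weight (high-priority) flows proportionally more often."""
        def __init__(self, weights):
            self.weights = weights                    # flow -> weight
            self.finish = {f: 0.0 for f in weights}   # last finish time per flow
            self.heap = []
            self.seq = 0                              # FIFO tie-breaker
        def enqueue(self, flow, size):
            self.finish[flow] += size / self.weights[flow]
            heapq.heappush(self.heap, (self.finish[flow], self.seq, flow))
            self.seq += 1
        def dequeue(self):
            _, _, flow = heapq.heappop(self.heap)
            return flow

    q = TinyWFQ({"high": 3.0, "low": 1.0})
    for _ in range(3):
        q.enqueue("high", 1.0)
        q.enqueue("low", 1.0)
    order = [q.dequeue() for _ in range(6)]
    print(order)  # the high-priority flow claims most of the early slots
    ```

    Making the scheduler "dynamic" in the Tiny-DWFQ sense would amount to rescaling the weights as measured congestion changes.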

  18. Subdural Fluid Collection and Hydrocephalus After Foramen Magnum Decompression for Chiari Malformation Type I: Management Algorithm of a Rare Complication.

    PubMed

    Rossini, Zefferino; Milani, Davide; Costa, Francesco; Castellani, Carlotta; Lasio, Giovanni; Fornari, Maurizio

    2017-10-01

    Chiari malformation type I is a hindbrain abnormality characterized by descent of the cerebellar tonsils beneath the foramen magnum, frequently associated with symptoms of brainstem compression, impaired cerebrospinal fluid circulation, and syringomyelia. Foramen magnum decompression represents the most common form of treatment. Rarely, subdural fluid collection and hydrocephalus occur as postoperative adverse events. The treatment of this complication is still debated, and physicians are sometimes uncertain when to perform diversion surgery and when to pursue more conservative management. We report an unusual occurrence of subdural fluid collection and hydrocephalus that developed in a 23-year-old patient after foramen magnum decompression for Chiari malformation type I. Following a management protocol based on a step-by-step approach, from conservative therapy to diversion surgery, the patient was managed with urgent external ventricular drainage and then with conservative management and wound revision. Because of the rarity of this adverse event, previous case reports differ about the form of treatment. In future cases, finding clinical and radiologic features to identify risk factors that are useful in predicting whether a patient will benefit from conservative management or will need to undergo diversion surgery is only possible if a uniform form of treatment is used. Therefore, we believe that a management algorithm based on a step-by-step approach will reduce the use of invasive therapies and help to create a standard of care. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.

    2016-01-01

    The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed before implementing IM procedures in real-world operations.

  20. Routing design and fleet allocation optimization of freeway service patrol: Improved results using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Xiuqiao; Wang, Jian

    2018-07-01

    Freeway service patrol (FSP) is considered to be an effective method for incident management and can help transportation agency decision-makers alter existing route coverage and fleet allocation. This paper investigates the FSP problem of patrol routing design and fleet allocation, with the objective of minimizing the overall average incident response time. While the simulated annealing (SA) algorithm and its improvements have been applied to solve this problem, they often become trapped in a local optimum. Moreover, the issue of search efficiency remains to be further addressed. In this paper, we employ the genetic algorithm (GA) and SA to solve the FSP problem. To maintain population diversity and avoid premature convergence, a niche strategy is incorporated into the traditional genetic algorithm. We also employ an elitist strategy to speed up convergence. Numerical experiments have been conducted on the Sioux Falls network. Results show that the GA slightly outperforms the dual-based greedy (DBG) algorithm, the very large-scale neighborhood searching (VLNS) algorithm, the SA algorithm, and the scenario algorithm.

  1. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    PubMed

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been shown to be a promising modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step of any UWB radar imaging system, and the artifact removal algorithms considered to date have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm; these modifications are shown to achieve good localization accuracy and fewer false positives. The main contribution, however, is an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity.

  2. Fuel management optimization using genetic algorithms and code independence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1994-12-31

    Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
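
    The selection-crossover-mutation loop described above can be sketched on a toy bit-string problem (maximising the number of ones); all parameters and the tournament-selection choice are illustrative, not the loading-pattern encoding used in the paper:

    ```python
    import random

    def run_ga(n_bits=20, pop_size=30, generations=40, p_mut=0.02):
        fitness = lambda ind: sum(ind)  # toy objective: count of ones
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(generations):
            def select():
                # Binary tournament: "survival of the fittest".
                a, b = random.sample(pop, 2)
                return a if fitness(a) >= fitness(b) else b
            next_pop = [max(pop, key=fitness)]  # keep the current best
            while len(next_pop) < pop_size:
                p1, p2 = select(), select()
                cut = random.randrange(1, n_bits)      # one-point crossover
                child = p1[:cut] + p2[cut:]
                for i in range(n_bits):                # bit-flip mutation
                    if random.random() < p_mut:
                        child[i] ^= 1
                next_pop.append(child)
            pop = next_pop
        return max(pop, key=fitness)
    ```

    A real fuel-management GA would replace the bit string with a loading-pattern encoding and the toy objective with a core-physics evaluation, but the generational loop is the same.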

  3. Using Natural Language to Enable Mission Managers to Control Multiple Heterogeneous UAVs

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Puig-Navarro, Javier; Mehdi, S. Bilal; Mcquarry, A. Kyle

    2016-01-01

    The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for commercial activities. This research is developing methods beyond classical control-stick pilot inputs, to allow operators to manage complex missions without in-depth vehicle expertise. These missions may entail several heterogeneous UAVs flying coordinated patterns or flying multiple trajectories deconflicted in time or space to predefined locations. This paper describes the functionality and preliminary usability measures of an interface that allows an operator to define a mission using speech inputs. With a defined and simple vocabulary, operators can input the vast majority of mission parameters using simple, intuitive voice commands. Although the operator interface is simple, it is based upon autonomous algorithms that allow the mission to proceed with minimal input from the operator. This paper also describes these underlying algorithms that allow an operator to manage several UAVs.

  4. Video fingerprinting for copy identification: from research to industry applications

    NASA Astrophysics Data System (ADS)

    Lu, Jian

    2009-02-01

    Research that began a decade ago in video copy detection has developed into a technology known as "video fingerprinting". Today, video fingerprinting is an essential and enabling tool adopted by the industry for video content identification and management in online video distribution. This paper provides a comprehensive review of video fingerprinting technology and its applications in identifying, tracking, and managing copyrighted content on the Internet. The review includes a survey on video fingerprinting algorithms and some fundamental design considerations, such as robustness, discriminability, and compactness. It also discusses fingerprint matching algorithms, including complexity analysis, and approximation and optimization for fast fingerprint matching. On the application side, it provides an overview of a number of industry-driven applications that rely on video fingerprinting. Examples are given based on real-world systems and workflows to demonstrate applications in detecting and managing copyrighted content, and in monitoring and tracking video distribution on the Internet.

  5. Dynamic Online Bandwidth Adjustment Scheme Based on Kalai-Smorodinsky Bargaining Solution

    NASA Astrophysics Data System (ADS)

    Kim, Sungwook

    A Virtual Private Network (VPN) is a cost-effective method of providing integrated multimedia services. Usually heterogeneous multimedia data can be categorized into different types according to the required Quality of Service (QoS); therefore, a VPN should support prioritization among different services. In order to support multiple types of services with different QoS requirements, efficient bandwidth management algorithms are important. In this paper, I employ the Kalai-Smorodinsky Bargaining Solution (KSBS) to develop an adaptive bandwidth adjustment algorithm. In addition, to effectively manage the bandwidth in VPNs, the proposed control paradigm is realized in a dynamic online approach, which is practical for real network operations. Simulations show that the proposed scheme can significantly improve system performance.
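
    A Kalai-Smorodinsky allocation moves every user from its disagreement (minimum) bandwidth toward its ideal demand by the same fraction, so all users make equal relative concessions. A minimal sketch of that idea for a single shared link (the function names and bisection tolerance are our own, not from the paper):

    ```python
    def ksbs_allocation(disagreement, ideal, capacity):
        """Kalai-Smorodinsky bargaining: scale every user's gain over its
        disagreement point by the same factor t, the largest value in
        [0, 1] for which the total allocation fits the link capacity."""
        lo, hi = 0.0, 1.0
        for _ in range(60):  # bisection on the common scaling factor t
            t = (lo + hi) / 2
            alloc = [d + t * (i - d) for d, i in zip(disagreement, ideal)]
            if sum(alloc) <= capacity:
                lo = t
            else:
                hi = t
        return [d + lo * (i - d) for d, i in zip(disagreement, ideal)]
    ```

    With disagreement points (0, 0), ideal demands (10, 30) and capacity 20, both users receive exactly half of their ideal demand: equal relative concession is the defining axiom of the KSBS.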

  6. Toward interactive scheduling systems for managing medical resources.

    PubMed

    Oddi, A; Cesta, A

    2000-10-01

    Managers of medico-hospital facilities face two general problems when allocating resources to activities: (1) finding an agreement among several contrasting requirements; (2) managing dynamic and uncertain situations in which constraints suddenly change over time due to medical needs. This paper describes the results of research aimed at applying constraint-based scheduling techniques to the management of medical resources. A mixed-initiative problem-solving approach is adopted in which a user and a decision support system interact to incrementally achieve a satisfactory solution to the problem. A running prototype called Interactive Scheduler is described, which offers a set of functionalities for mixed-initiative medical resource management. Interactive Scheduler is endowed with a representation schema for describing the medical environment, a set of algorithms that address the specific problems of the domain, and an innovative interaction module that supports the dialogue between the support system and its user. A particular contribution of this work is the explicit representation of constraint violations, and the definition of scheduling algorithms that aim at minimizing the amount of constraint violation in a solution.

  7. Hyperacute Simultaneous Cardiocerebral Infarction: Rescuing the Brain or the Heart First?

    PubMed

    Kijpaisalratana, Naruchorn; Chutinet, Aurauma; Suwanwela, Nijasri C

    2017-01-01

    Concurrent acute ischemic stroke and acute myocardial infarction is an uncommon medical emergency. The management challenge for physicians is paramount, since early management of one condition will inevitably delay the other. We present two illustrative cases of "hyperacute simultaneous cardiocerebral infarction" in which the patients arrived at the hospital within the 4.5-h thrombolytic therapeutic window for acute ischemic stroke. We propose an algorithm for managing patients with hyperacute simultaneous cardiocerebral infarction based on hemodynamic status, and suggest close cardiac monitoring based on the site of cerebral infarction.

  8. Management of Major Vascular Injury During Endoscopic Endonasal Skull Base Surgery.

    PubMed

    Gardner, Paul A; Snyderman, Carl H; Fernandez-Miranda, Juan C; Jankowitz, Brian T

    2016-06-01

    A major vascular injury is the most feared complication of endoscopic sinus and skull base surgery. Risk factors for vascular injury are discussed, and an algorithm for management of a major vascular injury is presented. A team of surgeons (otolaryngology and neurosurgery) is important for identification and control of a major vascular injury applying basic principles of vascular control. A variety of techniques can be used to control a major injury, including coagulation, a muscle patch, sacrifice of the artery, and angiographic stenting. Immediate and close angiographic follow-up is critical to prevent and manage subsequent complications of vascular injury.

  9. Hybrid-optimization algorithm for the management of a conjunctive-use project and well field design

    USGS Publications Warehouse

    Chiu, Yung-Chia; Nishikawa, Tracy; Martin, Peter

    2012-01-01

    Hi‐Desert Water District (HDWD), the primary water‐management agency in the Warren Groundwater Basin, California, plans to construct a waste water treatment plant to reduce future septic‐tank effluent from reaching the groundwater system. The treated waste water will be reclaimed by recharging the groundwater basin via recharge ponds as part of a larger conjunctive‐use strategy. HDWD wishes to identify the least‐cost conjunctive‐use strategies for managing imported surface water, reclaimed water, and local groundwater. As formulated, the mixed‐integer nonlinear programming (MINLP) groundwater‐management problem seeks to minimize water‐delivery costs subject to constraints including potential locations of the new pumping wells, California State regulations, groundwater‐level constraints, water‐supply demand, available imported water, and pump/recharge capacities. In this study, a hybrid‐optimization algorithm, which couples a genetic algorithm and successive‐linear programming, is developed to solve the MINLP problem. The algorithm was tested by comparing results to the enumerative solution for a simplified version of the HDWD groundwater‐management problem. The results indicate that the hybrid‐optimization algorithm can identify the global optimum. The hybrid‐optimization algorithm is then applied to solve a complex groundwater‐management problem. Sensitivity analyses were also performed to assess the impact of varying the new recharge pond orientation, varying the mixing ratio of reclaimed water and pumped water, and varying the amount of imported water available. The developed conjunctive management model can provide HDWD water managers with information that will improve their ability to manage their surface water, reclaimed water, and groundwater resources.

  11. Managing and learning with multiple models: Objectives and optimization algorithms

    USGS Publications Warehouse

    Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.

    2011-01-01

    The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management actions in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision-making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of the two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision-making tools for conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision-making tools can be improved.

  12. Special-effect edit detection using VideoTrails: a comparison with existing techniques

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1998-12-01

    Video segmentation plays an integral role in many multimedia applications, such as digital libraries, content management systems, and various other video browsing, indexing, and retrieval systems. Many algorithms for video segmentation have appeared within the past few years. Most of these algorithms perform well on cuts but yield poor performance on gradual transitions or special-effect edits. A complete video segmentation system must also achieve good performance on special-effect edit detection. In this paper, we compare the performance of our VideoTrails-based algorithms with other special-effect edit-detection algorithms in the literature. We present results from experiments testing the ability to detect edits in TV programs, ranging from commercials to news magazine programs, including the diverse special-effect edits we have introduced.

  13. Probabilistic streamflow forecasting for hydroelectricity production: A comparison of two non-parametric system identification algorithms

    NASA Astrophysics Data System (ADS)

    Pande, Saket; Sharma, Ashish

    2014-05-01

    This study is motivated by the need to robustly specify, identify, and forecast runoff generation processes for hydroelectricity production. At a minimum, this requires identifying the significant predictors of runoff generation and the influence of each such predictor on the runoff response. To this end, we compare two non-parametric algorithms for predictor subset selection. One is based on information theory and assesses predictor significance (and hence selection) using the Partial Information (PI) rationale of Sharma and Mehrotra (2014). The other is based on a frequentist approach that uses the bounds-on-probability-of-error concept of Pande (2005), assesses all possible predictor subsets on the fly, and converges to a predictor subset in a computationally efficient manner. Both algorithms approximate the underlying system by locally constant functions and select predictor subsets corresponding to these functions. The performance of the two algorithms is compared on a set of synthetic case studies as well as a real-world case study of inflow forecasting. References: Sharma, A., and R. Mehrotra (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 49, doi:10.1002/2013WR013845. Pande, S. (2005), Generalized local learning in water resource management, PhD dissertation, Utah State University, UT-USA, 148p.

  14. Reasoning by analogy as an aid to heuristic theorem proving.

    NASA Technical Reports Server (NTRS)

    Kling, R. E.

    1972-01-01

    When heuristic problem-solving programs are faced with large data bases that contain numbers of facts far in excess of those needed to solve any particular problem, their performance rapidly deteriorates. In this paper, the correspondence between a new unsolved problem and a previously solved analogous problem is computed and invoked to tailor large data bases to manageable sizes. This paper outlines the design of an algorithm for generating and exploiting analogies between theorems posed to a resolution-logic system. These algorithms are believed to be the first computationally feasible development of reasoning by analogy to be applied to heuristic theorem proving.

  15. Least mean square fourth based microgrid state estimation algorithm using the internet of things technology

    PubMed Central

    2017-01-01

    This paper proposes an innovative internet of things (IoT) based communication framework for monitoring microgrid under the condition of packet dropouts in measurements. First of all, the microgrid incorporating the renewable distributed energy resources is represented by a state-space model. The IoT embedded wireless sensor network is adopted to sense the system states. Afterwards, the information is transmitted to the energy management system using the communication network. Finally, the least mean square fourth algorithm is explored for estimating the system states. The effectiveness of the developed approach is verified through numerical simulations. PMID:28459848
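
    The least mean square fourth (least mean fourth, LMF) filter minimises the fourth power of the estimation error, which yields the stochastic-gradient update w ← w + μe³x. A small sketch under the assumption of a linear measurement model (the step size, data and epoch count are illustrative, not the paper's microgrid model):

    ```python
    import numpy as np

    def lmf_fit(X, d, mu=0.2, epochs=20):
        """Least-mean-fourth adaptive estimate: the cost E[e^4] gives a
        per-sample update proportional to the *cubed* error, which
        reacts strongly to large errors and gently to small ones."""
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for x, target in zip(X, d):
                e = target - x @ w          # instantaneous estimation error
                w += mu * (e ** 3) * x      # LMF stochastic-gradient step
        return w
    ```

    Compared with the familiar LMS update (μex), the cubed-error term makes LMF converge faster at low noise levels, at the cost of stricter step-size requirements for stability.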

  16. The congestion control algorithm based on queue management of each node in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Wei, Yifei; Chang, Lin; Wang, Yali; Wang, Gaoping

    2016-12-01

    This paper proposes an active queue management mechanism that considers a node's own ability and its importance in the network when setting the queue threshold. As the network load increases, local congestion in a mobile ad hoc network may degrade network performance and increase hot nodes' energy consumption, even to the point of failure. If low-energy nodes become congested from forwarding data packets, heavy packet loss will follow when they later act as source nodes. By adjusting the upper and lower limits of the buffer queue, the probability of a node operating in each congestion region is controlled, so nodes can adjust their responsibility for forwarding data packets according to their own situation. The proposed algorithm slows the send rate hop by hop along the data transmission path, from the congested node back toward the source node, to prevent further congestion at the source. The simulation results show that the algorithm can better exploit the forwarding ability of strong nodes, protect weak nodes, and effectively alleviate network congestion.
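
    The threshold idea can be sketched as a RED-style drop-probability function in which the lower and upper queue thresholds scale with a node's ability (e.g. residual energy) and importance; the particular scaling factors below are our own illustrative assumptions, not the paper's formulas:

    ```python
    def drop_probability(queue_len, capacity, ability, importance, p_max=0.1):
        """ability, importance in (0, 1]: stronger / more central nodes
        get higher thresholds and so accept more forwarding load."""
        scale = ability * importance
        min_th = 0.25 * capacity * scale   # below: no congestion, accept all
        max_th = 0.75 * capacity * scale   # above: full congestion, drop all
        if queue_len < min_th:
            return 0.0
        if queue_len >= max_th:
            return 1.0
        # Between the thresholds, drop probabilistically (early congestion
        # signal), ramping linearly from 0 up to p_max.
        return p_max * (queue_len - min_th) / (max_th - min_th)
    ```

    At the same queue length, a weak node reports congestion (and sheds forwarding load) much earlier than a strong one, which matches the paper's goal of protecting low-energy nodes.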

  17. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    NASA Technical Reports Server (NTRS)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of WATCHMAN, an intelligent cache manager for sets retrieved by queries, which is particularly well suited for the data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
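
    The profit metric described above can be sketched as follows. The admission rule (admit only if the candidate's profit exceeds that of every set it would displace) is a simplified reading of the paper's idea, and the field names are hypothetical:

    ```python
    def profit(entry):
        """Profit of a cached retrieved set: reference rate times the
        cost of re-executing the query, normalised by the space used."""
        return entry["ref_rate"] * entry["exec_cost"] / entry["size"]

    def admit(candidate, cache, capacity):
        """Evict lowest-profit sets until the candidate fits, but only
        admit if the candidate out-profits everything it displaces."""
        used = sum(e["size"] for e in cache)
        victims = []
        by_profit = sorted(cache, key=profit)        # cheapest-to-lose first
        while used + candidate["size"] > capacity and by_profit:
            v = by_profit.pop(0)
            victims.append(v)
            used -= v["size"]
        if used + candidate["size"] > capacity:
            return cache, False                      # cannot fit at all
        if victims and profit(candidate) <= max(profit(v) for v in victims):
            return cache, False                      # not worth the eviction
        kept = [e for e in cache if e not in victims]
        return kept + [candidate], True
    ```

    Unlike page-granularity LRU, the unit of replacement here is an entire retrieved set, and recency is replaced by the rate/size/cost trade-off the paper describes.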

  18. Landing-Time-Controlled Management Of Air Traffic

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Tobias, Leonard

    1988-01-01

    Conceptual system controls aircraft with old and new guidance equipment. Report begins with overview of concept, then reviews controller-interactive simulations. Describes fuel-conservative-trajectory algorithm, based on equations of motion for controlling landing time. Finally, presents results of piloted simulations.

  19. Insertion algorithms for network model database management systems

    NASA Astrophysics Data System (ADS)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for management algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems. We develop a new sequential algorithm for the updating operation, and we also suggest a distributed version of the algorithm.

  20. Data quality system using reference dictionaries and edit distance algorithms

    NASA Astrophysics Data System (ADS)

    Karbarz, Radosław; Mulawka, Jan

    2015-09-01

    In the art of management it is important to make smart decisions, which in most cases is not a trivial task. Such decisions may determine production levels, allocate funds for investments, etc. Most of the parameters in the decision-making process, such as interest rates, the value of goods or exchange rates, may change. It is well known that these parameters in decision-making are based on the data contained in data marts or a data warehouse. However, if the information derived from the processed data sets is the basis for the most important management decisions, the data must be accurate, complete and current. In order to achieve high-quality data and to gain measurable business benefits from them, a data quality system should be used. The article describes our approach to the problem, presents the algorithms in detail and shows their usage. Finally, test results are provided; they identify the best algorithms (in terms of quality and quantity) for different parameters and data distributions.
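
    The core of such a system is an edit-distance comparison of incoming values against a reference dictionary: a dirty value is replaced by its closest dictionary entry when the distance is small enough. A minimal sketch (the distance threshold is an illustrative choice):

    ```python
    def edit_distance(a, b):
        """Levenshtein distance via dynamic programming (two rows)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,               # deletion
                               cur[j - 1] + 1,            # insertion
                               prev[j - 1] + (ca != cb))) # substitution
            prev = cur
        return prev[-1]

    def clean_value(value, dictionary, max_dist=2):
        """Map a dirty value to its nearest reference entry, but only
        if that entry is within max_dist edits; otherwise keep it."""
        best = min(dictionary, key=lambda ref: edit_distance(value, ref))
        return best if edit_distance(value, best) <= max_dist else value
    ```

    The threshold prevents aggressive "corrections": a value far from every dictionary entry is left untouched and can be flagged for manual review instead.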

  1. GIS embedded hydrological modeling: the SID&GRID project

    NASA Astrophysics Data System (ADS)

    Borsi, I.; Rossetto, R.; Schifani, C.

    2012-04-01

    The SID&GRID research project, started April 2010 and funded by Regione Toscana (Italy) under the POR FSE 2007-2013, aims to develop a Decision Support System (DSS) for water resource management and planning based on open source and public domain solutions. In order to quantitatively assess water availability in space and time and to support the planning decision processes, the SID&GRID solution consists of hydrological models (coupling 3D existing and newly developed surface- and ground-water and unsaturated zone modeling codes) embedded in a GIS interface, applications and library, where all the input and output data are managed by means of a DataBase Management System (DBMS). A graphical user interface (GUI) to manage, analyze and run the SID&GRID hydrological models, based on the open source gvSIG GIS framework (Asociación gvSIG, 2011), and a Spatial Data Infrastructure to share and interoperate with distributed geographical data are being developed. Such a GUI is conceived as a "master control panel" able to guide the user through pre-processing spatial and temporal data, running the hydrological models, and analyzing the outputs. To achieve the above-mentioned goals, the following codes have been selected and are being integrated: 1. PostgreSQL/PostGIS (PostGIS, 2011) for the geo-database management system; 2. gvSIG with the Sextante (Olaya, 2011) geo-algorithm library capabilities and GRASS tools (GRASS Development Team, 2011) for the desktop GIS; 3. GeoServer and GeoNetwork to share and discover spatial data on the web according to Open Geospatial Consortium standards; 4. new tools based on the Sextante geo-algorithm framework; 5. the MODFLOW-2005 (Harbaugh, 2005) groundwater modeling code; 6. MODFLOW-LGR (Mehl and Hill, 2005) for local grid refinement; 7. VSF (Thoms et al., 2006) for the variably saturated flow component; 8. newly developed routines for overland flow; 9. new algorithms in Jython, integrated in gvSIG, to compute the net rainfall rate reaching the soil surface as input for the unsaturated/saturated flow model. At this stage of the research (which will end April 2013), two primary components of the master control panel are being developed: i. a SID&GRID toolbar integrated into the gvSIG map context; ii. a new Sextante set of geo-algorithms to pre- and post-process the spatial data and run the hydrological models. The groundwater part of the code has been fully integrated and tested, and 3D visualization tools are being developed. The LGR capability has been extended to the 3D solution of the Richards' equation in order to solve the unsaturated zone in detail where required. To stay updated about the project, please follow us at the website: http://ut11.isti.cnr.it/SIDGRID/

  2. Development of a meta-algorithm for guiding primary care encounters for patients with multimorbidity using evidence-based and case-based guideline development methodology.

    PubMed

    Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin

    2017-06-22

    The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology was applied in the primary care setting, including systematic evidence retrieval (guideline synopses), expert opinions, and informal and formal consensus procedures. The meta-algorithm was developed in six steps: (1) designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop; (2) preparing, based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines, with recommendations prioritised according to the clinical and psychosocial characteristics of the case vignettes; (3) validating the case vignettes, along with the respective guideline recommendations, through specific comments from an external panel of practicing general practitioners (GPs); (4) summarising guideline recommendations and experts' opinions as case-specific management recommendations (N-of-one guidelines); (5) eliciting the healthcare preferences of patients with multimorbidity from a systematic literature review, supplemented with information from qualitative interviews; (6) analysing all N-of-one guidelines using pattern recognition to identify common decision nodes and care elements, which were put together to form a generic meta-algorithm. The resulting meta-algorithm reflects the logic of a GP's encounter with a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and thereby offer guidance to the practitioner. Contrary to simple, symptom-oriented algorithms, the meta-algorithm illustrates a superordinate process that permanently keeps the entire patient in view. The meta-algorithm represents the backbone of the multimorbidity guideline of the German College of General Practitioners and Family Physicians. This article presents solely the development phase; the meta-algorithm needs to be piloted before it can be implemented.

  3. Recommendation System Based On Association Rules For Distributed E-Learning Management Systems

    NASA Astrophysics Data System (ADS)

    Mihai, Gabroveanu

    2015-09-01

    Traditional Learning Management Systems are installed on a single server where learning materials and user data are kept. To increase performance, a Learning Management System can be installed on multiple servers; learning materials and user data can then be distributed across these servers, yielding a Distributed Learning Management System. In this paper, a prototype of a recommendation system based on association rules for a Distributed Learning Management System is proposed. Information from the LMS databases is analyzed using distributed data mining algorithms in order to extract association rules. The extracted rules are then used as inference rules to provide personalized recommendations. The quality of the recommendations is improved because the rules used to make the inferences are more accurate, since they aggregate knowledge from all the e-Learning systems included in the Distributed Learning Management System.
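
    The rule-based recommendation step can be sketched as follows: the confidence of a rule A→B is support(A∪B)/support(A), and items implied with high confidence by what a learner has already completed are recommended. The toy transactions and threshold below are illustrative; a real deployment would mine the rules with a distributed algorithm such as Apriori across the LMS nodes:

    ```python
    def confidence(transactions, antecedent, consequent):
        """Confidence of the rule antecedent -> consequent over a list
        of transactions (each transaction is a set of items)."""
        ante = [t for t in transactions if antecedent <= t]
        if not ante:
            return 0.0
        return sum(1 for t in ante if consequent <= t) / len(ante)

    def recommend(transactions, profile, candidates, min_conf=0.6):
        """Recommend candidate items implied by the learner's profile."""
        return [c for c in candidates
                if c not in profile
                and confidence(transactions, profile, {c}) >= min_conf]
    ```

    In a distributed setting, each server would contribute its local support counts, and the aggregated counts yield the more accurate global rules the abstract refers to.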

  4. Implementation of Digital Signature Using AES and RSA Algorithms as Security in a Letter Disposition System

    NASA Astrophysics Data System (ADS)

    Siregar, H.; Junaeti, E.; Hayatno, T.

    2017-03-01

    Correspondence is frequently used by agencies and companies, so institutions often set up a special division to handle letter management. Because most letters are distributed through electronic media, they should be kept confidential in order to avoid undesirable outcomes. Security can be provided by using cryptography or by attaching a digital signature. In this study, asymmetric and symmetric algorithms, i.e. the RSA and AES algorithms, were added to the digital signature to maintain data security. The RSA algorithm was used during the process of generating the digital signature, while the AES algorithm was used to encrypt the message sent to the receiver. Based on the research, it can be concluded that adding the AES and RSA algorithms to the digital signature meets four objectives of cryptography: secrecy, data integrity, authentication and non-repudiation.
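
The sign-then-encrypt flow can be illustrated with deliberately toy parameters. The tiny RSA modulus and the repeating-key XOR stream (standing in for AES so the sketch stays dependency-free) offer no real security and are not the paper's implementation; they only show the shape of the protocol.

```python
import hashlib

# Toy RSA parameters for illustration only (real keys are >= 2048 bits).
P, Q = 61, 53
N = P * Q                # 3233
PHI = (P - 1) * (Q - 1)  # 3120
E = 17
D = pow(E, -1, PHI)      # modular inverse of E, here 2753

def sign(message: bytes) -> int:
    """RSA-style signature over a SHA-256 digest of the letter."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(h, D, N)

def verify(message: bytes, sig: int) -> bool:
    """Recompute the digest and check it against the decrypted signature."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(sig, E, N) == h

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in for AES: symmetric, same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

The sender signs the plaintext with the RSA private exponent and encrypts it with the shared symmetric key; the receiver decrypts and then verifies, covering confidentiality, integrity, authentication and non-repudiation in one flow.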

  5. Derived crop management data for the LandCarbon Project

    USGS Publications Warehouse

    Schmidt, Gail; Liu, Shu-Guang; Oeding, Jennifer

    2011-01-01

    The LandCarbon project is assessing potential carbon pools and greenhouse gas fluxes under various scenarios and land management regimes to provide information to support the formulation of policies governing climate change mitigation, adaptation and land management strategies. The project is unique in that spatially explicit maps of annual land cover and land-use change are created at the 250-meter pixel resolution. The project uses vast amounts of data as input to the models, including satellite, climate, land cover, soil, and land management data. Management data have been obtained from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS) and USDA Economic Research Service (ERS), which provide information regarding crop type, crop harvesting, manure, fertilizer, tillage, and cover crop (U.S. Department of Agriculture, 2011a, b, c). The LandCarbon team queried the USDA databases to pull historic crop-related management data relative to the needs of the project. The data obtained were in tabular form with the County or State Federal Information Processing Standard (FIPS) code and the year as the primary and secondary keys. Future projections were generated for the A1B, A2, B1, and B2 Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) scenarios using the historic data values along with coefficients generated by the project. The PBL Netherlands Environmental Assessment Agency (PBL) Integrated Model to Assess the Global Environment (IMAGE) modeling framework (Integrated Model to Assess the Global Environment, 2006) was used to develop coefficients for each IPCC SRES scenario, which were applied to the historic management data to produce future land management practice projections. The LandCarbon project developed algorithms for deriving gridded data, using these tabular management data products as input. 
The derived gridded crop type, crop harvesting, manure, fertilizer, tillage, and cover crop products are used as input to the LandCarbon models to represent the historic and the future scenario management data. The overall algorithm to generate each of the gridded management products is based on the land cover and the derived crop type. For each year in the land cover dataset, the algorithm loops through each 250-meter pixel in the ecoregion. If the current pixel in the land cover dataset is an agriculture pixel, then the crop type is determined. Once the crop type is derived, then the crop harvest, manure, fertilizer, tillage, and cover crop values are derived independently for that crop type. The following is the overall algorithm used for the set of derived grids. The specific algorithm to generate each management dataset is discussed in the respective section for that dataset, along with special data handling and a description of the output product.
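
The overall per-pixel loop can be sketched as follows. The land-cover codes and the management lookup values are invented placeholders for the project's USDA-derived tables; a real implementation would operate on 250-meter rasters per ecoregion and derive each product from its own tabular source.

```python
# Hypothetical lookup tables standing in for the USDA-derived county/year data.
CROP_TYPE = {1: "corn", 2: "wheat"}          # land-cover code -> crop type
FERTILIZER = {"corn": 150.0, "wheat": 60.0}  # kg/ha, illustrative values
TILLAGE = {"corn": "conventional", "wheat": "no-till"}
AG_CODES = set(CROP_TYPE)

def derive_grids(land_cover):
    """Loop over every pixel; if it is agriculture, derive the crop type,
    then derive each management value independently for that crop type.
    Non-agriculture pixels get a no-data value."""
    fert, till = [], []
    for row in land_cover:
        f_row, t_row = [], []
        for code in row:
            if code in AG_CODES:
                crop = CROP_TYPE[code]
                f_row.append(FERTILIZER[crop])
                t_row.append(TILLAGE[crop])
            else:
                f_row.append(None)
                t_row.append(None)
        fert.append(f_row)
        till.append(t_row)
    return fert, till
```

The same skeleton extends to the harvest, manure, and cover-crop grids: each is an independent lookup keyed on the derived crop type.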

  6. Dysphagia in Duchenne muscular dystrophy: practical recommendations to guide management.

    PubMed

    Toussaint, Michel; Davidson, Zoe; Bouvoie, Veronique; Evenepoel, Nathalie; Haan, Jurn; Soudon, Philippe

    2016-10-01

    Duchenne muscular dystrophy (DMD) is a rapidly progressive neuromuscular disorder causing weakness of the skeletal, respiratory, cardiac and oropharyngeal muscles, with up to one third of young men reporting difficulty swallowing (dysphagia). Recent studies on dysphagia in DMD clarify the pathophysiology of swallowing disorders and offer new tools for its assessment, but little guidance is available for its management. This paper aims to provide a step-by-step algorithm to facilitate clinical decisions regarding dysphagia management in this patient population. This algorithm is based on 30 years of clinical experience with DMD in a specialised Centre for Neuromuscular Disorders (Inkendaal Rehabilitation Hospital, Belgium) and is supported by literature where available. Dysphagia can worsen the condition of ageing patients with DMD. Apart from the difficulties of chewing and oral fragmentation of the food bolus, dysphagia is rather a consequence of an impairment in the pharyngeal phase of swallowing. In contrast with central neurologic disorders, dysphagia in DMD accompanies solid rather than liquid intake. Symptoms of dysphagia may not be clinically evident; however, laryngeal food penetration, accumulation of food residue in the pharynx and/or true laryngeal food aspiration may occur. The prevalence of these issues in DMD is likely underestimated. There is little guidance available for clinicians to manage dysphagia and improve feeding for young men with DMD. This report aims to provide a clinical algorithm to facilitate the diagnosis of dysphagia, to identify the symptoms and to propose practical recommendations to treat dysphagia in the adult DMD population. Implications for Rehabilitation: Little guidance is available for the management of dysphagia in Duchenne dystrophy. Food can penetrate the vestibule, accumulate as residue or cause aspiration. We propose recommendations and an algorithm to guide management of dysphagia. 
Penetration/residue accumulation: prohibit solid food and promote intake of fluids. Aspiration: if cough augmentation techniques are ineffective, consider tracheostomy.

  7. Dysphagia in Duchenne muscular dystrophy: practical recommendations to guide management

    PubMed Central

    Toussaint, Michel; Davidson, Zoe; Bouvoie, Veronique; Evenepoel, Nathalie; Haan, Jurn; Soudon, Philippe

    2016-01-01

    Abstract Purpose: Duchenne muscular dystrophy (DMD) is a rapidly progressive neuromuscular disorder causing weakness of the skeletal, respiratory, cardiac and oropharyngeal muscles with up to one third of young men reporting difficulty swallowing (dysphagia). Recent studies on dysphagia in DMD clarify the pathophysiology of swallowing disorders and offer new tools for its assessment but little guidance is available for its management. This paper aims to provide a step-by-step algorithm to facilitate clinical decisions regarding dysphagia management in this patient population. Methods: This algorithm is based on 30 years of clinical experience with DMD in a specialised Centre for Neuromuscular Disorders (Inkendaal Rehabilitation Hospital, Belgium) and is supported by literature where available. Results: Dysphagia can worsen the condition of ageing patients with DMD. Apart from the difficulties of chewing and oral fragmentation of the food bolus, dysphagia is rather a consequence of an impairment in the pharyngeal phase of swallowing. By contrast with central neurologic disorders, dysphagia in DMD accompanies solid rather than liquid intake. Symptoms of dysphagia may not be clinically evident; however laryngeal food penetration, accumulation of food residue in the pharynx and/or true laryngeal food aspiration may occur. The prevalence of these issues in DMD is likely underestimated. Conclusions: There is little guidance available for clinicians to manage dysphagia and improve feeding for young men with DMD. 
This report aims to provide a clinical algorithm to facilitate the diagnosis of dysphagia, to identify the symptoms and to propose practical recommendations to treat dysphagia in the adult DMD population. Implications for Rehabilitation: Little guidance is available for the management of dysphagia in Duchenne dystrophy. Food can penetrate the vestibule, accumulate as residue or cause aspiration. We propose recommendations and an algorithm to guide management of dysphagia. Penetration/residue accumulation: prohibit solid food and promote intake of fluids. Aspiration: if cough augmentation techniques are ineffective, consider tracheostomy. PMID:26728920

  8. Infliximab-Related Infusion Reactions: Systematic Review

    PubMed Central

    Ron, Yulia; Kivity, Shmuel; Ben-Horin, Shomron; Israeli, Eran; Fraser, Gerald M.; Dotan, Iris; Chowers, Yehuda; Confino-Cohen, Ronit; Weiss, Batia

    2015-01-01

    Objective: Administration of infliximab is associated with a well-recognised risk of infusion reactions. Lack of a mechanism-based rationale for their prevention, and the absence of adequate and well-controlled studies, have led to the use of diverse empirical administration protocols. The aim of this study is to perform a systematic review of the evidence behind the strategies for preventing infusion reactions to infliximab, and for controlling the reactions once they occur. Methods: We conducted an extensive search of the MEDLINE [PubMed] electronic database for reports that communicate various aspects of infusion reactions to infliximab in IBD patients. Results: We examined full texts of 105 potentially eligible articles. No randomised controlled trials that pre-defined infusion reaction as a primary outcome were found. Three RCTs evaluated infusion reactions as a secondary outcome; another four RCTs included infusion reactions in the safety evaluation analysis; and 62 additional studies focused on various aspects of mechanism/s, risk, primary and secondary preventive measures, and management algorithms. Seven studies were added by a manual search of reference lists of the relevant articles. A total of 76 original studies were included in the quantitative analysis of the existing strategies. Conclusions: There is still a paucity of systematic and controlled data on the risk, prevention, and management of infusion reactions to infliximab. We present working algorithms based on a systematic and extensive review of the available data. More randomised controlled trials are needed in order to investigate the efficacy of the proposed preventive and management algorithms. PMID:26092578

  9. Corpus-based Customization for an Ontology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2010-09-14

    CCAT scans a corpus of text for terms, and computes lexical similarity between corpus terms and taxonomy terms. Based on a set of metrics and a learning algorithm, the system inserts corpus terms into the taxonomy. Conversely, terms from the taxonomy are disambiguated based on the text in the corpus. Unused terms are discarded, and infrequently used senses of terms are collapsed to make the taxonomy more manageable.
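
One simple way to realize the term-matching step is character-trigram lexical similarity. The metric, threshold, and data below are assumptions for illustration, not CCAT's actual metrics or learning algorithm.

```python
def trigrams(term):
    """Character trigrams of a padded, lower-cased term."""
    t = f"  {term.lower()} "
    return {t[i:i + 3] for i in range(len(t) - 2)}

def similarity(a, b):
    """Jaccard similarity over trigram sets: |A & B| / |A | B|."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

def attach_terms(corpus_terms, taxonomy, threshold=0.4):
    """Insert each corpus term under its most lexically similar taxonomy
    node, discarding terms whose best match is below the threshold."""
    placed = {}
    for term in corpus_terms:
        node, score = max(((n, similarity(term, n)) for n in taxonomy),
                          key=lambda p: p[1])
        if score >= threshold:
            placed.setdefault(node, []).append(term)
    return placed
```

Discarding sub-threshold terms mirrors the record's pruning of unused terms to keep the taxonomy manageable.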

  10. Planning the FUSE Mission Using the SOVA Algorithm

    NASA Technical Reports Server (NTRS)

    Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly

    2011-01-01

    Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude-control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives as defined by use of fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.
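
The record says SOVA uses a variant of A*. The textbook form of that search is small enough to sketch; the graph, costs, and zero heuristic below are invented toy data, not the FUSE scheduler.

```python
import heapq

def a_star(graph, h, start, goal):
    """Textbook A*: expand the frontier node minimizing f = g + h, where g
    is the cost so far and h an admissible estimate of the cost to goal."""
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in graph.get(node, {}).items():
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None, float("inf")
```

In SOVA's variant, the role of the cost/heuristic is played by expected scientific return and fuzzy attainability assessments rather than simple edge weights.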

  11. LUMIS: Land Use Management and Information Systems; coordinate oriented program documentation

    NASA Technical Reports Server (NTRS)

    1976-01-01

    An integrated geographic information system to assist program managers and planning groups in metropolitan regions is presented. The series of computer software programs and procedures involved in data base construction uses the census DIME file and point-in-polygon architectures. The system is described in two parts: (1) instructions to operators with regard to digitizing and editing procedures, and (2) application of data base construction algorithms to achieve map registration, assure the topological integrity of polygon files, and tabulate land use acreages within administrative districts.
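
The point-in-polygon architecture mentioned above rests on a standard crossing-number test: cast a ray from the point and count how many polygon edges it crosses. A minimal sketch with illustrative coordinates (not the LUMIS implementation):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: an odd number of crossings of a horizontal ray
    from (x, y) means the point lies inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

Tests like this let a data-base construction pipeline assign point records (e.g. parcels) to administrative districts and verify the topological integrity of polygon files.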

  12. Dispatch Strategy Development for Grid-tied Household Energy Systems

    NASA Astrophysics Data System (ADS)

    Cardwell, Joseph

    The prevalence of renewable generation will increase in the next several decades and offset conventional generation more and more. Yet this increase is not coming without challenges. Solar, wind, and even some water resources are intermittent and unpredictable, and thereby create scheduling challenges due to their inherent "uncontrolled" nature. To effectively manage these distributed renewable assets, new control algorithms must be developed for applications including energy management, bridge power, and system stability. This can be completed through a centralized control center, though efforts are being made to parallel the control architecture with the organization of the renewable assets themselves--namely, distributed controls. Building energy management systems are being employed to control localized energy generation, storage, and use to reduce disruption on the net utility load. One such example is VOLTTRON™, an agent-based platform for building energy control in real time. In this thesis, algorithms developed in VOLTTRON simulate a home energy management system that consists of a solar PV array, a lithium-ion battery bank, and the grid. Dispatch strategies are implemented to reduce energy charges from overall consumption ($/kWh) and demand charges ($/kW). Dispatch strategies for implementing storage devices are tuned on a month-to-month basis to provide a meaningful economic advantage under simulated scenarios to explore algorithm sensitivity to changing external factors. VOLTTRON agents provide automated real-time optimization of dispatch strategies to efficiently manage energy supply and demand, lower consumer costs associated with energy usage, and reduce load on the utility grid.
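
A demand-charge-reducing dispatch can be sketched as simple peak shaving: discharge the battery when household load exceeds a threshold, recharge when below it. The threshold policy, names, and numbers are illustrative assumptions, not the thesis's VOLTTRON agent logic.

```python
def dispatch(load_kw, soc_kwh, capacity_kwh, threshold_kw, max_rate_kw, dt_h=1.0):
    """Peak-shaving sketch: returns the grid draw per interval and the
    final battery state of charge. Capping grid draw near the threshold
    is what reduces the demand charge ($/kW on the monthly peak)."""
    grid = []
    for load in load_kw:
        if load > threshold_kw and soc_kwh > 0:
            # Shave the peak: cover the excess from the battery.
            discharge = min(load - threshold_kw, max_rate_kw, soc_kwh / dt_h)
            soc_kwh -= discharge * dt_h
            grid.append(load - discharge)
        elif load < threshold_kw and soc_kwh < capacity_kwh:
            # Valley-fill: recharge without exceeding the threshold.
            charge = min(threshold_kw - load, max_rate_kw,
                         (capacity_kwh - soc_kwh) / dt_h)
            soc_kwh += charge * dt_h
            grid.append(load + charge)
        else:
            grid.append(load)
    return grid, soc_kwh
```

Tuning `threshold_kw` month to month is the kind of adjustment the thesis describes for keeping an economic advantage as external factors change.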

  13. Medial elbow injury in young throwing athletes

    PubMed Central

    Gregory, Bonnie; Nyland, John

    2013-01-01

    Summary This report reviews the anatomy, overhead throwing biomechanics, injury mechanism and incidence, physical examination and diagnosis, diagnostic imaging and conservative treatment of medial elbow injuries in young throwing athletes. Based on the information a clinical management decision-making algorithm is presented. PMID:23888291

  14. Combining spatial and spectral information to improve crop/weed discrimination algorithms

    NASA Astrophysics Data System (ADS)

    Yan, L.; Jones, G.; Villette, S.; Paoli, J. N.; Gée, C.

    2012-01-01

    Reduction of herbicide spraying is an important key to improving weed management both environmentally and economically. To achieve this, remote sensors such as imaging systems are commonly used to detect weed plants. We developed spatial algorithms that detect the crop rows to discriminate crop from weeds. These algorithms have been thoroughly tested and provide robust and accurate results without a learning process, but their detection is limited to inter-row areas. Crop/weed discrimination using spectral information is able to detect intra-row weeds but generally needs a prior learning process. We propose a method based on spatial and spectral information to enhance the discrimination and overcome the limitations of both algorithms. The classification from the spatial algorithm is used to build the training set for the spectral discrimination method. With this approach we are able to extend weed detection to the entire field (inter- and intra-row). To test the efficiency of these algorithms, a relevant database of virtual images generated by the SimAField model has been used, combined with the LOPEX93 spectral database. The developed method is evaluated and compared with the initial method in this paper and shows an important enhancement in weed detection, from 86% to more than 95%.
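
The bootstrapping idea, using labels from the spatial (row-detection) step to train a spectral classifier, can be sketched with a nearest-centroid classifier. The classifier choice and the two-band spectra below are illustrative assumptions, not the paper's discrimination method.

```python
def nearest_centroid_fit(spectra, labels):
    """Train: average the spectra of pixels labelled crop/weed by the
    spatial row-detection step (the automatically built training set)."""
    sums, counts = {}, {}
    for spectrum, label in zip(spectra, labels):
        s = sums.setdefault(label, [0.0] * len(spectrum))
        for i, v in enumerate(spectrum):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def nearest_centroid_predict(centroids, spectrum):
    """Classify an intra-row pixel by its closest class centroid
    (squared Euclidean distance in spectral space)."""
    def dist(lab):
        return sum((a - b) ** 2 for a, b in zip(centroids[lab], spectrum))
    return min(centroids, key=dist)
```

The spatial step supplies the labels only in inter-row areas; once trained, the spectral classifier extends detection to intra-row pixels the spatial method cannot reach.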

  15. Programming Deep Brain Stimulation for Parkinson's Disease: The Toronto Western Hospital Algorithms.

    PubMed

    Picillo, Marina; Lozano, Andres M; Kou, Nancy; Puppi Munhoz, Renato; Fasano, Alfonso

    2016-01-01

    Deep brain stimulation (DBS) is an established and effective treatment for Parkinson's disease (PD). After surgery, a number of extensive programming sessions are performed to define the optimal stimulation parameters. Programming sessions rely mainly on the neurologist's experience. As a result, patients often undergo inconsistent and inefficient stimulation changes, as well as unnecessary visits. We reviewed the literature on initial and follow-up DBS programming procedures and integrated our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We propose four algorithms covering the initial programming and specific algorithms tailored to symptoms experienced by patients following DBS: speech disturbances, stimulation-induced dyskinesia and gait impairment. We conducted a literature search of PubMed from inception to July 2014 with the keywords "deep brain stimulation", "festination", "freezing", "initial programming", "Parkinson's disease", "postural instability", "speech disturbances", and "stimulation induced dyskinesia". Seventy papers were considered for this review. Based on the literature review and our experience at TWH, we refined four algorithms for: (1) the initial programming stage, and management of symptoms following DBS, particularly addressing (2) speech disturbances, (3) stimulation-induced dyskinesia, and (4) gait impairment. We propose four algorithms tailored to an individualized approach to managing symptoms associated with DBS and disease progression in patients with PD. We encourage established as well as new DBS centers to test the clinical usefulness of these algorithms in supplementing the current standards of care. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Predicting hospitalization due to worsening heart failure using daily weight measurement: analysis of the Trans-European Network-Home-Care Management System (TEN-HMS) study.

    PubMed

    Zhang, Jufen; Goode, Kevin M; Cuddihy, Paul E; Cleland, John G F

    2009-04-01

    We sought to test the utility of weight gain algorithms to predict episodes of worsening heart failure (WHF) using home-telemonitoring data collected as part of the TEN-HMS study. Simple rule-of-thumb (RoT) algorithms (i.e. 3 lbs in 1 day and 5 lbs in 3 days) and a moving average convergence divergence (MACD) algorithm were compared. WHF was defined as hospitalization for WHF or worsening of breathlessness or leg oedema. Of 168 patients, 45 were hospitalized with WHF and 76 were hospitalized for other reasons. On average, weight gain occurred in the 14 days prior to WHF hospitalizations but not in the 14 days prior to non-WHF hospitalizations [1.9 +/- 4.7 lbs (0.9 +/- 2.1 kg) vs. -0.4 +/- 2.5 lbs (-0.2 +/- 1.1 kg), P < 0.0001]. The true alert rate was higher for the RoT algorithms compared with the MACD (58 and 65% vs. 20%). However, the RoT algorithms had much higher false alert rates (54 and 58% vs. 9%), rendering them of little practical use for predicting WHF events. A MACD algorithm is more specific but less sensitive than RoT when trying to predict episodes of WHF based on daily weight measurements. However, many episodes of WHF do not appear to be associated with weight gain, and therefore telemonitoring of weight alone may not have great value for heart failure management.
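
The rule-of-thumb alerts are easy to state in code, and a MACD-style signal is just the difference of two exponential moving averages. Interpreting the thresholds as gains of at least 3 lbs over 1 day or 5 lbs over 3 days is an assumption, as are the EMA spans; the study's exact MACD parameters are not given here.

```python
def rot_alerts(weights_lbs):
    """Rule-of-thumb alerts on a daily weight series: flag day i when the
    1-day gain is >= 3 lbs or the 3-day gain is >= 5 lbs."""
    alerts = []
    for i in range(1, len(weights_lbs)):
        one_day = weights_lbs[i] - weights_lbs[i - 1]
        three_day = weights_lbs[i] - weights_lbs[i - 3] if i >= 3 else 0.0
        if one_day >= 3 or three_day >= 5:
            alerts.append(i)
    return alerts

def ema(series, span):
    """Exponential moving average with smoothing 2 / (span + 1)."""
    alpha = 2.0 / (span + 1)
    out = [series[0]]
    for value in series[1:]:
        out.append(alpha * value + (1 - alpha) * out[-1])
    return out

def macd_signal(series, fast=5, slow=15):
    """MACD-style trend signal: fast EMA minus slow EMA of daily weight."""
    return [f - s for f, s in zip(ema(series, fast), ema(series, slow))]
```

Because the MACD compares smoothed trends rather than raw day-to-day jumps, it fires on sustained drift and ignores single-day noise, which matches the study's finding that it is more specific but less sensitive than the rules of thumb.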

  17. Integrated G and C Implementation within IDOS: A Simulink Based Reusable Launch Vehicle Simulation

    NASA Technical Reports Server (NTRS)

    Fisher, Joseph E.; Bevacqua, Tim; Lawrence, Douglas A.; Zhu, J. Jim; Mahoney, Michael

    2003-01-01

    The implementation of multiple Integrated Guidance and Control (IG&C) algorithms per flight phase within a vehicle simulation poses a daunting task to coordinate algorithm interactions with the other G&C components and with vehicle subsystems. Currently being developed by Universal Space Lines LLC (USL) under contract from NASA, the Integrated Development and Operations System (IDOS) contains a high fidelity Simulink vehicle simulation, which provides a means to test cutting edge G&C technologies. Combining the modularity of this vehicle simulation and Simulink's built-in primitive blocks provides a quick way to implement algorithms. To add discrete-event functionality to the unfinished IDOS simulation, Vehicle Event Manager (VEM) and Integrated Vehicle Health Monitoring (IVHM) subsystems were created to provide discrete-event and pseudo-health monitoring processing capabilities. Matlab's Stateflow is used to create the IVHM and Event Manager subsystems and to implement a supervisory logic controller referred to as the Auto-commander as part of the IG&C to coordinate the control system adaptation and reconfiguration and to select the control and guidance algorithms for a given flight phase. Manual creation of the Stateflow charts for all of these subsystems is a tedious and time-consuming process. The Stateflow Auto-builder was developed as a Matlab-based software tool for the automatic generation of a Stateflow chart from information contained in a database. This paper describes the IG&C, VEM and IVHM implementations in IDOS. In addition, this paper describes the Stateflow Auto-builder.

  18. Intelligent agent-based intrusion detection system using enhanced multiclass SVM.

    PubMed

    Ganapathy, S; Yogesh, P; Kannan, A

    2012-01-01

    Intrusion detection systems have been used in the past along with various techniques to detect intrusions in networks effectively. However, most of these systems are able to detect intruders only with a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely, an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm, are proposed for detecting the intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set.

  19. Intelligent Agent-Based Intrusion Detection System Using Enhanced Multiclass SVM

    PubMed Central

    Ganapathy, S.; Yogesh, P.; Kannan, A.

    2012-01-01

    Intrusion detection systems have been used in the past along with various techniques to detect intrusions in networks effectively. However, most of these systems are able to detect intruders only with a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely, an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm, are proposed for detecting the intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set. PMID:23056036

  20. Nuclear fuel management optimization using genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-07-01

    The code independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning of cycle k{sub eff} for a pressurized water reactor core loading with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k{sub eff} after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.
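
A GA with a power-peaking penalty can be sketched as follows. Everything here is a toy stand-in: the real-valued encoding, the operators, and the `keff_of`/`peak_of` callables are assumptions; in CIGARO the evaluation would instead call an external reactor physics code through the code-independent interface.

```python
import random

def fitness(pattern, keff_of, peak_of, peak_limit=1.5, penalty=10.0):
    """Penalized objective: maximize k_eff, subtracting a penalty term
    when the power peaking factor exceeds its limit."""
    over = max(0.0, peak_of(pattern) - peak_limit)
    return keff_of(pattern) - penalty * over

def genetic_search(n_genes, keff_of, peak_of, pop=20, gens=40, seed=1):
    """Elitist GA: keep the top half, refill with crossover + mutation."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(n_genes)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, keff_of, peak_of), reverse=True)
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_genes)            # point mutation, clamped
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        population = parents + children
    return max(population, key=lambda p: fitness(p, keff_of, peak_of))
```

The code-independent design amounts to swapping the two callables for functions that write an input deck, run the physics code, and parse k_eff and the peaking factor from its output.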

  1. Modeling of biological intelligence for SCM system optimization.

    PubMed

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for the modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune systems, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open and self-organizing, maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, covering the most widely used genetic algorithms and other evolutionary algorithms.

  2. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    PubMed

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.
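
One plausible heuristic of the kind the record describes is longest-task-first placement onto the VM that currently finishes earliest. This is a generic makespan-oriented sketch, not necessarily one of the paper's four algorithms, which also weigh cost and the price/performance ratio.

```python
def schedule(tasks, n_vms):
    """Greedy heuristic: sort tasks by runtime (longest first) and place
    each on the VM with the earliest current finish time. Returns a
    task -> VM placement and the resulting makespan."""
    finish = [0.0] * n_vms
    placement = {}
    for name, runtime in sorted(tasks.items(), key=lambda kv: -kv[1]):
        vm = min(range(n_vms), key=lambda i: finish[i])
        finish[vm] += runtime
        placement[name] = vm
    return placement, max(finish)
```

A cost-aware variant would score each candidate VM by a weighted sum of finish time and price per hour instead of finish time alone.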

  3. Modeling of Biological Intelligence for SCM System Optimization

    PubMed Central

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724

  4. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-01

    For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm without considering the residual energy and situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete for the event management node with the number of candidate nodes and the average residual energy, as well as the distance to the event. Second, each management node estimates the probability of its neighbor nodes’ being selected by the event it manages with the distance level, the residual energy level, and the number of dynamic coverage event of these nodes. Third, each management node establishes an optimization model that uses expectation energy consumption and the residual energy variance of its neighbor nodes and detects the performance of the events it manages as targets. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and the best strategy via technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and a situation in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network’s best service quality and lifetime. PMID:28106837

  5. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks.

    PubMed

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-19

    For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm without considering the residual energy and situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete for the event management node with the number of candidate nodes and the average residual energy, as well as the distance to the event. Second, each management node estimates the probability of its neighbor nodes' being selected by the event it manages with the distance level, the residual energy level, and the number of dynamic coverage event of these nodes. Third, each management node establishes an optimization model that uses expectation energy consumption and the residual energy variance of its neighbor nodes and detects the performance of the events it manages as targets. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and the best strategy via technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and a situation in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime.
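
The final selection step, picking the best strategy from the Pareto set by TOPSIS, can be sketched in its generic textbook form; the decision matrix, weights, and criteria below are illustrative, not DEEKA's actual objectives.

```python
def topsis(matrix, weights, benefit):
    """Generic TOPSIS: rank alternatives (rows) by relative closeness to
    the ideal solution. benefit[j] is True when higher is better for
    criterion j. Returns one closeness score per alternative."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = sum((a - b) ** 2 for a, b in zip(row, ideal)) ** 0.5
        d_neg = sum((a - b) ** 2 for a, b in zip(row, worst)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

In DEEKA the rows would be the NSGA-II Pareto solutions and the columns objectives such as expected energy consumption and residual-energy variance; the management node picks the row with the highest score.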

  6. Evaluating water management strategies in watersheds by new hybrid Fuzzy Analytical Network Process (FANP) methods

    NASA Astrophysics Data System (ADS)

    RazaviToosi, S. L.; Samani, J. M. V.

    2016-03-01

    Watersheds are considered hydrological units. Their other important aspects, such as economic, social and environmental functions, play crucial roles in sustainable development. The objective of this work is to develop methodologies to prioritize watersheds by considering different development strategies in the environmental, social and economic sectors. This ranking could play a significant role in management by identifying the most critical watersheds, where employing water management strategies is expected to accomplish the greatest improvement. Due to the complex relations among different criteria, two new hybrid fuzzy ANP (Analytical Network Process) algorithms, fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and fuzzy max-min set methods, are used to provide a more flexible and accurate decision model. Five watersheds in Iran, named Oroomeyeh, Atrak, Sefidrood, Namak and Zayandehrood, are considered as alternatives. Based on long-term development goals, 38 water management strategies are defined as subcriteria in 10 clusters. The main advantage of the proposed methods is their ability to handle uncertainty, which is accomplished by using fuzzy numbers in all steps of the algorithms. To validate the proposed method, the final results were compared with those obtained from the ANP algorithm, and the Spearman rank correlation coefficient was applied to measure the similarity between the different ranking methods. Finally, a sensitivity analysis was conducted to investigate the influence of cluster weights on the final ranking.
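    The Spearman rank correlation used here to compare rankings can be computed with the classic tie-free formula; the two rankings below are hypothetical, not the study's results:

```python
# Minimal Spearman rank correlation between two rankings of the same
# alternatives (no ties assumed); illustrative only.

def spearman_rho(rank_a, rank_b):
    """rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)) for tie-free rankings."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical ranks of five watersheds under two methods.
fanp_topsis = [1, 2, 3, 4, 5]
plain_anp = [2, 1, 3, 4, 5]
rho = spearman_rho(fanp_topsis, plain_anp)  # near 1 means similar rankings
```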

  7. Realization of daily evapotranspiration in arid ecosystems based on remote sensing techniques

    NASA Astrophysics Data System (ADS)

    Elhag, Mohamed; Bahrawi, Jarbou A.

    2017-03-01

    Daily evapotranspiration is a major component of water resources management plans. In arid ecosystems, an efficient water budget is always hard to achieve due to insufficient irrigation water and high evapotranspiration rates. Therefore, monitoring daily evapotranspiration is a key practice for sustainable water resources management, especially in arid environments. Remote sensing techniques are a great help in estimating daily evapotranspiration on a regional scale. Existing open-source algorithms have proved able to estimate daily evapotranspiration comprehensively in arid environments; their only deficiency is the coarse scale of the remote sensing data used. Consequently, an adequate downscaling algorithm is a compulsory step in rationalizing an effective water resources management plan. Daily evapotranspiration was estimated fairly well using Advanced Along-Track Scanning Radiometer (AATSR) data in conjunction with MEdium Resolution Imaging Spectrometer (MERIS) data acquired in July 2013, with 1 km spatial resolution and 3-day temporal resolution, under a Surface Energy Balance System (SEBS) model. Results were validated against reference evapotranspiration ground truth values obtained with the standardized Penman-Monteith method, with an R2 of 0.879. The findings of the current research successfully monitor turbulent heat flux values estimated from AATSR and MERIS data with a temporal resolution of only 3 days, in conjunction with reliable meteorological data. The research verdicts are necessary inputs for well-informed decision-making processes regarding sustainable water resources management.
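    As a rough illustration of how an instantaneous satellite-derived latent heat flux becomes a daily evapotranspiration value in SEBS-style workflows, the sketch below upscales via the evaporative fraction. The constant-EF assumption and all numbers are illustrative, not taken from the paper:

```python
# Hedged sketch: upscale an instantaneous latent heat flux to daily ET via
# the evaporative fraction, a common step in SEBS-style processing chains.

LAMBDA = 2.45e6  # latent heat of vaporization, J/kg (approximate)

def evaporative_fraction(le, h):
    """EF = LE / (LE + H), assumed constant over the day."""
    return le / (le + h)

def daily_et_mm(le_inst, h_inst, rn_daily_mj):
    """Daily ET (mm/day) from instantaneous fluxes (W/m2) and daily
    available energy (MJ/m2/day), assuming a constant evaporative fraction."""
    ef = evaporative_fraction(le_inst, h_inst)
    le_daily_j = ef * rn_daily_mj * 1e6   # J/m2/day of latent heat
    return le_daily_j / LAMBDA            # kg/m2/day, i.e. mm/day

# Hypothetical midday fluxes and daily available energy.
et = daily_et_mm(le_inst=300.0, h_inst=100.0, rn_daily_mj=15.0)
```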

  8. NASA Remote Sensing Data in Earth Sciences: Processing, Archiving, Distribution, Applications at the GES DISC

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory G.

    2005-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is one of the major Distributed Active Archive Centers (DAACs) archiving and distributing remote sensing data from NASA's Earth Observing System. In addition to providing the data themselves, the GES DISC/DAAC has developed various value-adding processing services. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the users' algorithms. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the on-line data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools (to avoid unnecessary downloading of all the data). The existing data management infrastructure at the GES DISC supports a wide spectrum of options, from subsetting data spatially and/or by parameter to sophisticated on-line analysis tools, producing economies of scale and rapid time-to-deploy. Shifting the processing and data management burden from users to the GES DISC allows scientists to concentrate on science, while the GES DISC handles data management and processing at a lower cost. Several examples of successful partnerships with scientists in the area of data processing and mining are presented.

  9. Hybrid protection algorithms based on game theory in multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Wu, Jingjing; Hou, Weigang; Liu, Yejun; Zhang, Lincong; Li, Hongming

    2011-12-01

    As network size increases, the optical backbone is divided into multiple domains, each with its own network operator and management policy. At the same time, failures in an optical network may lead to huge data losses, since each wavelength carries a large amount of traffic. Therefore, survivability in multi-domain optical networks is very important. However, existing survivable algorithms achieve only a unilateral optimization of profit, for either users or network operators; they cannot find a double-win optimal solution that accounts for the economic factors of both. Thus, in this paper we develop a multi-domain network model involving multiple Quality of Service (QoS) parameters. After presenting a link evaluation approach based on fuzzy mathematics, we propose a game model to find the optimal solution that maximizes the user's utility, the network operator's utility, and the joint utility of user and network operator. Since the problem of finding a double-win optimal solution is NP-complete, we propose two new hybrid protection algorithms: the Intra-domain Sub-path Protection (ISP) algorithm and the Inter-domain End-to-end Protection (IEP) algorithm. In ISP and IEP, hybrid protection means that an intelligent algorithm based on Bacterial Colony Optimization (BCO) and a heuristic algorithm are used to solve survivability in intra-domain routing and inter-domain routing, respectively. Simulation results show that ISP and IEP have similar comprehensive utility. In addition, ISP has better resource utilization efficiency, lower blocking probability, and higher network operator's utility, while IEP has better user's utility.

  10. Skull base osteomyelitis: current microbiology and management.

    PubMed

    Spielmann, P M; Yu, R; Neeff, M

    2013-01-01

    Skull base osteomyelitis typically presents in an immunocompromised patient with severe otalgia and otorrhoea. Pseudomonas aeruginosa is the commonest pathogenic micro-organism, and reports of resistance to fluoroquinolones are now emerging, complicating management. We reviewed our experience of this condition, and of the local pathogenic organisms. A retrospective review from 2004 to 2011 was performed. Patients were identified by their admission diagnostic code, and computerised records examined. Twenty patients were identified. A facial palsy was present in 12 patients (60 per cent). Blood cultures were uniformly negative, and culture of ear canal granulations was non-diagnostic in 71 per cent of cases. Pseudomonas aeruginosa was isolated in only 10 (50 per cent) cases; one strain was resistant to ciprofloxacin but all were sensitive to ceftazidime. Two cases of fungal skull base osteomyelitis were identified. The mortality rate was 15 per cent. The patients' treatment algorithm is presented. Our treatment algorithm reflects the need for multidisciplinary input, early microbial culture of specimens, appropriate imaging, and prolonged and systemic antimicrobial treatment. Resolution of infection must be confirmed by close follow up and imaging.

  11. SLS Model Based Design: A Navigation Perspective

    NASA Technical Reports Server (NTRS)

    Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin

    2018-01-01

    The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.

  12. 3D Data Acquisition Based on OpenCV for Close-Range Photogrammetry Applications

    NASA Astrophysics Data System (ADS)

    Jurjević, L.; Gašparović, M.

    2017-05-01

    Development of the technology in the area of cameras, computers and algorithms for 3D reconstruction of objects from images has resulted in the increased popularity of photogrammetry. Algorithms for 3D model reconstruction are so advanced that almost anyone can make a 3D model of a photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for the purposes of close-range photogrammetry applications, based on open-source technologies. All steps of obtaining a 3D point cloud are covered in this paper. Special attention is given to camera calibration, for which a two-step calibration process is used. Both the presented algorithm and the accuracy of the point cloud are tested by calculating the spatial difference between the reference and produced point clouds. During algorithm testing, the robustness and swiftness of obtaining 3D data were noted, and the usage of this and similar algorithms certainly has a lot of potential in real-time applications. That is the reason why this research can find its application in architecture, spatial planning, protection of cultural heritage, forensics, mechanical engineering, traffic management, medicine and other fields.
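    The accuracy checks behind camera calibration rest on the pinhole projection model. The sketch below projects assumed 3D points with hypothetical intrinsics and computes the reprojection RMS error that calibration routines (e.g. OpenCV's calibrateCamera) minimize; it is a self-contained illustration, not the paper's pipeline:

```python
# Pinhole model sketch: project camera-frame 3D points to pixels (no lens
# distortion) and measure reprojection RMS error. All numbers hypothetical.

def project(point3d, fx, fy, cx, cy):
    """Project a camera-frame 3D point to pixel coordinates."""
    x, y, z = point3d
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_rms(points3d, pixels, fx, fy, cx, cy):
    """Root-mean-square distance between observed and projected pixels."""
    err2 = 0.0
    for p3, (u, v) in zip(points3d, pixels):
        pu, pv = project(p3, fx, fy, cx, cy)
        err2 += (pu - u) ** 2 + (pv - v) ** 2
    return (err2 / len(points3d)) ** 0.5

pts = [(0.1, 0.0, 1.0), (0.0, 0.2, 2.0), (-0.1, -0.1, 1.5)]
obs = [project(p, 800.0, 800.0, 320.0, 240.0) for p in pts]
rms = reprojection_rms(pts, obs, 800.0, 800.0, 320.0, 240.0)  # 0 for exact fit
```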

  13. A modified MOD16 algorithm to estimate evapotranspiration over alpine meadow on the Tibetan Plateau, China

    NASA Astrophysics Data System (ADS)

    Chang, Yaping; Qin, Dahe; Ding, Yongjian; Zhao, Qiudong; Zhang, Shiqiang

    2018-06-01

    The long-term change of evapotranspiration (ET) is crucial for managing water resources in areas with extreme climates, such as the Tibetan Plateau (TP). This study proposed a modified algorithm for estimating ET over alpine meadow on the TP in China, based on the global-scale MOD16 algorithm. Wind speed and vegetation height were integrated to estimate aerodynamic resistance, while the temperature and moisture constraints for stomatal conductance were revised based on the technique proposed by Fisher et al. (2008). Moreover, Fisher's method for soil evaporation was adopted to reduce the uncertainty in soil evaporation estimation. Five representative alpine meadow sites on the TP were selected to investigate the performance of the modified algorithm. Comparisons were made between ET observed using eddy covariance (EC) and ET estimated using both the original and modified algorithms. The results revealed that the modified algorithm performed better than the original MOD16 algorithm, with the coefficient of determination (R2) increasing from 0.26 to 0.68 and the root mean square error (RMSE) decreasing from 1.56 to 0.78 mm d-1. The modified algorithm performed slightly better, with a higher R2 (0.70) and lower RMSE (0.61 mm d-1), for after-precipitation days than for non-precipitation days at the Suli site. Conversely, better results were obtained for non-precipitation days than for after-precipitation days at the Arou, Tanggula, and Hulugou sites, indicating that the modified algorithm may be more suitable for estimating ET on non-precipitation days than on after-precipitation days, which had large observation errors. Comparisons between the modified algorithm and two mainstream methods suggested that the modified algorithm can produce high-accuracy ET over the alpine meadow sites on the TP.
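    The two goodness-of-fit measures reported, R2 and RMSE, can be written in a few lines; the observation/estimate pairs below are illustrative, not the study's data:

```python
# Minimal R2 and RMSE between observed and estimated series.

def rmse(obs, est):
    """Root mean square error."""
    n = len(obs)
    return (sum((o - e) ** 2 for o, e in zip(obs, est)) / n) ** 0.5

def r_squared(obs, est):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

obs = [1.0, 2.0, 3.0, 4.0]   # hypothetical daily ET, mm/d
est = [1.1, 1.9, 3.2, 3.8]
```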

  14. Integrative review on the non-invasive management of lower urinary tract symptoms in men following treatments for pelvic malignancies.

    PubMed

    Faithfull, S; Lemanska, A; Aslet, P; Bhatt, N; Coe, J; Drudge-Coates, L; Feneley, M; Glynn-Jones, R; Kirby, M; Langley, S; McNicholas, T; Newman, J; Smith, C C; Sahai, A; Trueman, E; Payne, H

    2015-10-01

    To develop a non-invasive management strategy for men with lower urinary tract symptoms (LUTS) after treatment for pelvic cancer that is suitable for use in a primary healthcare context. PubMed literature searches of LUTS management in this patient group were carried out, and a consensus on management strategies was obtained from a panel of authors from across the UK. Data from 41 articles were investigated and collated. Clinical experience was sought from authors where there was no clinical evidence. The findings discussed in this paper confirm that LUTS after cancer treatment can significantly impair men's quality of life. While many men recover from LUTS spontaneously over time, a significant proportion require long-term management. Despite the prevalence of LUTS, there is a lack of consensus on best management. This article offers a comprehensive treatment algorithm for managing patients with LUTS following pelvic cancer treatment. Based on published research literature and clinical experience, recommendations are proposed for the standardisation of the management strategies employed for men with LUTS after pelvic cancer treatment. In addition to implementing the algorithm, understanding the rationale for the type and timing of LUTS management strategies is crucial for clinicians and patients. © 2015 The Authors. International Journal of Clinical Practice Published by John Wiley & Sons Ltd.

  15. Identification of Patients with Family History of Pancreatic Cancer--Investigation of an NLP System Portability.

    PubMed

    Mehrabi, Saeed; Krishnan, Anand; Roch, Alexandra M; Schmidt, Heidi; Li, DingCheng; Kesterson, Joe; Beesley, Chris; Dexter, Paul; Schmidt, Max; Palakal, Mathew; Liu, Hongfang

    2015-01-01

    In this study we developed a rule-based natural language processing (NLP) system to identify patients with a family history of pancreatic cancer. The algorithm was developed in an Unstructured Information Management Architecture (UIMA) framework and consisted of section segmentation, relation discovery, and negation detection. The system was evaluated on data from two institutions. The family history identification precision was consistent across the institutions, shifting from 88.9% on the Indiana University (IU) dataset to 87.8% on the Mayo Clinic dataset. Customizing the algorithm on the Mayo Clinic data increased its precision to 88.1%. The family member relation discovery achieved precision, recall, and F-measure of 75.3%, 91.6%, and 82.6%, respectively. Negation detection resulted in a precision of 99.1%. The results show that rule-based NLP approaches for specific information extraction tasks are portable across institutions; however, customizing the algorithm on the new dataset improves its performance.
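    A toy sketch in the spirit of such a rule-based pipeline (far simpler than the UIMA system described, with illustrative patterns) combines a family-member cue, a finding pattern, and a negation check:

```python
import re

# Toy rule-based sketch: flag a sentence as positive family history of
# pancreatic cancer unless a negation cue is present. Patterns are
# illustrative placeholders, not the paper's rules.

FAMILY = re.compile(
    r"\b(mother|father|sister|brother|aunt|uncle|grandmother|grandfather)\b",
    re.I)
FINDING = re.compile(r"\bpancreatic\s+cancer\b", re.I)
NEGATION = re.compile(r"\b(no|denies|without|negative for)\b", re.I)

def family_history_positive(sentence):
    """True if a family member and the finding co-occur with no negation cue."""
    if not (FAMILY.search(sentence) and FINDING.search(sentence)):
        return False
    return NEGATION.search(sentence) is None
```

    A real system would, as the abstract notes, first segment the note into sections (so "Family History:" context is known) and resolve which relative the finding attaches to.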

  16. DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maiden, Wendy M.

    Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.

  17. Echocardiography-based hemodynamic management of left ventricular diastolic dysfunction: a feasibility and safety study.

    PubMed

    Shillcutt, Sasha K; Montzingo, Candice R; Agrawal, Ankit; Khaleel, Maseeha S; Therrien, Stacey L; Thomas, Walker R; Porter, Thomas R; Brakke, Tara R

    2014-11-01

    Patients with left ventricular diastolic dysfunction (LVDD) are at increased risk of postoperative adverse events. The primary aim of this study was to evaluate the safety and feasibility of using echocardiography-guided hemodynamic management (EGHEM) during surgery in subjects with LVDD, compared to conventional management. The feasibility of using echocardiography to direct a treatment algorithm was assessed, and clinical outcomes were compared between groups for safety. Subjects were screened for LVDD by preoperative transthoracic echocardiography (TTE) and randomized to the conventional or EGHEM group. Subjects in EGHEM received hemodynamic management based on left ventricular filling patterns on transesophageal echocardiography (TEE). The primary outcomes measured were the feasibility of obtaining TEE images and of following a TEE-based treatment algorithm. Safety outcomes compared the following clinical differences between groups: length of hospitalization and the incidence of atrial fibrillation, congestive heart failure (CHF), myocardial infarction, cerebrovascular accident, transient ischemic attack and renal failure, measured 30 days postoperatively. The population consisted of 28 surgical subjects (14 in the conventional group and 14 in the EGHEM group). Mean subject age was 73.4 ± 6.7 years (36% male) in the conventional group and 65.9 ± 14.4 years (36% male) in the EGHEM group. Procedures included orthopedic (conventional = 29%, EGHEM = 36%), general (conventional = 50%, EGHEM = 36%), vascular (conventional = 7%, EGHEM = 21%), and thoracic (conventional = 14%, EGHEM = 7%) cases. There was no statistically significant difference in adverse clinical events between the two groups. The EGHEM group had less CHF, less atrial fibrillation, and a shorter length of stay. Echocardiography-guided hemodynamic management of patients with LVDD during surgery is feasible and may be a safe alternative to conventional management. © 2014, Wiley Periodicals, Inc.

  18. Long-term spatial distributions and trends of the latent heat fluxes over the global cropland ecosystem using multiple satellite-based models

    PubMed Central

    Feng, Fei; Yao, Yunjun; Liu, Meng

    2017-01-01

    Estimating cropland latent heat flux (LE) from continental to global scales is vital to modeling crop production and managing water resources. Over the past several decades, numerous LE models have been developed, such as the moderate resolution imaging spectroradiometer LE (MOD16) algorithm, the revised remote sensing-based Penman–Monteith LE algorithm (RRS), the Priestley–Taylor LE algorithm of the Jet Propulsion Laboratory (PT-JPL) and the modified satellite-based Priestley–Taylor LE algorithm (MS-PT). However, these LE models have not been directly compared over the global cropland ecosystem. In this study, we evaluated the performances of these four LE models using 34 eddy covariance (EC) sites. The results showed that mean annual LE for cropland varied from 33.49 to 58.97 W/m2 among the four models. The interannual LE slightly increased during 1982–2009 across the global cropland ecosystem. All models had acceptable performances, with the coefficient of determination (R2) ranging from 0.4 to 0.7 and a root mean squared error (RMSE) of approximately 35 W/m2. MS-PT had good overall performance across the cropland ecosystem, with the highest R2, the lowest RMSE and a relatively low bias. The reduced performances of MOD16 and RRS, with R2 ranging from 0.4 to 0.6 and RMSEs from 30 to 39 W/m2, might be attributed to the empirical parameters and calibrated coefficients in their algorithm structures. PMID:28837704

  19. Cooperative Management of a Lithium-Ion Battery Energy Storage Network: A Distributed MPC Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Huazhen; Wu, Di; Yang, Tao

    2016-12-12

    This paper presents a study of cooperative power supply and storage for a network of lithium-ion battery energy storage systems (LiBESSs). We propose to develop a distributed model predictive control (MPC) approach for two reasons. First, because it can account for the practical constraints of a LiBESS, MPC enables constraint-aware operation. Second, distributed management can cope with a complex network that integrates a large number of LiBESSs over a complex communication topology. With this motivation, we build a fully distributed MPC algorithm from an optimization perspective, based on an extension of the alternating direction method of multipliers (ADMM). A simulation example is provided to demonstrate the effectiveness of the proposed algorithm.
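    The consensus pattern behind such an ADMM-based distributed scheme can be sketched on a toy problem: N controllers, each with a local quadratic cost, agree on a shared value (here the consensus solution is simply the mean of the local targets). The cost model and parameters are illustrative, not the paper's MPC formulation:

```python
# Consensus ADMM sketch: each agent i holds cost (x - a_i)^2 and all agents
# must agree on a common x. Standard scaled-form updates; toy data.

def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = [0.0] * n  # local estimates
    u = [0.0] * n  # scaled dual variables
    z = 0.0        # global (consensus) variable
    for _ in range(iters):
        # Local x-update: argmin (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(2 * ai + rho * (z - ui)) / (2 + rho) for ai, ui in zip(a, u)]
        # Global z-update: average of x_i + u_i
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # Dual update
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z

# Three hypothetical controllers with different local targets.
setpoint = consensus_admm([1.0, 2.0, 6.0])  # converges to the mean, 3.0
```

    In the networked setting the z-update becomes a neighbor-to-neighbor averaging step, which is what makes the scheme fully distributed.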

  20. Colon and rectal injuries.

    PubMed

    Cleary, Robert K; Pomerantz, Richard A; Lampman, Richard M

    2006-08-01

    This study was designed to develop treatment algorithms for colon, rectal, and anal injuries based on a review of the relevant literature. Information was obtained through a MEDLINE ( www.ncbi.nlm.nih.gov/entrez/query.fcgi ) search, and additional references were obtained by cross-referencing key articles cited in these papers. A total of 203 articles were considered relevant. The management of penetrating and blunt colon, rectal, and anal injuries has evolved during the past 150 years. Since the World War II mandate to divert penetrating colon injuries, primary repair or resection and anastomosis have found an increasing role in patients with nondestructive injuries. A critical review of the recent literature better defines the roles of primary repair and fecal diversion for these injuries and allows for better algorithms for their management.

  1. A dynamic replication management strategy in distributed GIS

    NASA Astrophysics Data System (ADS)

    Pan, Shaoming; Xiong, Lian; Xu, Zhengquan; Chong, Yanwen; Meng, Qingxiang

    2018-03-01

    A replication strategy is one of the effective solutions for meeting service response time requirements, preparing data in advance to avoid the delay of reading data from disks. This paper presents a brand-new method of creating copies that considers the selection of the replica set, the number of copies for each replica, and the placement strategy for all copies. First, the popularities of all data are computed, considering both the historical access records and the timeliness of those records. The replica set can then be selected based on recent popularities. An enhanced Q-value scheme is also proposed to assign the number of copies for each replica. Finally, a reasonable copy placement strategy is designed to meet the requirement of load balance. In addition, we present several experiments that compare the proposed method with other replication management strategies. The results show that the proposed model performs better than the other algorithms in all respects. Moreover, experiments with different parameters also demonstrate the effectiveness and adaptability of the proposed algorithm.
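    The first two ingredients described, a popularity score that discounts old access records and a popularity-proportional copy count, might be sketched as below. The exponential decay, scaling, and cap are assumptions for illustration, not the paper's exact formulas or Q-value scheme:

```python
import math

# Illustrative sketch: timeliness-weighted popularity plus a simple
# popularity-proportional copy budget. Constants are assumptions.

def popularity(access_times, now, half_life=24.0):
    """Sum of per-access weights, each halving every `half_life` hours."""
    lam = math.log(2) / half_life
    return sum(math.exp(-lam * (now - t)) for t in access_times)

def copy_count(pop, total_pop, total_copies=10, max_copies=4):
    """Share of the copy budget proportional to popularity, at least 1."""
    return max(1, min(max_copies, round(total_copies * pop / total_pop)))

now = 100.0
hot = popularity([99.0, 98.0, 97.5], now)   # recent accesses dominate
cold = popularity([10.0, 20.0], now)        # old accesses decay away
```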

  2. ASTAR Flight Test: Overview and Spacing Results

    NASA Technical Reports Server (NTRS)

    Roper, Roy D.; Koch, Michael R.

    2016-01-01

    The purpose of the NASA Langley Airborne Spacing for Terminal Arrival Routes (ASTAR) research aboard the Boeing ecoDemonstrator aircraft was to demonstrate the use of NASA's ASTAR algorithm with contemporary tools of the Federal Aviation Administration's Next Generation Air Transportation System (NextGen). EcoDemonstrator is a Boeing test program that utilizes advanced experimental equipment to accelerate aerospace science and environmentally friendly technologies. The ASTAR flight test provided a proof-of-concept demonstration that exercised an algorithm-based application in an actual aircraft. The test aircraft conducted Interval Management operations to provide time-based spacing off a target aircraft in actual, non-simulated wind conditions. Work was conducted as a joint effort between NASA and Boeing to integrate ASTAR in a Boeing-supplied B787 test aircraft, with a T-38 aircraft as the target. The demonstration was also used to identify operational risks to future flight trials for the NASA Air Traffic Management Technology Demonstration expected in 2017.

  3. Feasibility of nurse-led antidepressant medication management of depression in an HIV clinic in Tanzania.

    PubMed

    Adams, Julie L; Almond, Maria L G; Ringo, Edward J; Shangali, Wahida H; Sikkema, Kathleen J

    2012-01-01

    Sub-Saharan Africa has the highest HIV prevalence worldwide and depression is highly prevalent among those infected. The negative impact of depression on HIV outcomes highlights the need to identify and treat it in this population. A model for doing this in lower-resourced settings involves task-shifting depression treatment to primary care; however, HIV-infected individuals are often treated in a parallel HIV specialty setting. We adapted a model of task-shifting, measurement-based care (MBC), for an HIV clinic setting and tested its feasibility in Tanzania. MBC involves measuring depressive symptoms at meaningful intervals and adjusting antidepressant medication treatment based on the measure of illness. Twenty adults presenting for care at an outpatient HIV clinic in Tanzania were enrolled and followed by a nurse care manager who measured depressive symptoms at baseline and every 4 weeks for 12 weeks. An algorithm-based decision-support tool was utilized by the care manager to recommend individualized antidepressant medication doses to participants' HIV providers at each visit. Retention was high and fidelity of the care manager to the MBC protocol was exceptional. Follow through of antidepressant prescription dosing recommendations by the prescriber was low. Limited availability of antidepressants was also noted. Despite challenges, baseline depression scores decreased over the 12-week period. Overall, the model of algorithm-based nursing support of prescription decisions was feasible. Future studies should address implementation issues of medication supply and dosing. Further task-shifting to relatively more abundant and lower-skilled health workers, such as nurses' aides, warrants examination.

  4. Energy-Efficient BOP-Based Beacon Transmission Scheduling in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kim, Eui-Jik; Youm, Sungkwan; Choi, Hyo-Hyun

    Many applications in wireless sensor networks (WSNs) require energy efficiency and scalability. Although IEEE 802.15.4/Zigbee, widely considered a general technology for WSNs, enables low duty-cycling with time synchronization of all the nodes in a network, it still suffers from low scalability due to beacon frame collisions. Recently, various algorithms have been proposed to resolve this problem. However, their implementations are somewhat ambiguous, and the additional overhead seriously degrades energy and communication efficiency. This paper describes an Energy-efficient BOP-based Beacon transmission Scheduling (EBBS) algorithm. EBBS is a centralized approach in which a resource-sufficient node, called the Topology Management Center (TMC), allocates beacon-transmission time slots to the nodes and manages their active/sleep schedules. We also propose EBBS with Adaptive BOPL (EBBS-AB), which adaptively adjusts the beacon-transmission duration in every beacon interval. Simulation results show that the proposed algorithm can significantly improve the energy efficiency and throughput of the whole network. EBBS-AB is even more effective when the nodes are uniformly deployed on the sensor field rather than placed in random topologies.
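    The centralized slot assignment the TMC performs can be illustrated with a minimal scheduler: each coordinator gets a disjoint slot in the beacon-only period (BOP), and the adaptive variant shortens the BOP to the last occupied slot. Slot sizes and the assignment order here are assumptions, not the paper's protocol details:

```python
# Toy sketch of centralized beacon scheduling in the spirit of EBBS: the
# TMC maps each coordinator to a non-overlapping beacon slot so that no
# two beacon frames collide.

def schedule_beacons(node_ids, slot_duration_ms=2):
    """Map each node to a disjoint (start_ms, end_ms) slot in the BOP."""
    schedule = {}
    for i, node in enumerate(sorted(node_ids)):
        start = i * slot_duration_ms
        schedule[node] = (start, start + slot_duration_ms)
    return schedule

def bop_length_ms(schedule):
    """Adaptive BOP length: just long enough for the last slot (EBBS-AB idea)."""
    return max(end for _, end in schedule.values()) if schedule else 0

sched = schedule_beacons([3, 1, 2])
```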

  5. Outcomes using exhaled nitric oxide measurements as an adjunct to primary care asthma management.

    PubMed

    Hewitt, Richard S; Modrich, Catherine M; Cowan, Jan O; Herbison, G Peter; Taylor, D Robin

    2009-12-01

    Exhaled nitric oxide (FENO) measurements may help to highlight when inhaled corticosteroid (ICS) therapy should or should not be adjusted in asthma, which is often difficult to judge. Our aim was to evaluate a decision-support algorithm incorporating FENO measurements in a nurse-led asthma clinic. Asthma management was guided by an algorithm based on high (>45 ppb), intermediate (30-45 ppb), or low (<30 ppb) FENO levels and asthma control status. This provided for one of eight possible treatment options, including diagnosis review and ICS dose adjustment. Well-controlled asthma increased from 41% at visit 1 to 68% at visit 5 (p=0.001). The mean fluticasone dose decreased from 312 mcg/day at visit 2 to 211 mcg/day at visit 5 (p=0.022). There was a high rate of protocol deviations (25%), often related to concerns about reducing the ICS dose. The percentage fall in FENO associated with a change in asthma status from poor control to good control was 35%. An FENO-based algorithm provided for a reduction in ICS doses without compromising asthma control. However, the results may have been influenced by the education and support that patients received. Reluctance to reduce the ICS dose was an issue that may have influenced the overall results. Australian Clinical Trials Registry # 012605000354684.
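    The FENO-band decision logic described (high, intermediate and low bands crossed with control status) lends itself to a simple lookup. The action strings below are simplified placeholders for illustration, not the trial's actual eight-option protocol:

```python
# Toy threshold-based decision rule: FENO band x control status -> action.
# Bands follow the abstract (>45, 30-45, <30 ppb); actions are placeholders.

def ics_action(feno_ppb, well_controlled):
    """Return a suggested ICS action for a FENO level and control status."""
    if feno_ppb > 45:
        return "maintain ICS" if well_controlled else "increase ICS / check adherence"
    if feno_ppb >= 30:
        return "maintain ICS" if well_controlled else "review diagnosis and adherence"
    # Low FENO: eosinophilic airway inflammation unlikely to be driving symptoms.
    return "consider ICS dose reduction" if well_controlled else "review diagnosis"

action = ics_action(feno_ppb=25, well_controlled=True)
```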

  6. Imaging in children with unilateral ureteropelvic junction obstruction: time to reduce investigations?

    PubMed

    Abadir, Nadin; Schmidt, Maria; Laube, Guido F; Weitz, Marcus

    2017-09-01

    The objective of the study was the development of an abridged risk-stratified imaging algorithm for the management of children with unilateral ureteropelvic junction obstruction (UPJO). Data on timing, frequency and duration of diagnostic imaging in children with unilateral UPJO was extracted retrospectively. Based on these findings, an abridged imaging algorithm was developed without changing the intended management by the clinicians and the outcome of the individual patient. The potential reduction of imaging studies was analysed and stratified by risk and management groups. The reduction in imaging studies, seen for ultrasound (US) and functional imaging (FI), was 45% each. On average, this is equivalent to 3 US and 1 FI studies less for every patient within the study period. The change was more pronounced in the low-risk groups. Progression of UPJO never occurred after 2 years of age and all secondary surgeries were carried out until the age of 3. Although our findings need to be validated by further prospective research, the developed imaging algorithm represents a risk-stratified approach towards less imaging studies in children with unilateral UPJO, and a follow-up beyond 3 years of age should be considered only in selected cases at the discretion of the clinician. What is Known: • ultrasound and functional imaging represent an integral part of therapeutic decision-making in children with unilateral ureteropelvic junction obstruction • imaging studies cannot accurately assess which patients are in need of surgical intervention, therefore close, serial imaging is preferred What is New: • a new, risk-stratified imaging algorithm was developed for the first 3 years of life • applying this algorithm could lead to a considerable reduction of imaging studies, and also the associated risks and health-care costs.

  7. Comparison of genetic algorithm methods for fuel management optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-12-31

    The CIGARO system was developed for genetic algorithm fuel management optimization. Tests were performed to find the best probability for the fuel-location swap mutation operator and to compare the genetic algorithm to a truly random search method. The tests showed that the fuel swap probability should be between 0% and 10%, and that a 50% probability definitely hampered the optimization. The genetic algorithm performed significantly better than the random search method, which did not even satisfy the peak normalized power constraint.
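
    The fuel-location swap mutation described above can be illustrated with a minimal genetic-algorithm loop. This is a hedged sketch, not the CIGARO code: the toy fitness function, tournament selection, and all parameter values are assumptions chosen only to show how a swap-mutation probability enters the search.

```python
import random

def swap_mutation(loading, p_swap, rng):
    """With probability p_swap, swap two randomly chosen fuel locations."""
    child = list(loading)
    if rng.random() < p_swap:
        i, j = rng.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

def evolve(fitness, pop, p_swap=0.05, generations=200, seed=1):
    """Minimal steady-state GA: tournament-select, mutate, replace the worst."""
    rng = random.Random(seed)
    best = max(pop, key=fitness)
    for _ in range(generations):
        parent = max(rng.sample(pop, 2), key=fitness)   # tournament selection
        child = swap_mutation(parent, p_swap, rng)
        worst = min(range(len(pop)), key=lambda k: fitness(pop[k]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
        best = max(best, child, key=fitness)
    return best

# Toy objective: prefer loadings sorted ascending (a stand-in for a real
# core-physics objective such as power-peaking minimization).
fitness = lambda x: -sum(abs(a - b) for a, b in zip(x, sorted(x)))
init_rng = random.Random(0)
population = [init_rng.sample(range(8), 8) for _ in range(20)]
best = evolve(fitness, population, p_swap=0.05)
```

    Raising `p_swap` toward 0.5 in this sketch mostly scrambles good loadings, which is consistent with the abstract's observation that a 50% swap probability hampered the optimization.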

  8. Evidence-Based Evaluation And Management Of Patients With Pharyngitis In The Emergency Department.

    PubMed

    Hildreth, Amy F; Takhar, Sukhjit; Clark, Mark Andrew; Hatten, Benjamin

    2015-09-01

    Pharyngitis is a common presentation, but it can also be associated with life-threatening processes, including sepsis and airway compromise. Other conditions, such as thyroid disease and cardiac disease, may mimic pharyngitis. The emergency clinician must sort through the broad differential for this complaint using a systematic approach that protects against premature closure of the diagnosis. This issue reviews the various international guidelines for pharyngitis and notes controversies in diagnostic and treatment strategies, specifically for the management of suspected bacterial, viral, and fungal etiologies. A management algorithm is presented, with recommendations based on a review of the best available evidence, taking into account patient comfort and outcomes, the need to reduce bacterial resistance, and costs.

  9. Biometric identity management for standard mobile medical networks.

    PubMed

    Egner, Alexandru; Soceanu, Alexandru; Moldoveanu, Florica

    2012-01-01

    The explosion of healthcare costs over the last decade has prompted the ICT industry to respond with solutions for reducing costs while improving healthcare quality. The recently released ISO/IEEE 11073 family of standards is the first step towards interoperability of mobile medical devices used in patient environments. The standards do not, however, tackle security problems such as identity management or the secure exchange of medical data. This paper proposes an enhancement of the ISO/IEEE 11073-20601 protocol with an identity management system based on biometry. The paper describes a novel biometric-based authentication process, together with the biometric key generation algorithm. The proposed extension of the ISO/IEEE 11073-20601 protocol is also presented.

  10. 2005 AG20/20 Annual Review

    NASA Technical Reports Server (NTRS)

    Ross, Kenton W.; McKellip, Rodney D.

    2005-01-01

    Topics covered include: Implementation and Validation of Sensor-Based Site-Specific Crop Management; Enhanced Management of Agricultural Perennial Systems (EMAPS) Using GIS and Remote Sensing; Validation and Application of Geospatial Information for Early Identification of Stress in Wheat; Adapting and Validating Precision Technologies for Cotton Production in the Mid-Southern United States - 2004 Progress Report; Development of a System to Automatically Geo-Rectify Images; Economics of Precision Agriculture Technologies in Cotton Production-AG 2020 Prescription Farming Automation Algorithms; Field Testing a Sensor-Based Applicator for Nitrogen and Phosphorus Application; Early Detection of Citrus Diseases Using Machine Vision and DGPS; Remote Sensing of Citrus Tree Stress Levels and Factors; Spectral-based Nitrogen Sensing for Citrus; Characterization of Tree Canopies; In-field Sensing of Shallow Water Tables and Hydromorphic Soils with an Electromagnetic Induction Profiler; Maintaining the Competitiveness of Tree Fruit Production Through Precision Agriculture; Modeling and Visualizing Terrain and Remote Sensing Data for Research and Education in Precision Agriculture; Thematic Soil Mapping and Crop-Based Strategies for Site-Specific Management; and Crop-Based Strategies for Site-Specific Management.

  11. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the settings of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in the environmental situation. Experimental results demonstrated that the implemented algorithms can classify both the listening situation and the ambient noise type with high accuracy (92.8-96.4% and 90.9-99.4%, respectively), and that the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in the environmental situation. The average values of the signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores, measured with 10 normal-hearing volunteers, of the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
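
    The management layer described above, which selectively enables beamforming, noise reduction, and feedback cancellation according to the classifier outputs, might look roughly like the following sketch. The situation labels, gain values, and switching rules here are illustrative assumptions, not the parameters of the published algorithm.

```python
def manage_hs_device(listening_situation, noise_type):
    """Map classifier outputs to on/off states of the three basic algorithms
    and a hypothetical noise-reduction strength (labels are illustrative)."""
    settings = {"beamforming": False, "noise_reduction": False,
                "feedback_cancellation": True, "nr_strength_db": 0}
    if listening_situation == "speech_in_noise":
        settings["beamforming"] = True        # focus on the talker
        settings["noise_reduction"] = True
        # stronger suppression for stationary noise, gentler for babble
        settings["nr_strength_db"] = 12 if noise_type == "stationary" else 6
    elif listening_situation == "noise_only":
        settings["noise_reduction"] = True
        settings["nr_strength_db"] = 12
    # "quiet" or "speech_only": leave beamforming/NR off to preserve quality
    return settings
```

    A real device would additionally retune the WDRC parameters per situation; this sketch only shows the on/off switching logic the abstract describes.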

  12. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    NASA Astrophysics Data System (ADS)

    Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir

    2018-03-01

    The specific nature of high-rise investment projects, entailing long-term construction, high risks, etc., implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For the development of the improved cost-benefit analysis algorithm for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving the quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping, for better cost-benefit project analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in the implementation of high-rise projects.
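
    Two of the listed building blocks, the weighted average cost of capital (WACC) and sensitivity analysis of the discounted cash flows, can be sketched directly. The figures below are invented example inputs, not data from the article.

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital with the standard debt tax shield."""
    v = equity + debt
    return equity / v * cost_equity + debt / v * cost_debt * (1 - tax_rate)

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the time-0 outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def sensitivity(cash_flows, base_rate, bumps=(-0.02, 0.0, 0.02)):
    """NPV under small shifts of the discount rate (one critical variable)."""
    return {round(base_rate + b, 4): npv(base_rate + b, cash_flows)
            for b in bumps}

# Hypothetical high-rise project: 60/40 equity-debt financing,
# 100M outlay, 18M per year for 10 years.
rate = wacc(equity=60e6, debt=40e6, cost_equity=0.14, cost_debt=0.08,
            tax_rate=0.20)
flows = [-100e6] + [18e6] * 10
table = sensitivity(flows, rate)
```

    In this toy case the project is NPV-positive at the base WACC but turns negative when the rate rises by two percentage points, which is exactly the kind of fragility the article's sensitivity analysis of critical ratios is meant to expose.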

  13. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
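
    A minimal version of Kalman-filter-based prognostics, estimating a degradation level and growth rate and converting them into a remaining-useful-life (RUL) point estimate, is sketched below. The constant-rate degradation model and all numbers are assumptions for illustration; the article's components and filter tuning are not reproduced here, and a point RUL deliberately ignores the uncertainty the article argues should be represented as a distribution.

```python
def kalman_rul(measurements, dt, q, r, threshold):
    """Kalman filter over a constant-rate degradation model
    (state = [damage level, growth rate]); RUL is the time remaining until
    the estimated level crosses the failure threshold."""
    x0, x1 = measurements[0], 0.0            # state estimate
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0  # covariance
    for z in measurements[1:]:
        # predict: level grows by rate*dt; add process noise q
        x0 += dt * x1
        p00 = p00 + dt * (p10 + p01) + dt * dt * p11 + q
        p01 += dt * p11
        p10 += dt * p11
        p11 += q
        # update with a measurement of the level only (H = [1, 0])
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        y = z - x0
        x0 += k0 * y
        x1 += k1 * y
        p11 -= k1 * p01
        p10 -= k1 * p00
        p01 = (1 - k0) * p01
        p00 = (1 - k0) * p00
    rul = (threshold - x0) / x1 if x1 > 0 else float("inf")
    return x0, x1, rul

# Noise-free linear degradation (rate 0.1/cycle) keeps the sketch deterministic.
obs = [0.1 * t for t in range(100)]
level, rate, rul = kalman_rul(obs, dt=1.0, q=1e-6, r=0.04, threshold=20.0)
```
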

  14. Derivation and validation of the Personal Support Algorithm: an evidence-based framework to inform allocation of personal support services in home and community care.

    PubMed

    Sinn, Chi-Ling Joanna; Jones, Aaron; McMullan, Janet Legge; Ackerman, Nancy; Curtin-Telegdi, Nancy; Eckel, Leslie; Hirdes, John P

    2017-11-25

    Personal support services enable many individuals to stay in their homes, but there are no standard ways to classify need for functional support in home and community care settings. The goal of this project was to develop an evidence-based clinical tool to inform service planning while allowing for flexibility in care coordinator judgment in response to patient and family circumstances. The sample included 128,169 Ontario home care patients assessed in 2013 and 25,800 Ontario community support clients assessed between 2014 and 2016. Independent variables were drawn from the Resident Assessment Instrument-Home Care and interRAI Community Health Assessment that are standardised, comprehensive, and fully compatible clinical assessments. Clinical expertise and regression analyses identified candidate variables that were entered into decision tree models. The primary dependent variable was the weekly hours of personal support calculated based on the record of billed services. The Personal Support Algorithm classified need for personal support into six groups with a 32-fold difference in average billed hours of personal support services between the highest and lowest group. The algorithm explained 30.8% of the variability in billed personal support services. Care coordinators and managers reported that the guidelines based on the algorithm classification were consistent with their clinical judgment and current practice. The Personal Support Algorithm provides a structured yet flexible decision-support framework that may facilitate a more transparent and equitable approach to the allocation of personal support services.
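
    The decision-tree classification into six groups can be illustrated with a toy tree. The split variables, thresholds, and suggested hours below are hypothetical, not the published Personal Support Algorithm; only the six-group structure and the 32-fold spread between the lowest and highest groups follow the abstract.

```python
def personal_support_group(adl, iadl, cognition):
    """Toy decision tree in the spirit of the Personal Support Algorithm.
    Inputs are interRAI-style 0 (independent) to 6 (fully dependent) scores;
    the splits and group boundaries here are illustrative only."""
    if adl >= 5:
        return 6 if cognition >= 3 else 5
    if adl >= 3:
        return 5 if iadl >= 4 else 4
    if adl >= 1:
        return 3 if iadl >= 3 else 2
    return 2 if iadl >= 4 else 1

# Hypothetical guideline weekly hours per group; the 32-fold spread between
# the lowest and highest groups mirrors the ratio reported in the abstract.
GROUP_HOURS = {1: 0.5, 2: 1, 3: 3, 4: 6, 5: 10, 6: 16}
```

    In practice the published algorithm was derived from billed service hours across more than 150,000 assessments; care coordinators would use the group only as a starting point, adjusting for patient and family circumstances as the abstract emphasizes.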

  15. Simplified method for numerical modeling of fiber lasers.

    PubMed

    Shtyrina, O V; Yarutkina, I A; Fedoruk, M P

    2014-12-29

    A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.
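
    The iteration idea, repeatedly applying the round-trip map until the intra-cavity solution reproduces itself each period, can be shown with a scalar toy model. The saturable-gain map below stands in for integrating the authors' actual ODE system and is an assumption made for illustration.

```python
def round_trip(energy, g0, e_sat, loss):
    """Toy round-trip map for intra-cavity pulse energy: saturable gain
    followed by a lumped cavity loss (a stand-in for the full ODE system)."""
    gain = g0 / (1.0 + energy / e_sat)
    return energy * (1.0 + gain) * (1.0 - loss)

def periodic_solution(map_fn, x0, tol=1e-12, max_iter=10000):
    """Iterate the cavity map until the solution is unchanged over one
    round trip, i.e. until a periodic (steady-state) solution is found."""
    x = x0
    for _ in range(max_iter):
        x_next = map_fn(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no periodic solution found")

# For g0=0.5, e_sat=1, loss=0.2 the fixed point is E=1: gain saturates to
# 0.25 and (1 + 0.25)(1 - 0.2) = 1, so the energy reproduces itself.
e_star = periodic_solution(lambda e: round_trip(e, g0=0.5, e_sat=1.0,
                                                loss=0.2), x0=0.1)
```
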

  16. Ventricular repolarization variability for hypoglycemia detection.

    PubMed

    Ling, Steve; Nguyen, H T

    2011-01-01

    Hypoglycemia is the most acute and common complication of Type 1 diabetes and is a limiting factor in the glycemic management of diabetes. In this paper, two main contributions are presented: firstly, ventricular repolarization variabilities are introduced for hypoglycemia detection; secondly, a swarm-based support vector machine (SVM) algorithm with the repolarization variabilities as inputs is developed to detect hypoglycemia. Using the algorithm with several repolarization variabilities as inputs, the best hypoglycemia detection performance is found, with sensitivity and specificity of 82.14% and 60.19%, respectively.
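
    The swarm-based optimization step can be sketched with a minimal particle swarm optimizer tuning a one-dimensional detection threshold on synthetic "repolarization" data, scored by sensitivity plus specificity. Both the synthetic data and the use of a plain threshold in place of a full SVM are assumptions made to keep the sketch short.

```python
import random

def pso(objective, dim, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimizer (maximization)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = objective(pos[i])
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Synthetic one-feature data: hypoglycemic samples (label 1) tend to score
# higher on the feature than non-hypoglycemic samples (label 0).
data_rng = random.Random(1)
data = ([(data_rng.uniform(0.5, 1.0), 1) for _ in range(50)]
        + [(data_rng.uniform(0.0, 0.55), 0) for _ in range(50)])

def sens_plus_spec(params):
    thr = params[0]
    sens = sum(1 for x, y in data if y == 1 and x > thr) / 50
    spec = sum(1 for x, y in data if y == 0 and x <= thr) / 50
    return sens + spec

best_thr, best_val = pso(sens_plus_spec, dim=1)
```

    In the paper the swarm instead tunes SVM hyperparameters; the moving parts (particles, personal/global bests, inertia) are the same.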

  17. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, there has been a decline in natural resources, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed to obtain higher-quality information in order to produce reliable classified images. Thus, pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of the study is to implement pixel- and object-based classification techniques applied to imagery fused with different pansharpening algorithms, and to evaluate the generated thematic maps, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain), Teide National Park, was chosen, and WorldView-2 high-resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF-based, Wavelet 'à trous' and Weighted Wavelet 'à trous' through Fractal Dimension Maps) were chosen in order to improve the data quality with the goal of analyzing the vegetation classes. Next, different classification algorithms were applied at the pixel-based and object-based levels; moreover, an accuracy assessment of the resulting thematic maps was performed. The highest classification accuracy was obtained by applying the Support Vector Machine classifier at the object-based level to the Weighted Wavelet 'à trous' through Fractal Dimension Maps fused image. Finally, we highlight the difficulty of classification in the Teide ecosystem due to the heterogeneity and small size of the species. Thus, it is important to obtain accurate thematic maps for further studies in the management and conservation of natural resources.

  18. An ontology-based nurse call management system (oNCS) with probabilistic priority assessment

    PubMed Central

    2011-01-01

    Background The current, place-oriented nurse call systems are very static. A patient can only make calls with a button which is fixed to a wall of a room. Moreover, the system does not take into account various factors specific to a situation. In the future, there will be an evolution to a mobile button for each patient so that they can walk around freely and still make calls. The system would become person-oriented and the available context information should be taken into account to assign the correct nurse to a call. The aim of this research is (1) the design of a software platform that supports the transition to mobile and wireless nurse call buttons in hospitals and residential care and (2) the design of a sophisticated nurse call algorithm. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff members and patients into account. Additionally, the priority of a call probabilistically depends on the risk factors, assigned to a patient. Methods The ontology-based Nurse Call System (oNCS) was developed as an extension of a Context-Aware Service Platform. An ontology is used to manage the profile information. Rules implement the novel nurse call algorithm that takes all this information into account. Probabilistic reasoning algorithms are designed to determine the priority of a call based on the risk factors of the patient. Results The oNCS system is evaluated through a prototype implementation and simulations, based on a detailed dataset obtained from Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls amongst nurses and the assignment of priorities to calls are compared for the oNCS system and the current, place-oriented nurse call system. Additionally, the performance of the system is discussed. Conclusions The execution time of the nurse call algorithm is on average 50.333 ms. 
Moreover, the oNCS system significantly improves the assignment of nurses to calls. Calls generally have a nurse present sooner, and the workload distribution amongst the nurses improves. PMID:21294860
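
    The probabilistic priority assessment, in which a call's priority depends on the risk factors assigned to the patient, can be caricatured as follows. The risk-factor names, combination rule, and thresholds are invented for illustration and are not taken from the oNCS ontology.

```python
def call_priority(call_kind, risk_probabilities):
    """Toy probabilistic priority assessment: the chance a call is urgent
    grows with the patient's risk factors (values are P(factor raises
    urgency); all names and thresholds here are hypothetical)."""
    p_not_urgent = 1.0
    for p in risk_probabilities.values():
        p_not_urgent *= (1.0 - p)          # assume independent risk factors
    p_urgent = 1.0 - p_not_urgent
    if call_kind == "assistance":          # staff-initiated assistance call
        p_urgent = max(p_urgent, 0.9)
    if p_urgent >= 0.7:
        return "urgent"
    return "normal" if p_urgent >= 0.3 else "low"
```

    The assigned priority would then feed the nurse call algorithm together with staff profiles and locations when selecting which nurse to dispatch.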

  19. A Survey on Underwater Acoustic Sensor Network Routing Protocols.

    PubMed

    Li, Ning; Martínez, José-Fernán; Meneses Chaus, Juan Manuel; Eckert, Martina

    2016-03-22

    Underwater acoustic sensor networks (UASNs) have become more and more important in ocean exploration applications, such as ocean monitoring, pollution detection, ocean resource management, underwater device maintenance, etc. In underwater acoustic sensor networks, since the routing protocol guarantees reliable and effective data transmission from the source node to the destination node, routing protocol design is an attractive topic for researchers. Many routing algorithms have been proposed in recent years. To present the current state of development of UASN routing protocols, we review herein the UASN routing protocol designs reported in recent years. In this paper, all the routing protocols have been classified into different groups according to their characteristics and routing algorithms, such as the non-cross-layer design routing protocol, the traditional cross-layer design routing protocol, and the intelligent algorithm based routing protocol. This is also the first paper that introduces intelligent algorithm-based UASN routing protocols. In addition, in this paper, we investigate the development trends of UASN routing protocols, which can provide researchers with clear and direct insights for further research.

  20. Coverage maximization under resource constraints using a nonuniform proliferating random walk.

    PubMed

    Saha, Sudipta; Ganguly, Niloy

    2013-02-01

    Information management services on networks, such as search and dissemination, play a key role in any large-scale distributed system. One of the most desirable features of these services is the maximization of the coverage, i.e., the number of distinctly visited nodes under constraints of network resources as well as time. However, redundant visits of nodes by different message packets (modeled, e.g., as walkers) initiated by the underlying algorithms for these services cause wastage of network resources. In this work, using results from analytical studies done in the past on a K-random-walk-based algorithm, we identify that redundancy quickly increases with an increase in the density of the walkers. Based on this postulate, we design a very simple distributed algorithm which dynamically estimates the density of the walkers and thereby carefully proliferates walkers in sparse regions. We use extensive computer simulations to test our algorithm in various kinds of network topologies whereby we find it to be performing particularly well in networks that are highly clustered as well as sparse.
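
    The core idea, walkers that estimate local density from visit counts and proliferate in sparse regions under a fixed message budget, can be sketched on a toy graph. The proliferation rule and probability below are assumptions made for illustration, not the authors' exact algorithm.

```python
import random

def proliferating_walk(adjacency, start, budget, seed=0):
    """Random walkers that spawn an extra walker when they reach a
    rarely-visited (i.e. locally sparse) node, under a fixed message budget.
    Returns the set of distinctly covered nodes."""
    rng = random.Random(seed)
    walkers = [start]
    visits = {start: 1}       # per-node visit counts: a crude density estimate
    covered = {start}
    used = 0
    while walkers and used < budget:
        nxt = []
        for node in walkers:
            if used >= budget:
                break
            step = rng.choice(adjacency[node])
            used += 1         # each hop costs one message
            visits[step] = visits.get(step, 0) + 1
            covered.add(step)
            nxt.append(step)
            # proliferate in sparse regions: first-time visits may spawn
            # a new walker (the 0.3 rate is an arbitrary illustrative choice)
            if visits[step] == 1 and rng.random() < 0.3:
                nxt.append(step)
        walkers = nxt
    return covered

# A ring of 40 nodes: a sparse topology where proliferation should help.
ring = {i: [(i - 1) % 40, (i + 1) % 40] for i in range(40)}
covered = proliferating_walk(ring, start=0, budget=300)
```
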

  1. A Survey on Underwater Acoustic Sensor Network Routing Protocols

    PubMed Central

    Li, Ning; Martínez, José-Fernán; Meneses Chaus, Juan Manuel; Eckert, Martina

    2016-01-01

    Underwater acoustic sensor networks (UASNs) have become more and more important in ocean exploration applications, such as ocean monitoring, pollution detection, ocean resource management, underwater device maintenance, etc. In underwater acoustic sensor networks, since the routing protocol guarantees reliable and effective data transmission from the source node to the destination node, routing protocol design is an attractive topic for researchers. Many routing algorithms have been proposed in recent years. To present the current state of development of UASN routing protocols, we review herein the UASN routing protocol designs reported in recent years. In this paper, all the routing protocols have been classified into different groups according to their characteristics and routing algorithms, such as the non-cross-layer design routing protocol, the traditional cross-layer design routing protocol, and the intelligent algorithm based routing protocol. This is also the first paper that introduces intelligent algorithm-based UASN routing protocols. In addition, in this paper, we investigate the development trends of UASN routing protocols, which can provide researchers with clear and direct insights for further research. PMID:27011193

  2. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.
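
    The particle-filter prediction step, forecasting the target's next position so that only the relevant nodes need to be awakened, can be sketched in one dimension. The motion model, noise levels, and particle count are illustrative assumptions, not the paper's configuration.

```python
import math
import random

def particle_filter_predict(observations, n_particles=500, meas_noise=0.5,
                            seed=0):
    """Bootstrap particle filter for a 1-D target with roughly constant
    velocity; returns a one-step-ahead position prediction, which a sleep
    scheduler could use to pre-wake the nodes nearest that position."""
    rng = random.Random(seed)
    # particles: [position, velocity]; broad priors around the first fix
    parts = [[observations[0] + rng.gauss(0, 1), rng.gauss(0, 1)]
             for _ in range(n_particles)]
    for z in observations[1:]:
        for p in parts:                        # propagate with process noise
            p[1] += rng.gauss(0, 0.1)
            p[0] += p[1] + rng.gauss(0, 0.1)
        weights = [math.exp(-(p[0] - z) ** 2 / (2 * meas_noise ** 2))
                   for p in parts]             # measurement likelihood
        if sum(weights) == 0:
            weights = [1.0] * n_particles      # guard against degeneracy
        parts = [p[:] for p in rng.choices(parts, weights=weights,
                                           k=n_particles)]
    # one-step-ahead prediction: mean of position + velocity over particles
    return sum(p[0] + p[1] for p in parts) / n_particles

obs = [float(t) for t in range(10)]            # target moving at 1 unit/step
pred = particle_filter_predict(obs)
```
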

  3. Technical note: Efficient online source identification algorithm for integration within a contamination event management system

    NASA Astrophysics Data System (ADS)

    Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai

    2017-07-01

    Drinking water distribution networks are part of critical infrastructures and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade, research has focused on the development of new sensors able to detect malicious substances in the network, and on early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, only little information exists about the integration within a comprehensive real-time event detection and management system. In the following, the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP 7 of the European Commission.

  4. Using evidence-based medicine to protect healthcare workers from pandemic influenza: Is it possible?

    PubMed

    Gralton, Jan; McLaws, Mary-Louise

    2011-01-01

    To use evidence-based principles to develop infection control algorithms to ensure the protection of healthcare workers and the continuity of health service provision during a pandemic. Evidence-based algorithms were developed from published research as well as "needs and values" assessments. Research evidence was obtained from 97 studies reporting the protectiveness of antiviral prophylaxis, seasonal vaccination, and mask use. Needs and values assessments were undertaken by international experts in pandemic infection control and local healthcare workers. Opportunity and resource costs were not determined. The Australian government commissioned the development of an evidence-based algorithm for inclusion in the 2008 revision of the Australian Health and Management Plan for Pandemic Influenza. Two international infection control teams responsible for healthcare worker safety during the Severe Acute Respiratory Syndrome outbreak reviewed the evidence-based algorithms. The algorithms were then reviewed for needs and values by eight local clinicians who were considered key frontline clinicians during the contain and sustain phases. The international teams reviewed for practicability of implementation, whereas local clinicians reviewed for clinician compliance. Despite strong evidence for vaccination and antiviral prophylaxis providing significant protection, clinicians believed they required the additional combinations of both masks and face shields. Despite the equivocal evidence for the efficacy of surgical and N95 masks and the provision of algorithms appropriate for the level of risk according to clinical care during a pandemic, clinicians still demanded N95 masks plus face shields in combination with prophylaxis and novel vaccination. Conventional evidence-based principles could not be applied to formulate recommendations due to the lack of pandemic-specific efficacy data of protection tools and the inherent unpredictability of pandemics. 
As an alternative, evidence-based principles have been used to formulate recommendations while giving priority to the needs and values of healthcare workers over the research evidence.

  5. DEVELOPMENT AND TESTING OF FAULT-DIAGNOSIS ALGORITHMS FOR REACTOR PLANT SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grelle, Austin L.; Park, Young S.; Vilim, Richard B.

    Argonne National Laboratory is further developing fault diagnosis algorithms for use by the operator of a nuclear plant to aid in improved monitoring of overall plant condition and performance. The objective is better management of plant upsets through more timely, informed decisions on control actions, with the ultimate goal of improved plant safety, production, and cost management. Integration of these algorithms with visual aids for operators is taking place through a collaboration under the concept of an operator advisory system. This is a software entity whose purpose is to manage and distill the enormous amount of information an operator must process to understand the plant state, particularly in off-normal situations, and how the state trajectory will unfold in time. The fault diagnosis algorithms were exhaustively tested using computer simulations of twenty different faults introduced into the chemical and volume control system (CVCS) of a pressurized water reactor (PWR). The algorithms are unique in that each new application to a facility requires providing only the piping and instrumentation diagram (PID) and no other plant-specific information; a subject-matter expert is not needed to install and maintain each instance of an application. The testing approach followed accepted procedures for verifying and validating software. It was shown that the code satisfies its functional requirement, which is to accept sensor information, identify process variable trends based on this sensor information, and then return an accurate diagnosis based on chains of rules related to these trends. The validation and verification exercise made use of GPASS, a one-dimensional systems code, for simulating CVCS operation. Plant components were failed and the code generated the resulting plant response. Parametric studies with respect to the severity of the fault, the richness of the plant sensor set, and the accuracy of the sensors were performed as part of the validation exercise. The background and overview of the software will be presented, followed by the verification and validation effort using the GPASS code for simulation of plant transients, including a sensitivity study on important parameters.
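
    The stated functional requirement, accepting sensor information, identifying process-variable trends, and returning a diagnosis via chains of rules over those trends, can be caricatured as follows. The sensor names and rule patterns are hypothetical illustrations of the approach, not Argonne's CVCS rule base.

```python
def trend(samples, eps=0.01):
    """Classify a sensor's recent trend as 'rising', 'falling', or 'steady'
    from the average slope over the sample window."""
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    if slope > eps:
        return "rising"
    if slope < -eps:
        return "falling"
    return "steady"

# Hypothetical rule chains mapping trend patterns to a diagnosis.
RULES = [
    ({"tank_level": "falling", "charging_flow": "steady"},
     "letdown line leak"),
    ({"tank_level": "falling", "charging_flow": "falling"},
     "charging pump degradation"),
    ({"tank_level": "steady", "charging_flow": "steady"},
     "no fault detected"),
]

def diagnose(sensor_histories):
    """Turn raw sensor histories into trends, then match the rule chains."""
    trends = {name: trend(h) for name, h in sensor_histories.items()}
    for pattern, diagnosis in RULES:
        if all(trends.get(k) == v for k, v in pattern.items()):
            return diagnosis
    return "unclassified"
```

    The appeal of this structure, as the abstract notes, is that the rule chains can be generated from the piping and instrumentation diagram alone, without plant-specific tuning by a subject-matter expert.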

  6. Improving healthcare services using web based platform for management of medical case studies.

    PubMed

    Ogescu, Cristina; Plaisanu, Claudiu; Udrescu, Florian; Dumitru, Silviu

    2008-01-01

    The paper presents a web-based platform for the management of medical cases, a support for healthcare specialists in taking the best clinical decision. Research has mostly been oriented towards multimedia data management and classification algorithms for querying, retrieving and processing different medical data types (text and images). The medical case studies can be accessed by healthcare specialists, and by students as anonymous case studies, providing trust and confidentiality in the Internet virtual environment. The MIDAS platform develops an intelligent framework to manage sets of medical data (text, static or dynamic images) in order to optimize the diagnosis and decision process, which will reduce medical errors and increase the quality of the medical act. MIDAS is an integrated project working on medical information retrieval from heterogeneous, distributed medical multimedia databases.

  7. Advanced order management in ERM systems: the tic-tac-toe algorithm

    NASA Astrophysics Data System (ADS)

    Badell, Mariana; Fernandez, Elena; Puigjaner, Luis

    2000-10-01

    The concept behind improved enterprise resource planning (ERP) systems is the overall integration of the whole enterprise's functionality into the management systems through financial links. Converting current software into real management decision tools requires crucial changes in the current approach to ERP systems. This evolution must be able to incorporate technological achievements both properly and in time. The exploitation phase of plants needs an open web-based environment for collaborative business engineering with on-line schedulers. Today's short lifecycles of products and processes require sharp and finely tuned management actions that must be guided by scheduling tools. Additionally, such actions must be able to keep track of money movements related to supply chain events. Thus, the necessary outputs require financial-production integration at the scheduling level, as proposed in the new approach of enterprise resource management (ERM) systems. Within this framework, the economic analysis of the due-date policy and its optimization become essential to dynamically manage realistic and optimal delivery dates with a price-time trade-off during marketing activities. In this work we propose a scheduling tool with a web-based interface conducted by autonomous agents, in which precise economic information relative to plant and business actions and their effects is provided. It aims to attain a better arrangement of marketing and production events in order to face the bid/bargain process during e-commerce. Additionally, management systems require real-time execution and an efficient transaction-oriented approach capable of dynamically adopting realistic and optimal actions to support marketing management. To this end, the TicTacToe algorithm provides sequence optimization with acceptable tolerances in realistic time.

  8. Evaluation of the Effect of Diagnostic Molecular Testing on the Surgical Decision-Making Process for Patients With Thyroid Nodules.

    PubMed

    Noureldine, Salem I; Najafian, Alireza; Aragon Han, Patricia; Olson, Matthew T; Genther, Dane J; Schneider, Eric B; Prescott, Jason D; Agrawal, Nishant; Mathur, Aarti; Zeiger, Martha A; Tufano, Ralph P

    2016-07-01

    Diagnostic molecular testing is used in the workup of thyroid nodules. While these tests appear to be promising in more definitively assigning a risk of malignancy, their effect on surgical decision making has yet to be demonstrated. To investigate the effect of diagnostic molecular profiling of thyroid nodules on the surgical decision-making process. A surgical management algorithm was developed and published after peer review that incorporated individual Bethesda System for Reporting Thyroid Cytopathology classifications with clinical, laboratory, and radiological results. This algorithm was created to formalize the decision-making process selected herein in managing patients with thyroid nodules. Between April 1, 2014, and March 31, 2015, a prospective study of patients who had undergone diagnostic molecular testing of a thyroid nodule before being seen for surgical consultation was performed. The recommended management undertaken by the surgeon was then prospectively compared with the corresponding one in the algorithm. Patients with thyroid nodules who did not undergo molecular testing and were seen for surgical consultation during the same period served as a control group. All pertinent treatment options were presented to each patient, and any deviation from the algorithm was recorded prospectively. To evaluate the appropriateness of any change (deviation) in management, the surgical histopathology diagnosis was correlated with the surgery performed. The study cohort comprised 140 patients who underwent molecular testing. Their mean (SD) age was 50.3 (14.6) years, and 75.0% (105 of 140) were female. Over a 1-year period, 20.3% (140 of 688) had undergone diagnostic molecular testing before surgical consultation, and 79.7% (548 of 688) had not undergone molecular testing. The surgical management deviated from the treatment algorithm in 12.9% (18 of 140) with molecular testing and in 10.2% (56 of 548) without molecular testing (P = .37). 
In the group with molecular testing, the surgical management plan of only 7.9% (11 of 140) was altered as a result of the molecular test. All but 1 of those patients were found to be overtreated relative to the surgical histopathology analysis. Molecular testing did not significantly affect the surgical decision-making process in this study. Among patients whose treatment was altered based on these markers, there was evidence of overtreatment.

  9. An Algorithm for Neuropathic Pain Management in Older People.

    PubMed

    Pickering, Gisèle; Marcoux, Margaux; Chapiro, Sylvie; David, Laurence; Rat, Patrice; Michel, Micheline; Bertrand, Isabelle; Voute, Marion; Wary, Bernard

    2016-08-01

    Neuropathic pain frequently affects older people, who generally also have several comorbidities. Elderly patients are often poly-medicated, which increases the risk of drug-drug interactions. These patients, especially those with cognitive problems, may also have restricted communication skills, making pain evaluation difficult and pain treatment challenging. Clinicians and other healthcare providers need a decisional algorithm to optimize the recognition and management of neuropathic pain. We present a decisional algorithm developed by a multidisciplinary group of experts, which focuses on pain assessment and therapeutic options for the management of neuropathic pain, particularly in the elderly. The algorithm involves four main steps: (1) detection, (2) evaluation, (3) treatment, and (4) re-evaluation. The detection of neuropathic pain is an essential step in ensuring successful management. The extent of the impact of the neuropathic pain is then assessed, generally with self-report scales, except in patients with communication difficulties who can be assessed using behavioral scales. The management of neuropathic pain frequently requires combination treatments, and recommended treatments should be prescribed with caution in these elderly patients, taking into consideration their comorbidities and potential drug-drug interactions and adverse events. This algorithm can be used in the management of neuropathic pain in the elderly to ensure timely and adequate treatment by a multidisciplinary team.

  10. Heart failure analysis dashboard for patient's remote monitoring combining multiple artificial intelligence technologies.

    PubMed

    Guidi, G; Pettenati, M C; Miniati, R; Iadanza, E

    2012-01-01

In this paper we describe a Heart Failure analysis Dashboard that, combined with a handy device for the automatic acquisition of a set of the patient's clinical parameters, supports telemonitoring functions. The Dashboard's intelligent core is a Computer Decision Support System designed to assist the clinical decisions of non-specialist caring personnel, and it is based on three functional parts: Diagnosis, Prognosis, and Follow-up management. Four Artificial Intelligence-based techniques are compared for providing the diagnosis function: a Neural Network, a Support Vector Machine, a Classification Tree and a Fuzzy Expert System whose rules are produced by a Genetic Algorithm. State-of-the-art algorithms are used to support a score-based prognosis function. The patient's Follow-up is used to refine the diagnosis.
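
A fuzzy expert system of the kind used for the diagnosis function can be illustrated with a minimal Python sketch. Everything below (the input variables, membership breakpoints, and the single rule) is hypothetical and for illustration only; the cited system derives its rule base with a genetic algorithm:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def diagnose(ef, bnp):
    """Toy two-input fuzzy rule for heart-failure likelihood.
    Inputs (hypothetical): ejection fraction [%] and BNP [pg/mL].
    Rule: IF EF is low AND BNP is high THEN heart failure likely."""
    low_ef = tri(ef, 0, 25, 45)
    high_bnp = tri(bnp, 100, 600, 2000)
    # Mamdani-style AND = min; the rule strength is the score.
    return min(low_ef, high_bnp)

score = diagnose(ef=25, bnp=600)   # both memberships peak -> score 1.0
```

A real rule base would combine many such rules (typically taking the max over rule strengths per output class) and defuzzify the result into a diagnosis.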

  11. Big data mining analysis method based on cloud computing

    NASA Astrophysics Data System (ADS)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

In the era of information explosion, the extreme scale and the discrete, unstructured or semi-structured nature of big data have moved far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical approach to massive data mining, effectively addressing the inability of traditional data mining methods to scale to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs an association-rule mining algorithm based on the MapReduce parallel processing architecture, and verifies it experimentally. The parallel association-rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
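
The MapReduce structure of parallel association-rule mining can be sketched in pure Python. This toy simulation of the map and reduce phases computes itemset support counts (the counting core of Apriori-style rule mining); it is illustrative only, not the paper's implementation, and in a real deployment each phase would run distributed across cluster nodes:

```python
from collections import defaultdict
from itertools import combinations

def map_phase(transactions, k):
    """Map: emit (itemset, 1) for every k-itemset in each transaction."""
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            yield itemset, 1

def reduce_phase(pairs):
    """Reduce: sum counts per itemset (dict grouping stands in for
    the shuffle step of a real MapReduce framework)."""
    counts = defaultdict(int)
    for itemset, n in pairs:
        counts[itemset] += n
    return counts

transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "b"}]
support = reduce_phase(map_phase(transactions, 2))
# support[("a", "c")] == 2, support[("a", "b")] == 2
```

Frequent itemsets are those whose support exceeds a chosen threshold; rule generation then follows from the counted supports.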

  12. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.

  13. Modeling in the State Flow Environment to Support Launch Vehicle Verification Testing for Mission and Fault Management Algorithms in the NASA Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Berg, Peter; England, Dwight; Johnson, Stephen B.

    2016-01-01

Analysis methods and testing processes are essential activities in the engineering development and verification of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS). Central to mission success is reliable verification of the Mission and Fault Management (M&FM) algorithms for the SLS launch vehicle (LV) flight software. This is particularly difficult because M&FM algorithms integrate and operate LV subsystems, which consist of diverse forms of hardware and software themselves, with equally diverse integration from the engineering disciplines of LV subsystems. M&FM operation of SLS requires a changing mix of LV automation. During pre-launch the LV is primarily operated by the Kennedy Space Center (KSC) Ground Systems Development and Operations (GSDO) organization with some LV automation of time-critical functions, and much more autonomous LV operations during ascent that have crucial interactions with the Orion crew capsule, its astronauts, and with mission controllers at the Johnson Space Center. M&FM algorithms must perform all nominal mission commanding via the flight computer to control LV states from pre-launch through disposal and also address failure conditions by initiating autonomous or commanded aborts (crew capsule escape from the failing LV), redundancy management of failing subsystems and components, and safing actions to reduce or prevent threats to ground systems and crew. To address the criticality of the verification testing of these algorithms, the NASA M&FM team has utilized the State Flow environment (SFE) with its existing Vehicle Management End-to-End Testbed (VMET) platform which also hosts vendor-supplied physics-based LV subsystem models. 
The human-derived M&FM algorithms are designed and vetted in Integrated Development Teams composed of design and development disciplines such as Systems Engineering, Flight Software (FSW), Safety and Mission Assurance (S&MA) and major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GN&C), Thrust Vector Control (TVC), liquid engines, and the astronaut crew office. Since the algorithms are realized using model-based engineering (MBE) methods from a hybrid of the Unified Modeling Language (UML) and Systems Modeling Language (SysML), SFE methods are a natural fit to provide an in-depth analysis of the interactive behavior of these algorithms with the SLS LV subsystem models. For this, the M&FM algorithms and the SLS LV subsystem models are modeled using constructs provided by Matlab, which also enables modeling of the accompanying interfaces, providing greater flexibility for integrated testing and analysis and helping forecast expected behavior in forward VMET integrated testing activities. In VMET, the M&FM algorithms are prototyped and implemented using the same C++ programming language and similar state machine architectural concepts used by the FSW group. Due to the interactive complexity of the algorithms, VMET testing thus far has verified all the individual M&FM subsystem algorithms with select subsystem vendor models but is steadily progressing to assessing the interactive behavior of these algorithms with LV subsystems, as represented by subsystem models. The novel SFE application has proven to be useful for quick-look analysis of early integrated system behavior and assessment of the M&FM algorithms with the modeled LV subsystems. 
This early MBE analysis generates vital insight into the integrated system behaviors, algorithm sensitivities, and design issues, and has aided in the debugging of the M&FM algorithms well before full testing can begin in more expensive, higher fidelity but more arduous environments such as VMET, FSW testing, and the Systems Integration Lab (SIL). SFE has exhibited both expected and unexpected behaviors in nominal and off-nominal test cases prior to full VMET testing. In many findings, these behavioral characteristics were used to correct the M&FM algorithms, enable better test coverage, and develop more effective test cases for each of the LV subsystems. This has improved the fidelity of testing and planning for the next generation of M&FM algorithms as the SLS program evolves from non-crewed to crewed flight, impacting subsystem configurations and the M&FM algorithms that control them. SFE analysis has improved the robustness and reliability of the M&FM algorithms by revealing implementation errors and documentation inconsistencies. It is also improving planning efficiency for future VMET testing of the M&FM algorithms hosted in the LV flight computers, further reducing risk for the SLS launch infrastructure, the SLS LV, and most importantly the crew.

  14. How to Compute a Slot Marker - Calculation of Controller Managed Spacing Tools for Efficient Descents with Precision Scheduling

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas

    2012-01-01

This paper describes the underlying principles and algorithms for computing the primary controller managed spacing (CMS) tools developed at NASA for precisely spacing aircraft along efficient descent paths. The trajectory-based CMS tools include slot markers, delay indications and speed advisories. These tools are one of three core NASA technologies integrated in NASA's ATM Technology Demonstration-1 (ATD-1), which will operationally demonstrate the feasibility of fuel-efficient, high throughput arrival operations using Automatic Dependent Surveillance Broadcast (ADS-B) and ground-based and airborne NASA technologies for precision scheduling and spacing.

  15. Development of model reference adaptive control theory for electric power plant control applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mabius, L.E.

    1982-09-15

The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm (i.e., model-following control law) and a Model Reference Adaptive Control (MRAC) algorithm, and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady-state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, which has been developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeroes. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.
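
The adaptive idea behind MRAC can be illustrated with a drastically simplified single-input sketch using the classic MIT rule; the MIMO Command Generator Tracker design in the report is far more elaborate, and the plant, gains, and reference here are all hypothetical:

```python
def adapt_gain(plant_gain, gamma=0.1, steps=200, r=1.0):
    """MIT-rule adaptation of a feedforward gain theta so that the
    plant output y = k * theta * r tracks the reference model output
    ym = r. theta is driven down the gradient of the squared
    model-following error e = y - ym."""
    theta = 0.0
    for _ in range(steps):
        y = plant_gain * theta * r       # static plant output
        e = y - r                        # model-following error
        theta -= gamma * e * r           # gradient step (sensitivity ~ r)
    return theta

theta = adapt_gain(plant_gain=2.0)
# theta converges to 1/k = 0.5, giving y = 2 * 0.5 * 1 = 1 = ym
```

For small adaptation gains the iteration is a contraction toward theta = 1/k, i.e. perfect model following; dynamic plants require the full MRAC machinery with a stability proof.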

  16. ProvenCare-Psoriasis: A disease management model to optimize care.

    PubMed

    Gionfriddo, Michael R; Pulk, Rebecca A; Sahni, Dev R; Vijayanagar, Sonal G; Chronowski, Joseph J; Jones, Laney K; Evans, Michael A; Feldman, Steven R; Pride, Howard

    2018-03-15

    There are a variety of evidence-based treatments available for psoriasis. The transition of this evidence into practice is challenging. In this article, we describe the design of our disease management approach for Psoriasis (ProvenCare®) and present preliminary evidence of the effect of its implementation. In designing our approach, we identified three barriers to optimal care: 1) lack of a standardized and discrete disease activity measure within the electronic health record, 2) lack of a system-wide, standardized approach to care, and 3) non-uniform financial access to appropriate non-pharmacologic treatments. We implemented several solutions, which collectively form our approach. We standardized the documentation of clinical data such as body surface area (BSA), created a disease management algorithm for psoriasis, and aligned incentives to facilitate the implementation of the algorithm. This approach provides more coordinated, cost effective care for psoriasis, while being acceptable to key stakeholders. Future work will examine the effect of the implementation of our approach on important clinical and patient outcomes.

  17. An algorithm recommendation for the management of knee osteoarthritis in Europe and internationally: a report from a task force of the European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis (ESCEO).

    PubMed

    Bruyère, Olivier; Cooper, Cyrus; Pelletier, Jean-Pierre; Branco, Jaime; Luisa Brandi, Maria; Guillemin, Francis; Hochberg, Marc C; Kanis, John A; Kvien, Tore K; Martel-Pelletier, Johanne; Rizzoli, René; Silverman, Stuart; Reginster, Jean-Yves

    2014-12-01

    Existing practice guidelines for osteoarthritis (OA) analyze the evidence behind each proposed treatment but do not prioritize the interventions in a given sequence. The objective was to develop a treatment algorithm recommendation that is easier to interpret for the prescribing physician based on the available evidence and that is applicable in Europe and internationally. The knee was used as the model OA joint. ESCEO assembled a task force of 13 international experts (rheumatologists, clinical epidemiologists, and clinical scientists). Existing guidelines were reviewed; all interventions listed and recent evidence were retrieved using established databases. A first schematic flow chart with treatment prioritization was discussed in a 1-day meeting and shaped to the treatment algorithm. Fine-tuning occurred by electronic communication and three consultation rounds until consensus. Basic principles consist of the need for a combined pharmacological and non-pharmacological treatment with a core set of initial measures, including information access/education, weight loss if overweight, and an appropriate exercise program. Four multimodal steps are then established. Step 1 consists of background therapy, either non-pharmacological (referral to a physical therapist for re-alignment treatment if needed and sequential introduction of further physical interventions initially and at any time thereafter) or pharmacological. The latter consists of chronic Symptomatic Slow-Acting Drugs for OA (e.g., prescription glucosamine sulfate and/or chondroitin sulfate) with paracetamol at-need; topical NSAIDs are added in the still symptomatic patient. Step 2 consists of the advanced pharmacological management in the persistent symptomatic patient and is centered on the use of oral COX-2 selective or non-selective NSAIDs, chosen based on concomitant risk factors, with intra-articular corticosteroids or hyaluronate for further symptom relief if insufficient. 
In Step 3, the last pharmacological attempts before surgery are represented by weak opioids and other central analgesics. Finally, Step 4 consists of end-stage disease management and surgery, with classical opioids as a difficult-to-manage alternative when surgery is contraindicated. The proposed treatment algorithm may represent a new framework for the development of future guidelines for the management of OA, more easily accessible to physicians. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms

    PubMed Central

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2017-01-01

With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies. PMID:29399237
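
One plausible scheduling heuristic of the general kind compared in such studies can be sketched in Python. The cost model and VM parameters below are hypothetical, and this is not necessarily one of the four algorithms the paper proposes:

```python
def schedule(tasks, vm_types):
    """Greedy heuristic: run each task on the cheapest VM type that
    still meets the task's deadline.
    tasks: list of (work_units, deadline_s);
    vm_types: list of (name, speed_units_per_s, price_per_s)."""
    plan = []
    for work, deadline in tasks:
        feasible = [(price * work / speed, name)
                    for name, speed, price in vm_types
                    if work / speed <= deadline]
        if not feasible:
            plan.append((work, None))        # no VM meets the deadline
        else:
            plan.append((work, min(feasible)[1]))
    return plan

vms = [("small", 1.0, 0.02), ("large", 4.0, 0.10)]
plan = schedule([(10, 60), (10, 5)], vms)
# (10, 60): "small" finishes in 10 s and is cheaper -> chosen
# (10, 5):  only "large" (2.5 s) meets the deadline -> chosen
```

A performance-oriented variant would instead minimize finish time, and a price/performance variant would rank candidates by cost per unit of speed; comparing such targets experimentally is exactly what the paper does.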

  19. Intelligent error correction method applied on an active pixel sensor based star tracker

    NASA Astrophysics Data System (ADS)

    Schmidt, Uwe

    2005-10-01

Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years star trackers have become more and more important in the field of attitude and orbit control system (AOCS) sensors. High-performance star trackers have to date been based on charge-coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 1980s with the launch of the world's first fully autonomous star tracker system, ASTRO1, to the Russian MIR space station. Jena-Optronik recently developed an active pixel sensor based autonomous star tracker, "ASTRO APS", as successor of the CCD-based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are true xy-address random access, multiple-windowing readout and on-chip signal processing including analogue-to-digital conversion. These features can be used for robust star tracking at high slew rates and under adverse conditions such as stray light and solar-flare-induced single event upsets. A special algorithm has been developed to manage the typical APS detector error contributors such as fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomously and adapts automatically to, e.g., increasing DSNU and newly appearing white spots, without ground maintenance or re-calibration. In contrast to conventional correction methods, the described algorithm does not need calibration data memory such as full-image-sized calibration data sets. 
The application of the presented algorithm for managing the typical APS detector error contributors is a key element in the design of star trackers for long-term satellite applications such as geostationary telecom platforms.
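
The general idea of autonomous, calibration-free correction of DSNU and white spots can be sketched as follows. This is an illustrative reconstruction, not Jena-Optronik's algorithm; the update rate and spot threshold are hypothetical:

```python
def update_dark_map(dark, frame, alpha=0.05):
    """Running per-pixel dark-signal estimate (exponential average),
    so the correction tracks slowly increasing DSNU without any
    stored full-image calibration data."""
    return [(1 - alpha) * d + alpha * f for d, f in zip(dark, frame)]

def correct(frame, dark, spot_threshold=50.0):
    """Subtract the dark estimate per pixel; pixels far above it are
    flagged as white spots and excluded (None) rather than corrected,
    so newly appearing spots are handled automatically."""
    out = []
    for f, d in zip(frame, dark):
        r = f - d
        out.append(None if r > spot_threshold else r)
    return out

dark = [10.0, 10.0, 10.0]
frame = [12.0, 11.0, 200.0]          # third pixel: white spot
print(correct(frame, dark))          # [2.0, 1.0, None]
```

In flight, the dark map would be refreshed from star-free pixels only, so that real star signals do not bias the estimate.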

  20. Image matching as a data source for forest inventory - Comparison of Semi-Global Matching and Next-Generation Automatic Terrain Extraction algorithms in a typical managed boreal forest environment

    NASA Astrophysics Data System (ADS)

    Kukkonen, M.; Maltamo, M.; Packalen, P.

    2017-08-01

    Image matching is emerging as a compelling alternative to airborne laser scanning (ALS) as a data source for forest inventory and management. There is currently an open discussion in the forest inventory community about whether, and to what extent, the new method can be applied to practical inventory campaigns. This paper aims to contribute to this discussion by comparing two different image matching algorithms (Semi-Global Matching [SGM] and Next-Generation Automatic Terrain Extraction [NGATE]) and ALS in a typical managed boreal forest environment in southern Finland. Spectral features from unrectified aerial images were included in the modeling and the potential of image matching in areas without a high resolution digital terrain model (DTM) was also explored. Plot level predictions for total volume, stem number, basal area, height of basal area median tree and diameter of basal area median tree were modeled using an area-based approach. Plot level dominant tree species were predicted using a random forest algorithm, also using an area-based approach. The statistical difference between the error rates from different datasets was evaluated using a bootstrap method. Results showed that ALS outperformed image matching with every forest attribute, even when a high resolution DTM was used for height normalization and spectral information from images was included. Dominant tree species classification with image matching achieved accuracy levels similar to ALS regardless of the resolution of the DTM when spectral metrics were used. Neither of the image matching algorithms consistently outperformed the other, but there were noticeably different error rates depending on the parameter configuration, spectral band, resolution of DTM, or response variable. This study showed that image matching provides reasonable point cloud data for forest inventory purposes, especially when a high resolution DTM is available and information from the understory is redundant.

  1. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls but also with the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
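
TPIE itself is a C++ library, but the I/O-efficient paradigm it supports can be illustrated with a short Python sketch of external merge sort, which sorts a file while keeping only a bounded number of records in memory (assumes newline-terminated integer records):

```python
import heapq
import os
import tempfile

def external_sort(path, out_path, mem_items=4):
    """I/O-efficient sort: read memory-sized runs, sort each run,
    spill it to a temporary run file, then k-way merge the runs
    lazily with a heap (heapq.merge never loads a whole run)."""
    runs = []
    with open(path) as f:
        while True:
            chunk = [line for _, line in zip(range(mem_items), f)]
            if not chunk:
                break
            chunk.sort(key=int)
            r = tempfile.NamedTemporaryFile("w+", delete=False)
            r.writelines(chunk)
            r.seek(0)
            runs.append(r)
    with open(out_path, "w") as out:
        out.writelines(heapq.merge(*runs, key=int))
    for r in runs:
        r.close()
        os.unlink(r.name)
```

With run length M and N records, this performs O(N/M) sequential run writes plus one merge pass, which is the access pattern I/O-efficient systems like TPIE are built to encourage.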

  2. A Simple Two Aircraft Conflict Resolution Algorithm

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    1999-01-01

Conflict detection and resolution methods are crucial for distributed air-ground traffic management, in which the crew in the cockpit, dispatchers in operation control centers and air traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and resolution method.
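
The constant-velocity conflict detection step described above reduces to the standard closest-point-of-approach computation, sketched here in 2-D Python with illustrative units:

```python
import math

def cpa(p1, v1, p2, v2):
    """Closest point of approach for two aircraft flying straight at
    constant speed. p*, v* are 2-D (x, y) position [nm] and velocity
    [nm/min] tuples. Returns (time_to_go_min, min_separation_nm)."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]      # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]      # relative velocity
    v2mag = vx * vx + vy * vy
    if v2mag == 0.0:                            # same velocity: range fixed
        return 0.0, math.hypot(rx, ry)
    t = max(0.0, -(rx * vx + ry * vy) / v2mag)  # time of closest approach
    return t, math.hypot(rx + vx * t, ry + vy * t)

# Head-on geometry: 20 nm apart, closing at 10 nm/min, 1 nm lateral offset.
t, sep = cpa((0, 0), (5, 0), (20, 1), (-5, 0))
# t == 2.0 min, sep == 1.0 nm
```

A conflict is then declared when the predicted minimum separation falls below the required standard within the look-ahead horizon, triggering the resolution logic.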

  3. Stochastic derivative-free optimization using a trust region framework

    DOE PAGES

    Larson, Jeffrey; Billups, Stephen C.

    2016-02-17

This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. As a result, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
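
The step-length adjustment at the heart of any trust region method can be sketched as follows. This shows only the generic radius-update rule with conventional threshold values, not the paper's stochastic algorithm:

```python
def update_radius(actual_red, predicted_red, radius,
                  eta1=0.25, eta2=0.75, shrink=0.5, grow=2.0):
    """Generic trust-region rule: compare the actual reduction in f
    with the reduction the local model predicted. Shrink the trust
    region when the model over-promised; grow it when the model was
    accurate; otherwise keep it unchanged."""
    rho = actual_red / predicted_red     # model-agreement ratio
    if rho < eta1:                       # poor model: distrust it
        return radius * shrink
    if rho > eta2:                       # good model: take larger steps
        return radius * grow
    return radius

print(update_radius(0.1, 1.0, 1.0))   # 0.5  (model over-promised)
print(update_radius(0.9, 1.0, 1.0))   # 2.0  (model accurate)
```

With noisy evaluations, the "actual" reduction is itself corrupted, which is why the paper's analysis requires care about when to trust the observed ratio.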

  4. Dynamic bandwidth allocation based on multiservice in software-defined wavelength-division multiplexing time-division multiplexing passive optical network

    NASA Astrophysics Data System (ADS)

    Wang, Fu; Liu, Bo; Zhang, Lijia; Jin, Feifei; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun

    2017-03-01

The wavelength-division multiplexing passive optical network (WDM-PON) is a potential technology to carry multiple services in an optical access network. However, it suffers from high cost and technology that is not yet mature for users. A software-defined WDM/time-division multiplexing PON was proposed to meet the requirements of high bandwidth, high performance, and multiple services. A reasonable and effective uplink dynamic bandwidth allocation algorithm was proposed. A controller with dynamic wavelength and slot assignment was introduced, and a differentiated optical dynamic bandwidth management strategy was formulated flexibly for services of different priorities according to the network loading. The simulation compares the proposed algorithm with the interleaved polling with adaptive cycle time algorithm. The algorithm shows better performance in average delay, throughput, and bandwidth utilization. The results show that the delay is reduced to 62% and the throughput is improved by 35%.
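
A priority-weighted uplink grant of the general kind described can be sketched in a few lines. This is a simplified illustration with hypothetical numbers, not the proposed algorithm:

```python
def allocate(requests, capacity, weights):
    """Weighted dynamic bandwidth allocation: grant each ONU its full
    request when total demand fits the cycle capacity; otherwise split
    the capacity in proportion to priority weights, never granting
    more than was requested."""
    demand = sum(requests)
    if demand <= capacity:
        return list(requests)
    total_w = sum(weights)
    return [min(req, capacity * w / total_w)
            for req, w in zip(requests, weights)]

# Three ONUs request 60, 30, 30 Mb/s on a 100 Mb/s cycle,
# priority weights 3:1:1 -> grants of 60, 20, 20.
print(allocate([60, 30, 30], 100, [3, 1, 1]))
```

A production DBA would additionally redistribute any capacity left unused by the min() cap and adapt the weights to the observed network load, as the abstract's load-dependent strategy suggests.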

  5. Passive microwave soil moisture downscaling using vegetation index and skin surface temperature

    USDA-ARS?s Scientific Manuscript database

    Soil moisture satellite estimates are available from a variety of passive microwave satellite sensors, but their spatial resolution is frequently too coarse for use by land managers and other decision makers. In this paper, a soil moisture downscaling algorithm based on a regression relationship bet...

  6. Predicting patterns of vulnerability to climate change in near coastal species using an algorithm-based risk assessment framework

    EPA Science Inventory

    Near-coastal (0-200 depth) ecosystems and species are under threat from increasing temperatures, ocean acidification, and sea level rise. However, species vary in their vulnerability to specific climatic changes and climate impacts will vary geographically. For management to resp...

  7. A Left-Hemisphere Model for Right-Hemisphere Programmers.

    ERIC Educational Resources Information Center

    Krantz, Gordon C.

    The paper presents an action-and-decision (left-hemisphere) algorithm as a model for planning by holistic, intuitive (right-hemisphere) managers of service programs, including programs for exceptional children. Because the model is not based upon an established literature in the field of service to exceptional individuals, and because it appears…

  8. A New Artificial Neural Network Enhanced by the Shuffled Complex Evolution Optimization with Principal Component Analysis (SP-UCI) for Water Resources Management

    NASA Astrophysics Data System (ADS)

    Hayatbini, N.; Faridzad, M.; Yang, T.; Akbari Asanjan, A.; Gao, X.; Sorooshian, S.

    2016-12-01

Artificial Neural Networks (ANNs) are useful in many fields, including water resources engineering and management. However, due to the non-linear and chaotic characteristics associated with natural processes and human decision making, the use of ANNs in real-world applications is still limited, and their performance needs to be further improved for broader practical use. The commonly used Back-Propagation (BP) scheme and gradient-based optimization for training ANNs have already been found to be problematic in some cases. The BP scheme and gradient-based optimization methods carry the risk of premature convergence and of getting stuck in local optima, and their search is highly dependent on initial conditions. Therefore, as an alternative to BP and gradient-based searching schemes, we propose an effective and efficient global searching method, termed the Shuffled Complex Evolutionary Global optimization algorithm with Principal Component Analysis (SP-UCI), to train the ANN connectivity weights. A large number of real-world datasets are tested with the SP-UCI-based ANN, as well as various popular Evolutionary Algorithm (EA)-enhanced ANNs, i.e., Particle Swarm Optimization (PSO)-, Genetic Algorithm (GA)-, Simulated Annealing (SA)-, and Differential Evolution (DE)-enhanced ANNs. Results show that the SP-UCI-enhanced ANN is generally superior to the other EA-enhanced ANNs with regard to convergence and computational performance. In addition, we carried out a case study of hydropower scheduling for Trinity Lake in the western U.S. In this case study, multiple climate indices are used as predictors for the SP-UCI-enhanced ANN. The reservoir inflows and hydropower releases are predicted at sub-seasonal to seasonal scales. 
Results show that SP-UCI-enhanced ANN is able to achieve better statistics than other EAs-based ANN, which implies the usefulness and powerfulness of proposed SP-UCI-enhanced ANN for reservoir operation, water resources engineering and management. The SP-UCI-enhanced ANN is universally applicable to many other regression and prediction problems, and it has a good potential to be an alternative to the classical BP scheme and gradient-based optimization methods.
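    The core idea, replacing gradient-based training with a population-based global search over the network weights, can be sketched as follows. This is a minimal (mu+lambda) evolution strategy on a toy one-input network, not the SP-UCI algorithm itself; the task, network size, and hyperparameters are illustrative:

```python
import math, random

random.seed(0)

# Toy regression task: learn y = sin(x) on [0, 3.1].
data = [(x / 10.0, math.sin(x / 10.0)) for x in range(32)]

N_HIDDEN = 5
N_W = 3 * N_HIDDEN + 1  # per hidden unit: input weight, bias, output weight; plus output bias

def predict(w, x):
    # One-input, one-output network with a tanh hidden layer.
    out = w[-1]
    for j in range(N_HIDDEN):
        h = math.tanh(w[3 * j] * x + w[3 * j + 1])
        out += w[3 * j + 2] * h
    return out

def mse(w):
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

def evolve(pop_size=30, n_children=60, sigma=0.3, n_gen=200):
    # (mu + lambda) evolution strategy: only fitness evaluations, no gradients,
    # so there is no back-propagation and far less dependence on the start point.
    pop = [[random.uniform(-1, 1) for _ in range(N_W)] for _ in range(pop_size)]
    for _ in range(n_gen):
        children = [[wi + random.gauss(0, sigma) for wi in random.choice(pop)]
                    for _ in range(n_children)]
        pop = sorted(pop + children, key=mse)[:pop_size]  # elitist truncation selection
        sigma *= 0.99  # anneal the mutation step for fine convergence
    return pop[0]

best = evolve()
print("training MSE:", round(mse(best), 4))
```

    SP-UCI additionally shuffles complexes and uses principal component analysis to keep the search population full-rank; the sketch above only captures the gradient-free, population-based character of that family of methods.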

  9. Putting health status guided COPD management to the test: protocol of the MARCH study.

    PubMed

    Kocks, Janwillem; de Jong, Corina; Berger, Marjolein Y; Kerstjens, Huib A M; van der Molen, Thys

    2013-07-04

    Chronic Obstructive Pulmonary Disease (COPD) is a disease state characterized by airflow limitation that is not fully reversible and is usually progressive. Current guidelines, including the Dutch guidelines, have so far based their management strategy mainly on lung function impairment as measured by FEV1, while it is well known that FEV1 correlates poorly with almost all features of COPD that matter to patients. Based on this discrepancy, the GOLD 2011 update included symptoms and impact in its proposed treatment algorithm. Health status measures capture both symptoms and impact and could therefore be used as a standardized way to capture the information a doctor could otherwise only collect by careful history taking and recording. We hypothesize that a treatment algorithm based on a simple, validated 10-item health status questionnaire, the Clinical COPD Questionnaire (CCQ), improves health status (as measured by the SGRQ) and classical COPD outcomes such as exacerbation frequency, patient satisfaction, and health care utilization compared with usual care based on guidelines. This hypothesis will be tested in a randomized controlled trial (RCT) following 330 patients for two years. During this period, general practitioners will receive treatment advice every four months based on the patient's health status (in half of the patients, the intervention group) or on lung function (the remaining half, the usual care group). During the design process, the selection of outcomes and the development of the treatment algorithm were challenging. This is discussed in detail in the manuscript to facilitate researchers in designing future studies in this changing field of implementation research. Netherlands Trial Register, NTR2643.

  10. "Symptom-based insulin adjustment for glucose normalization" (SIGN) algorithm: a pilot study.

    PubMed

    Lee, Joyce Yu-Chia; Tsou, Keith; Lim, Jiahui; Koh, Feaizen; Ong, Sooim; Wong, Sabrina

    2012-12-01

    Lack of self-monitoring of blood glucose (SMBG) records in actual practice settings continues to create therapeutic challenges for clinicians, especially in adjusting insulin therapy. In order to overcome this clinical obstacle, a "Symptom-based Insulin adjustment for Glucose Normalization" (SIGN) algorithm was developed to guide clinicians in caring for patients with uncontrolled type 2 diabetes who have few to no SMBG records. This study examined the clinical outcome and safety of the SIGN algorithm. Glycated hemoglobin (HbA1c), insulin usage, and insulin-related adverse effects of a total of 114 patients with uncontrolled type 2 diabetes who refused to use SMBG or performed SMBG once a day for less than three times per week were studied 3 months prior to the implementation of the algorithm and prospectively at every 3-month interval for a total of 6 months after the algorithm implementation. Patients with type 1 diabetes, nonadherence to diabetes medications, or who were not on insulin therapy at any time during the study period were excluded from this study. Mean HbA1c improved by 0.29% at 3 months (P = 0.015) and 0.41% at 6 months (P = 0.006) after algorithm implementation. A slight increase in HbA1c was observed when the algorithm was not implemented. There were no major hypoglycemic episodes. The number of minor hypoglycemic episodes was minimal, with the majority of cases due to irregular meal habits. The SIGN algorithm appeared to offer a viable and safe approach when managing patients with uncontrolled type 2 diabetes who have few to no SMBG records.

  11. Development of traffic control and queue management procedures for oversaturated arterials

    DOT National Transportation Integrated Search

    1997-01-01

    The formulation and solution of a new algorithm for queue management and coordination of traffic signals along oversaturated arterials are presented. Existing traffic-control and signal-coordination algorithms deal only with undersaturated steady-sta...

  12. Error Sources in Processing LIDAR Based Bridge Inspection

    NASA Astrophysics Data System (ADS)

    Bian, H.; Chen, S. E.; Liu, W.

    2017-09-01

    Bridge inspection is a critical task in infrastructure management and is facing unprecedented challenges after a series of bridge failures. Prevailing visual inspection has been insufficient for providing reliable, quantitative bridge information, even though a systematic quality management framework was built to ensure visual bridge inspection data quality and to minimize errors during the inspection process. LiDAR-based remote sensing is recommended as an effective tool for overcoming some of the disadvantages of visual inspection. In order to evaluate the potential of applying this technology to bridge inspection, some of the error sources in LiDAR-based bridge inspection are analysed. Scanning-angle variance during field data collection and differences in algorithm design during scan-data processing were found to introduce errors into inspection results. Beyond studying these error sources, further attention should be paid to improving inspection data quality, and statistical analysis could be employed in the future to evaluate the inspection process, which contains a series of uncertain factors. Overall, the development of a reliable bridge inspection system requires not only the improvement of data processing algorithms, but also systematic measures to mitigate possible errors across the entire inspection workflow. If LiDAR or some other technology is accepted as a supplement to visual inspection, the current quality management framework will need to be modified or redesigned, and this would be as urgent as refining the inspection techniques themselves.

  13. Evidence based management of polyps of the gall bladder: A systematic review of the risk factors of malignancy.

    PubMed

    Bhatt, Nikita R; Gillis, Amy; Smoothey, Craig O; Awan, Faisal N; Ridgway, Paul F

    2016-10-01

    There are no evidence-based guidelines to dictate when Gallbladder Polyps (GBPs) of varying sizes should be resected. Our aims were to identify factors that accurately predict malignant disease in GBPs and to provide an evidence-based algorithm for management. A systematic review following PRISMA guidelines was performed using the terms "gallbladder polyps" AND "polypoid lesion of gallbladder", covering January 1993 to September 2013. Inclusion criteria required a histopathological report or 2 years of follow-up. The RTI-IB tool was used for quality analysis. The correlation between GBP size and malignant potential was analysed using Euclidean distance; a logistic mixed-effects model was used to assess independent risk factors for malignancy. Fifty-three articles were included in the review. Data from 21 studies were pooled for analysis. The optimum size cut-off for resection of GBPs was 10 mm. The probability of malignancy is approximately zero at sizes <4.15 mm. Patient age >50 years, sessile morphology, and single polyps were independent risk factors for malignancy. For polyps sized 4-10 mm, a risk assessment model was formulated. This review and analysis provides an evidence-based algorithm for the management of GBPs. Longitudinal studies are needed to better understand the behaviour of polyps <10 mm, which are not at high risk of malignancy but may change over time. Copyright © 2016 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.
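    The size-plus-risk-factor triage described above can be expressed as a short decision function. The thresholds (10 mm resection cut-off, roughly 4 mm lower bound) and risk factors (age >50, sessile morphology, single polyp) come from the review; the recommendation wording is an illustrative sketch, not clinical guidance:

```python
def gbp_recommendation(size_mm, age, sessile, single):
    """Triage a gallbladder polyp by size and risk factors.

    The 10 mm resection cut-off, the ~4 mm near-zero-risk bound, and the
    risk factors (age > 50, sessile morphology, single polyp) come from the
    review; the recommendation wording is illustrative, not clinical advice.
    """
    if size_mm >= 10:
        return "consider resection"
    if size_mm < 4:
        return "malignancy probability ~0: routine follow-up"
    # 4-10 mm: count the independent risk factors for malignancy
    risk_factors = sum([age > 50, sessile, single])
    if risk_factors >= 2:
        return "higher risk: consider resection or close surveillance"
    return "lower risk: interval ultrasound surveillance"

print(gbp_recommendation(12, 45, False, False))
print(gbp_recommendation(6, 62, True, True))
```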

  14. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    NASA Astrophysics Data System (ADS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful for managing inter-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone for estimating the ground truth in PET imaging. Therefore, it might be preferred for assessing the accuracy of tumor segmentation methods in PET imaging.
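    For readers unfamiliar with STAPLE, its expectation-maximization core can be sketched for binary segmentations as follows. This is a bare-bones illustration (no spatial prior, fixed foreground prior, fixed iteration count), not the Computational Radiology Laboratory implementation used in the study:

```python
def staple(decisions, n_iter=50, prior=0.5):
    """Minimal STAPLE for binary segmentations.

    decisions: list of rater masks, each a list of 0/1 over the same voxels.
    Returns (consensus probabilities W, sensitivities p, specificities q).
    """
    n_raters = len(decisions)
    n_vox = len(decisions[0])
    p = [0.9] * n_raters  # initial sensitivity estimates
    q = [0.9] * n_raters  # initial specificity estimates
    W = [0.0] * n_vox
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground
        for i in range(n_vox):
            a, b = prior, 1.0 - prior
            for j in range(n_raters):
                d = decisions[j][i]
                a *= p[j] if d else (1.0 - p[j])
                b *= (1.0 - q[j]) if d else q[j]
            W[i] = a / (a + b)
        # M-step: re-estimate each rater's sensitivity and specificity
        sw = sum(W)
        for j in range(n_raters):
            p[j] = sum(W[i] for i in range(n_vox) if decisions[j][i]) / sw
            q[j] = sum(1 - W[i] for i in range(n_vox) if not decisions[j][i]) / (n_vox - sw)
    return W, p, q

# Three raters over 8 voxels; rater 2 over-segments the middle voxels.
masks = [
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 0, 0],
]
W, p, q = staple(masks)
print([round(w, 2) for w in W])
```

    The consensus down-weights the over-segmenting rater once its estimated specificity drops, which is exactly the behavior that makes STAPLE attractive for multi-observer evaluation.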

  15. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    PubMed

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-21

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful for managing inter-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone for estimating the ground truth in PET imaging. Therefore, it might be preferred for assessing the accuracy of tumor segmentation methods in PET imaging.

  16. Energy management and multi-layer control of networked microgrids

    NASA Astrophysics Data System (ADS)

    Zamora, Ramon

    Networked microgrids are groups of neighboring microgrids that have the ability to interchange power when required, in order to increase reliability and resiliency. A networked microgrid can operate in several possible configurations, including: an islanded microgrid, a grid-connected microgrid without a tie-line converter, a grid-connected microgrid with a tie-line converter, and networked microgrids. These possible configurations, together with the specific characteristics of renewable energy, pose challenges in designing control and management algorithms for voltage, frequency, and power in all possible operating scenarios. In this work, the control algorithm is designed based on a large-signal model that enables the microgrid to operate over a wide range of operating points. A combination of a PI controller and feed-forward of measured system responses compensates for changes in operating points. The control architecture developed in this work has multiple layers, with the outer layer having a slower time response than the inner layer. The main responsibility of the designed controls is to regulate voltage magnitude, frequency, and the output power of the DG(s). These local controls also integrate with a microgrid-level energy management system, or microgrid central controller (MGCC), for power and energy balance across the entire microgrid in islanded, grid-connected, or networked-microgrid mode. The MGCC is responsible for coordinating the lower-level controls to achieve reliable and resilient operation. In case of a communication network failure, the decentralized energy management will operate locally and activate droop control. Simulation results indicate the superiority of the designed control algorithms compared to existing ones.
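    The droop-control fallback mentioned above can be illustrated with the classic P-f / Q-V droop relations; the gains and nominal setpoints below are illustrative, not taken from the thesis:

```python
def droop_setpoints(f_meas, v_meas, f0=60.0, v0=1.0, p0=0.0, q0=0.0,
                    mp=0.05, mq=0.05):
    """Classic P-f / Q-V droop: each inverter adjusts its real and reactive
    power output from local frequency and voltage measurements only, so
    power sharing keeps working without a communication network."""
    p = p0 + (f0 - f_meas) / mp  # frequency below nominal -> inject more P
    q = q0 + (v0 - v_meas) / mq  # voltage below nominal -> inject more Q
    return p, q

# A 0.1 Hz frequency sag calls for extra real power:
print(droop_setpoints(59.9, 1.0))
```

    Because every unit reacts to the same locally measured frequency, the extra load is shared in proportion to the droop gains, which is why droop is the standard decentralized fallback when the MGCC is unreachable.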

  17. An Electronic Clinical Decision Support Tool to Assist Primary Care Providers in Cardiovascular Disease Risk Management: Development and Mixed Methods Evaluation

    PubMed Central

    Joshi, Rohina; Webster, Ruth J; Groenestein, Patrick; Usherwood, Tim P; Heeley, Emma; Turnbull, Fiona M; Lipman, Alexandra; Patel, Anushka A

    2009-01-01

    Background Challenges remain in translating the well-established evidence for management of cardiovascular disease (CVD) risk into clinical practice. Although electronic clinical decision support (CDS) systems are known to improve practitioner performance, their development in Australian primary health care settings is limited. Objectives Study aims were to (1) develop a valid CDS tool that assists Australian general practitioners (GPs) in global CVD risk management, and (2) preliminarily evaluate its acceptability to GPs as a point-of-care resource for both general and underserved populations. Methods CVD risk estimation (based on Framingham algorithms) and risk-based management advice (using recommendations from six Australian guidelines) were programmed into a software package. Tool validation: Data from 137 patients attending a physician’s clinic were analyzed to compare the tool’s risk scores with those obtained from an independently programmed algorithm in a separate statistics package. The tool’s management advice was compared with a physician’s recommendations based on a manual review of the guidelines. Field test: The tool was then tested with 21 GPs from eight general practices and three Aboriginal Medical Services. Customized CDS-based recommendations were generated for 200 routinely attending patients (33% Aboriginal) using information extracted from the health record by a research assistant. GPs reviewed these recommendations during each consultation. Changes in CVD risk factor measurement and management were recorded. In-depth interviews with GPs were conducted. Results Validation testing: The tool’s risk assessment algorithm correlated very highly with the independently programmed version in the separate statistics package (intraclass correlation coefficient 0.999). For management advice, there were only two cases of disagreement between the tool and the physician. 
Field test: GPs found 77% (153/200) of patient outputs easy to understand and agreed with screening and prescribing recommendations in 72% and 64% of outputs, respectively; 26% of patients had their CVD risk factor history updated; 73% had at least one CVD risk factor measured or tests ordered. For people assessed at high CVD risk (n = 82), 10% and 9%, respectively, had lipid-lowering and BP-lowering medications commenced or dose adjustments made, while 7% newly commenced anti-platelet medications. Three key qualitative findings emerged: (1) GPs found the tool enabled a systematic approach to care; (2) the tool greatly influenced CVD risk communication; (3) successful implementation into routine care would require integration with practice software, minimal data entry, regular revision with updated guidelines, and a self-auditing feature. There were no substantive differences in study findings for Aboriginal Medical Services GPs, and the tool was generally considered appropriate for use with Aboriginal patients. Conclusion A fully-integrated, self-populating, and potentially Internet-based CDS tool could contribute to improved global CVD risk management in Australian primary health care. The findings from this study will inform a large-scale trial intervention. PMID:20018588

  18. An electronic clinical decision support tool to assist primary care providers in cardiovascular disease risk management: development and mixed methods evaluation.

    PubMed

    Peiris, David P; Joshi, Rohina; Webster, Ruth J; Groenestein, Patrick; Usherwood, Tim P; Heeley, Emma; Turnbull, Fiona M; Lipman, Alexandra; Patel, Anushka A

    2009-12-17

    Challenges remain in translating the well-established evidence for management of cardiovascular disease (CVD) risk into clinical practice. Although electronic clinical decision support (CDS) systems are known to improve practitioner performance, their development in Australian primary health care settings is limited. Study aims were to (1) develop a valid CDS tool that assists Australian general practitioners (GPs) in global CVD risk management, and (2) preliminarily evaluate its acceptability to GPs as a point-of-care resource for both general and underserved populations. CVD risk estimation (based on Framingham algorithms) and risk-based management advice (using recommendations from six Australian guidelines) were programmed into a software package. Tool validation: Data from 137 patients attending a physician's clinic were analyzed to compare the tool's risk scores with those obtained from an independently programmed algorithm in a separate statistics package. The tool's management advice was compared with a physician's recommendations based on a manual review of the guidelines. Field test: The tool was then tested with 21 GPs from eight general practices and three Aboriginal Medical Services. Customized CDS-based recommendations were generated for 200 routinely attending patients (33% Aboriginal) using information extracted from the health record by a research assistant. GPs reviewed these recommendations during each consultation. Changes in CVD risk factor measurement and management were recorded. In-depth interviews with GPs were conducted. Validation testing: the tool's risk assessment algorithm correlated very highly with the independently programmed version in the separate statistics package (intraclass correlation coefficient 0.999). For management advice, there were only two cases of disagreement between the tool and the physician. 
Field test: GPs found 77% (153/200) of patient outputs easy to understand and agreed with screening and prescribing recommendations in 72% and 64% of outputs, respectively; 26% of patients had their CVD risk factor history updated; 73% had at least one CVD risk factor measured or tests ordered. For people assessed at high CVD risk (n = 82), 10% and 9%, respectively, had lipid-lowering and BP-lowering medications commenced or dose adjustments made, while 7% newly commenced anti-platelet medications. Three key qualitative findings emerged: (1) GPs found the tool enabled a systematic approach to care; (2) the tool greatly influenced CVD risk communication; (3) successful implementation into routine care would require integration with practice software, minimal data entry, regular revision with updated guidelines, and a self-auditing feature. There were no substantive differences in study findings for Aboriginal Medical Services GPs, and the tool was generally considered appropriate for use with Aboriginal patients. A fully-integrated, self-populating, and potentially Internet-based CDS tool could contribute to improved global CVD risk management in Australian primary health care. The findings from this study will inform a large-scale trial intervention.
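    The risk-estimation step such a tool is built around can be sketched as a logistic score followed by risk-stratified advice. The coefficients, intercept, and threshold below are hypothetical placeholders, not the published Framingham equations or the guideline cut-points the tool actually uses:

```python
import math

# Illustrative coefficients only -- NOT the published Framingham equations.
BETA = {"age": 0.06, "sbp": 0.02, "tchol_hdl_ratio": 0.3,
        "smoker": 0.6, "diabetes": 0.7}
BASELINE = -9.0  # hypothetical intercept

def cvd_risk(age, sbp, tchol_hdl_ratio, smoker, diabetes):
    """Logistic-style absolute risk estimate, mimicking the structure of a
    Framingham-type score for illustration."""
    lp = (BASELINE + BETA["age"] * age + BETA["sbp"] * sbp
          + BETA["tchol_hdl_ratio"] * tchol_hdl_ratio
          + BETA["smoker"] * smoker + BETA["diabetes"] * diabetes)
    return 1.0 / (1.0 + math.exp(-lp))

def risk_based_advice(risk, threshold=0.15):
    # Risk-stratified advice, analogous to the tool's per-patient output.
    if risk >= threshold:
        return "high risk: consider pharmacotherapy plus lifestyle advice"
    return "lower risk: lifestyle advice and routine reassessment"

r = cvd_risk(age=62, sbp=150, tchol_hdl_ratio=5.5, smoker=True, diabetes=False)
print(round(r, 3), risk_based_advice(r))
```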

  19. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to reduce fuel consumption, an adaptive genetic algorithm scheme has been designed to adaptively manage energy resource usage. The objective function of the genetic algorithm is implemented by a fuzzy logic controller which closely monitors and models the driving conditions and environment of PHEVs, thus trading off petrol versus electricity for optimal driving efficiency. Comparison between calculated results and published data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974
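    The overall scheme, a genetic algorithm searching over petrol-versus-electric split decisions against a driving-condition-aware objective, can be sketched as follows. The drive cycle, cost surface, and GA settings are illustrative stand-ins, with the fuzzy logic controller reduced to a simple load-dependent penalty:

```python
import random

random.seed(1)

# Hypothetical per-segment power demand (kW) for a short drive cycle.
demand = [12, 30, 8, 45, 20, 5, 35, 15]

def trip_cost(split):
    """Cost of a candidate energy-split plan: split[i] in [0, 1] is the
    electric fraction for segment i. The cost surface is an illustrative
    stand-in for the paper's fuzzy objective: petrol is assumed inefficient
    at low load, the battery inefficient at high load."""
    cost = 0.0
    for frac, p in zip(split, demand):
        petrol = (1 - frac) * p * (1.2 if p < 15 else 1.0)  # idle-inefficiency penalty
        electric = frac * p * (1.0 if p < 25 else 1.3)      # high-current losses
        cost += petrol + electric
    return cost

def genetic_search(n_pop=40, n_gen=120, mut=0.1):
    # Plain elitist GA: truncation selection, one-point crossover, Gaussian mutation.
    pop = [[random.random() for _ in demand] for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=trip_cost)
        survivors = pop[: n_pop // 2]
        children = []
        while len(survivors) + len(children) < n_pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(demand))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, g + random.gauss(0, mut))) for g in child]
            children.append(child)
        pop = survivors + children
    return min(pop, key=trip_cost)

best = genetic_search()
print("best plan cost:", round(trip_cost(best), 1))
```

    On this toy cost surface the all-petrol plan costs 175, so a plan below that shows the GA learning to assign low-load segments to the battery and high-load segments to petrol.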

  20. Applicability of an established management algorithm for colon injuries following blunt trauma.

    PubMed

    Sharpe, John P; Magnotti, Louis J; Weinberg, Jordan A; Shahan, Charles P; Cullinan, Darren R; Fabian, Timothy C; Croce, Martin A

    2013-02-01

    Operative management of all colon injuries at our institution has followed a defined algorithm (ALG) based on risk factors originally identified for penetrating injuries. The purpose of this study was to evaluate the applicability of the ALG to blunt colon injuries. Patients with blunt colon injuries during a 13-year period were identified. As per the ALG, nondestructive (ND) injuries are treated with primary repair. Patients with destructive wounds (serosal tear of ≥50% of the colon circumference, mesenteric devascularization, or perforation) and concomitant risk factors (transfusion of >6 U of packed red blood cells and/or presence of significant comorbidities) are diverted, while patients with no risk factors undergo resection plus anastomosis (RA). Outcomes included suture line failure (SLF), abscess, and mortality. Stratification analysis was performed to determine additional risk factors in the management of blunt colon injuries. A total of 151 patients were identified: 76 with destructive injuries and 75 with ND injuries. Of those with destructive injuries, 44 (59%) underwent RA and 29 (39%) underwent diversion. All ND injuries underwent primary repair. Adherence to the ALG was 95%: three patients with destructive injuries underwent primary repair, and five patients with risk factors underwent RA. There were three SLFs (2%) (one involving a deviation from the ALG) and eight abscesses (5%). Colon-related mortality was 2.1%. Stratification analysis based on mesenteric involvement, degree of shock, and need for abbreviated laparotomy failed to identify additional risk factors for SLF following RA for blunt colon injuries. Adherence to an ALG, originally defined for penetrating colon injuries, simplified the management of blunt colon injuries. ND injuries should be primarily repaired. For destructive wounds, management based on a defined ALG achieves an acceptably low morbidity and mortality rate. Prognostic/epidemiologic study, level III; therapeutic study, level IV.

  1. Wildlife tradeoffs based on landscape models of habitat

    USGS Publications Warehouse

    Loehle, C.; Mitchell, M.S.

    2000-01-01

    It is becoming increasingly clear that the spatial structure of landscapes affects the habitat choices and abundance of wildlife. In contrast to wildlife management based on preserving critical habitat features, such as nest sites on a beach or mast trees, it has not been obvious how to incorporate spatial structure into management plans. We present techniques to accomplish this goal. We used multiscale logistic regression models developed previously for neotropical migrant bird species habitat use in South Carolina (USA) as a basis for these techniques. Based on these models we used a spatial optimization technique to generate optimal maps (probability of occurrence, P = 1.0) for each of seven species. To emulate management of a forest for maximum species diversity, we defined the objective function of the algorithm as the sum of probabilities over the seven species, resulting in a complex map that allowed all seven species to coexist. The map that allowed for coexistence is not obvious, must be computed algorithmically, and would be difficult to realize using rules of thumb for habitat management. To assess how management of a forest for a single species of interest might affect other species, we analyzed tradeoffs by gradually increasing the weighting on a single species in the objective function over a series of simulations. We found that as habitat was increasingly modified to favor that species, the probability of presence for two of the other species was driven to zero. This shows that whereas it is not possible to simultaneously maximize the likelihood of presence for multiple species with divergent habitat preferences, compromise solutions are possible at less than maximal likelihood in many cases. Our approach suggests that the efficiency of habitat management for species diversity can be maximized even for small landscapes by incorporating spatial context.
The methods we present are suitable for wildlife management, endangered species conservation, and nature reserve design.
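    The optimization described, maximizing the summed presence probabilities of several species over a landscape, can be sketched with a single-cell-flip hill climb over a small grid. The three logistic habitat models below are hypothetical (two forest-associated species and one open-habitat species), not the fitted South Carolina models:

```python
import math, random

random.seed(2)

SIZE = 6  # a 6x6 landscape; each cell is habitat type 0 (open) or 1 (forest)

def neighborhood_forest(grid, r, c):
    # Fraction of forest in the 3x3 window around (r, c): a crude
    # "spatial context" covariate standing in for the multiscale models.
    cells = [grid[i][j]
             for i in range(max(0, r - 1), min(SIZE, r + 2))
             for j in range(max(0, c - 1), min(SIZE, c + 2))]
    return sum(cells) / len(cells)

# Hypothetical logistic habitat models (intercept, forest coefficient).
SPECIES = [(-4.0, 8.0), (4.0, -8.0), (-2.0, 4.0)]

def p_presence(b0, b1, forest_frac):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * forest_frac)))

def objective(grid):
    # Sum of presence probabilities over all species and cells, mirroring
    # the multi-species objective function used for the optimal maps.
    return sum(p_presence(b0, b1, neighborhood_forest(grid, r, c))
               for r in range(SIZE) for c in range(SIZE)
               for b0, b1 in SPECIES)

grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
score = start = objective(grid)
for _ in range(2000):  # single-cell-flip hill climbing
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] ^= 1
    new = objective(grid)
    if new >= score:
        score = new
    else:
        grid[r][c] ^= 1  # revert flips that hurt the objective
print("optimized objective:", round(score, 2))
```

    Reweighting one species in `objective` and re-running the climb reproduces the tradeoff experiment in miniature: the optimized map drifts toward that species' preferred habitat at the others' expense.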

  2. Effect of symptom-based risk stratification on the costs of managing patients with chronic rhinosinusitis symptoms.

    PubMed

    Tan, Bruce K; Lu, Guanning; Kwasny, Mary J; Hsueh, Wayne D; Shintani-Smith, Stephanie; Conley, David B; Chandra, Rakesh K; Kern, Robert C; Leung, Randy

    2013-11-01

    Current symptom criteria poorly predict a diagnosis of chronic rhinosinusitis (CRS), resulting in excessive treatment of patients with presumed CRS. The objective of this study was to analyze the positive predictive value of individual symptoms, or symptoms in combination, in patients with CRS symptoms, and to examine the costs of the subsequent diagnostic algorithm using a decision tree-based cost analysis. We analyzed previously collected patient-reported symptoms from a cross-sectional study of patients who had received a computed tomography (CT) scan of their sinuses at a tertiary care otolaryngology clinic for evaluation of CRS symptoms to calculate the positive predictive value of individual symptoms. Classification and regression tree (CART) analysis then optimized combinations of symptoms and thresholds to identify CRS patients. The calculated positive predictive values were applied to a previously developed decision tree that compared an upfront CT (uCT) algorithm against an empiric medical therapy (EMT) algorithm, with further analysis considering the availability of point-of-care (POC) imaging. The positive predictive value of individual symptoms ranged from 0.21 for patients reporting forehead pain to 0.69 for patients reporting hyposmia. The CART model constructed a dichotomous model based on forehead pain, maxillary pain, hyposmia, nasal discharge, and facial pain (C-statistic 0.83). If POC CT were available, median costs ($64-$415) favored using upfront CT for all individual symptoms. If POC CT were unavailable, median costs favored uCT for most symptoms except intercanthal pain (-$15), hyposmia (-$100), and discolored nasal discharge (-$24), although these symptoms became equivocal on cost sensitivity analysis. The three-tiered CART model could subcategorize patients into tiers for which uCT was always favorable (median costs: $332-$504) and others for which EMT was always favorable (median costs -$121 to -$275). The uCT algorithm was always more costly if the nasal endoscopy was positive. Among patients with classic CRS symptoms, the frequency of individual symptoms changed the likelihood of a CRS diagnosis only marginally. Only hyposmia, the absence of facial pain, and discolored discharge sufficiently increased the likelihood of diagnosis to potentially make EMT less costly. The development of an evidence-based, multisymptom risk stratification model could substantially affect the management costs of the subsequent diagnostic algorithm. © 2013 ARS-AAOA, LLC.
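    The uCT-versus-EMT comparison reduces to an expected-cost calculation over the decision tree. The sketch below uses hypothetical dollar figures and a deliberately simplified branch structure; it only illustrates why low-PPV symptoms tend to favor upfront CT while high-PPV symptoms tend to favor empiric therapy:

```python
def expected_cost_uct(ppv, cost_ct, cost_treat):
    # Up-front CT: everyone is scanned, only true CRS cases are treated.
    return cost_ct + ppv * cost_treat

def expected_cost_emt(ppv, cost_ct, cost_treat):
    # Empiric medical therapy: everyone is treated first; non-responders
    # (assumed here to be the non-CRS fraction) still get a CT afterwards.
    return cost_treat + (1 - ppv) * cost_ct

def preferred_strategy(ppv, cost_ct=250.0, cost_treat=200.0):
    """Toy decision-tree comparison; the dollar figures and branch structure
    are hypothetical stand-ins for the published model, which has more branches."""
    uct = expected_cost_uct(ppv, cost_ct, cost_treat)
    emt = expected_cost_emt(ppv, cost_ct, cost_treat)
    return ("uCT" if uct < emt else "EMT"), round(uct - emt, 2)

print(preferred_strategy(0.21))  # low-PPV symptom (e.g., forehead pain)
print(preferred_strategy(0.69))  # high-PPV symptom (e.g., hyposmia)
```

    Even this two-branch version reproduces the qualitative pattern in the abstract: as a symptom's PPV rises, the expected cost of empiric therapy falls relative to scanning everyone first.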

  3. Estimating Western U.S. Reservoir Sedimentation

    NASA Astrophysics Data System (ADS)

    Bensching, L.; Livneh, B.; Greimann, B. P.

    2017-12-01

    Reservoir sedimentation is a long-term problem for water management across the western U.S. Observations of sedimentation are limited to reservoir surveys, which are costly and infrequent; many reservoirs have only two or fewer surveys. This work aims to apply a recently developed ensemble of sediment algorithms to estimate sedimentation over several western U.S. reservoirs. The sediment algorithms include empirical, conceptual, stochastic, and process-based approaches and are coupled with a hydrologic modeling framework. Preliminary results showed that the more complex, process-based algorithms performed better in predicting high sediment flux values and in a basin transferability experiment. However, more testing and validation are required to confirm the sediment models' skill. This work is carried out in partnership with the Bureau of Reclamation, with the goal of evaluating the viability of reservoir sediment yield prediction across the western U.S. using a multi-algorithm approach. Simulations of streamflow and sediment fluxes are validated against observed discharges, as well as against a Reservoir Sedimentation Information database being developed by the US Army Corps of Engineers. Specific goals of this research include (i) quantifying whether inter-algorithm differences consistently capture observational variability; (ii) identifying whether certain categories of models consistently produce the best results; and (iii) assessing the expected sedimentation life-span of several western U.S. reservoirs through long-term simulations.
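    As a concrete example of the empirical end of such a multi-algorithm ensemble, a sediment rating curve Qs = a*Q^b can be fitted by least squares in log-log space (the conceptual, stochastic, and process-based members are far more involved; the data below are synthetic):

```python
import math

def fit_rating_curve(q, qs):
    """Fit the empirical sediment rating curve Qs = a * Q**b by ordinary
    least squares in log-log space, the simplest member of this kind of
    multi-algorithm ensemble."""
    x = [math.log(v) for v in q]
    y = [math.log(v) for v in qs]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic discharge (m^3/s) and sediment fluxes following Qs = 0.5 * Q**1.6
flows = [5.0, 10.0, 20.0, 40.0, 80.0]
fluxes = [0.5 * q ** 1.6 for q in flows]
a, b = fit_rating_curve(flows, fluxes)
print(round(a, 3), round(b, 3))
```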

  4. NLP based congestive heart failure case finding: A prospective analysis on statewide electronic medical records.

    PubMed

    Wang, Yue; Luo, Jin; Hao, Shiying; Xu, Haihua; Shin, Andrew Young; Jin, Bo; Liu, Rui; Deng, Xiaohong; Wang, Lijuan; Zheng, Le; Zhao, Yifan; Zhu, Chunqing; Hu, Zhongkai; Fu, Changlin; Hao, Yanpeng; Zhao, Yingzhen; Jiang, Yunliang; Dai, Dorothy; Culver, Devore S; Alfreds, Shaun T; Todd, Rogow; Stearns, Frank; Sylvester, Karl G; Widen, Eric; Ling, Xuefeng B

    2015-12-01

    In order to proactively manage congestive heart failure (CHF) patients, an effective CHF case-finding algorithm is required that processes both structured and unstructured electronic medical records (EMRs) to allow complementary and cost-efficient identification of CHF patients. We set out to identify CHF cases from both codified EMR data and cases found through natural language processing (NLP). Using narrative clinical notes from all Maine Health Information Exchange (HIE) patients, the NLP case-finding algorithm was retrospectively developed (July 1, 2012-June 30, 2013) with a random subset of HIE-associated facilities, and blind-tested with the remaining facilities. The NLP-based method was integrated into a live HIE population exploration system and validated prospectively (July 1, 2013-June 30, 2014). A total of 18,295 codified CHF patients were included in the Maine HIE. Among the 253,803 subjects without CHF codes, our case-finding algorithm prospectively identified 2411 uncodified CHF cases. The positive predictive value (PPV) was 0.914, and 70.1% of these 2411 cases were found to have CHF histories in the clinical notes. A CHF case-finding algorithm was developed, tested, and prospectively validated. The successful integration of the CHF case-finding algorithm into the live Maine HIE system is expected to improve CHF care in Maine. Copyright © 2015. Published by Elsevier Ireland Ltd.
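    A minimal version of the NLP case-finding step, keyword patterns over narrative notes plus a PPV check against chart review, might look like the following. The trigger phrases and notes are hypothetical, and a production system would add negation handling, section detection, and a much richer lexicon:

```python
import re

# Hypothetical trigger phrases, not the study's actual lexicon.
CHF_PATTERNS = [
    r"\bcongestive heart failure\b",
    r"\bCHF\b",
    r"\breduced ejection fraction\b",
]

def flag_chf(note):
    """Flag a clinical note as a possible CHF case by pattern matching."""
    return any(re.search(p, note, flags=re.IGNORECASE) for p in CHF_PATTERNS)

def ppv(flags, truth):
    # Positive predictive value: chart-confirmed cases / all flagged cases.
    tp = sum(1 for f, t in zip(flags, truth) if f and t)
    fp = sum(1 for f, t in zip(flags, truth) if f and not t)
    return tp / (tp + fp)

notes = [
    "Pt with congestive heart failure, on furosemide.",
    "History of CHF exacerbation last winter.",
    "Knee pain after fall; no cardiac history.",
    "CHF noted in prior discharge summary.",
]
chart_confirmed = [True, True, False, False]  # hypothetical chart review
flags = [flag_chf(n) for n in notes]
print(flags, round(ppv(flags, chart_confirmed), 2))
```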

  5. A mobile phone based tool to identify symptoms of common childhood diseases in Ghana: development and evaluation of the integrated clinical algorithm in a cross-sectional study.

    PubMed

    Franke, Konstantin H; Krumkamp, Ralf; Mohammed, Aliyu; Sarpong, Nimako; Owusu-Dabo, Ellis; Brinkel, Johanna; Fobil, Julius N; Marinovic, Axel Bonacic; Asihene, Philip; Boots, Mark; May, Jürgen; Kreuels, Benno

    2018-03-27

    The aim of this study was the development and evaluation of an algorithm-based diagnostic tool, usable on mobile phones, to support guardians in providing appropriate care to sick children. The algorithm was developed on the basis of the Integrated Management of Childhood Illness (IMCI) guidelines and evaluated at a hospital in Ghana. Two hundred and thirty-seven guardians applied the tool to assess their child's symptoms. Data recorded by the tool and health records completed by a physician were compared in terms of symptom detection, disease assessment, and treatment recommendation. To compare the two assessments, kappa statistics and predictive values were calculated. The tool detected the symptoms of cough, fever, diarrhoea, and vomiting with good agreement with the physicians' findings (kappa = 0.64, 0.59, 0.57, and 0.42, respectively). The disease assessment barely coincided with the physicians' findings. The tool's treatment recommendation agreed with the physicians' assessments in 93 out of 237 cases (39.2% agreement, kappa = 0.11), but underestimated a child's condition in only seven cases (3.0%). The algorithm-based tool achieved reliable symptom detection, and its treatment recommendations conformed to the physicians' assessments. Testing in a domestic environment is envisaged.
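
    The agreement statistic used above can be sketched as follows; this Cohen's kappa implementation and the tool-vs-physician label pairs are illustrative, not the study's data:

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two raters over paired categorical labels."""
    n = len(pairs)
    labels = sorted({label for pair in pairs for label in pair})
    observed = sum(a == b for a, b in pairs) / n  # raw agreement
    # chance agreement from each rater's marginal label frequencies
    expected = 0.0
    for lab in labels:
        p_a = sum(a == lab for a, _ in pairs) / n
        p_b = sum(b == lab for _, b in pairs) / n
        expected += p_a * p_b
    return (observed - expected) / (1 - expected)

# Hypothetical tool-vs-physician fever assessments for 10 children
pairs = [("fever", "fever")] * 4 + [("no", "no")] * 4 + \
        [("fever", "no"), ("no", "fever")]
kappa = cohens_kappa(pairs)
```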

  6. Primary Repair of Moderate Severity Rhegmatogenous Retinal Detachment: A Critical Decision-Making Algorithm.

    PubMed

    Velez-Montoya, Raul; Jacobo-Oceguera, Paola; Flores-Preciado, Javier; Dalma-Weiszhausz, Jose; Guerrero-Naranjo, Jose; Salcedo-Villanueva, Guillermo; Garcia-Aguirre, Gerardo; Fromow-Guerra, Jans; Morales-Canton, Virgilio

    2016-01-01

    We reviewed all available data regarding the current management of non-complex rhegmatogenous retinal detachment and propose a new decision-making algorithm designed to improve the single-surgery success rate for mid-severity rhegmatogenous retinal detachment. An online review of the PubMed database was performed. We searched for all available manuscripts on the anatomical and functional outcomes after surgical management of retinal detachment, by either scleral buckle or primary pars plana vitrectomy. The search was limited to articles published from January 1995 to December 2015. All articles obtained from the search were carefully screened, and their references were manually reviewed for additional relevant data. Our search specifically focused on preoperative clinical data that were associated with the surgical outcomes. After categorizing the available data according to their level of evidence, with randomized controlled clinical trials as the highest possible level of evidence, followed by retrospective studies, and retrospective case series as the lowest level, we designed a logical decision-making algorithm, enhanced by our experience as retinal surgeons. A total of 7 randomized controlled clinical trials, 19 retrospective studies, and 9 case series were considered. Additional articles were also included to support the observations further. Rhegmatogenous retinal detachment is a potentially blinding disorder. Its surgical management often seems to depend more on a surgeon's preference than on solid scientific data or a thorough clinical history and examination. The algorithm proposed herein strives to offer a more rational approach to improve both anatomical and functional outcomes after the first surgery.

  7. Primary Repair of Moderate Severity Rhegmatogenous Retinal Detachment: A Critical Decision-Making Algorithm

    PubMed Central

    VELEZ-MONTOYA, Raul; JACOBO-OCEGUERA, Paola; FLORES-PRECIADO, Javier; DALMA-WEISZHAUSZ, Jose; GUERRERO-NARANJO, Jose; SALCEDO-VILLANUEVA, Guillermo; GARCIA-AGUIRRE, Gerardo; FROMOW-GUERRA, Jans; MORALES-CANTON, Virgilio

    2016-01-01

    We reviewed all available data regarding the current management of non-complex rhegmatogenous retinal detachment and propose a new decision-making algorithm designed to improve the single-surgery success rate for mid-severity rhegmatogenous retinal detachment. An online review of the PubMed database was performed. We searched for all available manuscripts on the anatomical and functional outcomes after surgical management of retinal detachment, by either scleral buckle or primary pars plana vitrectomy. The search was limited to articles published from January 1995 to December 2015. All articles obtained from the search were carefully screened, and their references were manually reviewed for additional relevant data. Our search specifically focused on preoperative clinical data that were associated with the surgical outcomes. After categorizing the available data according to their level of evidence, with randomized controlled clinical trials as the highest possible level of evidence, followed by retrospective studies, and retrospective case series as the lowest level, we designed a logical decision-making algorithm, enhanced by our experience as retinal surgeons. A total of 7 randomized controlled clinical trials, 19 retrospective studies, and 9 case series were considered. Additional articles were also included to support the observations further. Rhegmatogenous retinal detachment is a potentially blinding disorder. Its surgical management often seems to depend more on a surgeon's preference than on solid scientific data or a thorough clinical history and examination. The algorithm proposed herein strives to offer a more rational approach to improve both anatomical and functional outcomes after the first surgery. PMID:28289689

  8. Bridge health monitoring metrics : updating the bridge deficiency algorithm.

    DOT National Transportation Integrated Search

    2009-10-01

    As part of its bridge management system, the Alabama Department of Transportation (ALDOT) must decide how best to spend its bridge replacement funds. In making these decisions, ALDOT managers currently use a deficiency algorithm to rank bridges that ...

  9. Event-driven management algorithm of an Engineering documents circulation system

    NASA Astrophysics Data System (ADS)

    Kuzenkov, V.; Zebzeev, A.; Gromakov, E.

    2015-04-01

    A development methodology for an engineering document circulation system in a design company is reviewed. Discrete event-driven automata models are proposed for describing project management algorithms, and the use of Petri nets for the dynamic design of projects is discussed.
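
    A minimal sketch of the kind of Petri net execution the abstract alludes to, applied to a toy document-circulation workflow; the places and transitions here are hypothetical:

```python
class PetriNet:
    """Places hold token counts; a transition fires by consuming its input
    tokens and producing its output tokens."""

    def __init__(self, marking, transitions):
        # transitions: name -> (input place weights, output place weights)
        self.marking = dict(marking)
        self.transitions = transitions

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# A document moving draft -> review -> approved
net = PetriNet(
    marking={"draft": 1, "review": 0, "approved": 0},
    transitions={
        "submit": ({"draft": 1}, {"review": 1}),
        "approve": ({"review": 1}, {"approved": 1}),
    },
)
net.fire("submit")
net.fire("approve")
```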

  10. Traumatic brain injury with a machete penetrating the dura and brain: Case report from southeast Mexico.

    PubMed

    Del Castillo-Calcáneo, Juan D; Bravo-Angel, Ulises; Mendez-Olan, Raúl; Rodriguez-Valencia, Francisco; Valdés-García, Javier; García-González, Ulises; Broc-Haro, Guy G

    2016-01-01

    Traumatic brain injury (TBI) is a major cause of death and disability in our society. We present the first case report from Mexico of non-missile penetrating (NMP) cranial trauma caused by a machete; our objective in presenting this case is to demonstrate the usefulness of recently proposed algorithms in the treatment of NMP cranial trauma. PRESENTATION OF CASE: We present the case of a 47-year-old woman who received a machete blow to the right side of her head during an assault. She arrived fully conscious at the emergency department (ED). Computed tomography was performed and, based on its findings and in accordance with recently proposed algorithms for managing NMP cranial trauma, a craniotomy was performed. At follow-up the patient presented with minor neurological disability in the form of left hemiparesis. Non-missile penetrating lesions are defined as having an impact velocity of less than 100 m/s, causing injury by laceration and maceration. An algorithm for treating NMP cranial trauma was recently published in the journal World Neurosurgery by De Holanda et al.; in this case we followed that algorithm in order to provide the best available care for our patient, with good results. The use of current algorithms for managing NMP cranial trauma proved very useful when applied to this particular case. GCS on admission is an important prognostic factor in NMP cranial trauma. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  11. Intelligent deflection routing in buffer-less networks.

    PubMed

    Haeri, Soroush; Trajković, Ljiljana

    2015-02-01

    Deflection routing is employed to ameliorate packet loss caused by contention in buffer-less architectures such as optical burst-switched networks. The main goal of deflection routing is to successfully deflect a packet based only on the limited knowledge that network nodes possess about their environment. In this paper, we present a framework that introduces intelligence to deflection routing (iDef). iDef decouples the design of the signaling infrastructure from the underlying learning algorithm. It consists of a signaling module and a decision-making module. The signaling module implements a feedback management protocol, while the decision-making module implements a reinforcement learning algorithm. We also propose several learning-based deflection routing protocols, implement them in iDef using the ns-3 network simulator, and compare their performance.
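
    A minimal sketch of a reinforcement-learning deflection decision of the kind such a decision-making module could implement; this is a simple bandit-style update, not the paper's actual protocols, and the port names and success rates are hypothetical:

```python
import random

class QDeflectionAgent:
    """Per-node agent that learns which alternate port to deflect to."""

    def __init__(self, ports, alpha=0.3, epsilon=0.1):
        self.q = {p: 0.0 for p in ports}  # estimated value of each port
        self.alpha = alpha                # learning rate
        self.epsilon = epsilon            # exploration probability

    def choose_port(self, rng=random):
        if rng.random() < self.epsilon:        # explore a random port
            return rng.choice(list(self.q))
        return max(self.q, key=self.q.get)     # exploit the best-known port

    def feedback(self, port, reward):
        """Reward +1 if the deflected packet arrived, -1 if it was dropped."""
        self.q[port] += self.alpha * (reward - self.q[port])

agent = QDeflectionAgent(ports=["east", "west", "north"])
random.seed(7)
for _ in range(200):
    port = agent.choose_port()
    # hypothetical environment: "east" succeeds 90% of the time, others 30%
    success = random.random() < (0.9 if port == "east" else 0.3)
    agent.feedback(port, 1.0 if success else -1.0)
```

    Feedback here stands in for the signaling module: each delivery/drop report nudges the value estimate of the chosen port.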

  12. (LBA-and-WRM)-based DBA scheme for multi-wavelength upstream transmission supporting 10 Gbps and 1 Gbps in MAN

    NASA Astrophysics Data System (ADS)

    Zhang, Yuchao; Gan, Chaoqin; Gou, Kaiyu; Xu, Anni; Ma, Jiamin

    2018-01-01

    A DBA scheme based on a load balance algorithm (LBA) and a wavelength recycle mechanism (WRM) for multi-wavelength upstream transmission is proposed in this paper. According to their 1 Gbps or 10 Gbps line rates, ONUs are grouped into different VPONs. To facilitate wavelength management, a resource pool is proposed to record wavelength states. To enable quantitative analysis, a mathematical model describing the metro-access network (MAN) environment is presented. For the 10G-EPON upstream, the load balance algorithm is designed to ensure fair load distribution across 10G-OLTs. For the 1G-EPON upstream, the wavelength recycle mechanism is designed to share the remaining wavelengths. Finally, the effectiveness of the proposed scheme is demonstrated by simulation and analysis.
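
    A minimal sketch of a least-loaded assignment in the spirit of balancing load across wavelengths; this greedy rule and the ONU demands are illustrative assumptions, not the paper's LBA:

```python
def assign_onus(onu_loads, wavelengths):
    """Greedily assign each ONU (by descending demand) to the least-loaded wavelength."""
    load = {w: 0.0 for w in wavelengths}
    assignment = {}
    for onu, demand in sorted(onu_loads.items(), key=lambda kv: -kv[1]):
        target = min(load, key=load.get)  # current least-loaded wavelength
        assignment[onu] = target
        load[target] += demand
    return assignment, load

# Hypothetical ONU demands (arbitrary units) spread over two wavelengths
onu_loads = {"onu1": 6.0, "onu2": 3.0, "onu3": 2.0, "onu4": 1.0}
assignment, load = assign_onus(onu_loads, ["w1", "w2"])
```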

  13. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

    In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented for state-of-the-art multicore CPU based clusters. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demands on compute time, memory, storage, and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are traveltime calculation and migration summation, which exhibit an inherent trade-off between compute time and other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and the mechanism for feeding them to the migration process. The presented work is an extension of our previous work, wherein a 3D Kirchhoff depth migration application for multicore CPU based parallel systems had been developed. Recently, we have improved the parallel performance of this application by redesigning the parallelization approach. The new algorithm can efficiently migrate both prestack and poststack 3D data. It exhibits flexibility for migrating a large number of traces within the available node memory and with minimal requirements for storage, I/O, and inter-node communication. The resultant application is tested using 3D Overthrust data on PARAM Yuva II, a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. Parallel performance of the algorithm is studied through different numerical experiments, and the scalability results show striking improvement over the previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data, and 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.
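
    The reported efficiency figures follow directly from the definition of parallel efficiency as speedup divided by the number of processing units; a short check against the abstract's own numbers:

```python
def speedup(t_serial, t_parallel):
    """Ratio of serial to parallel runtime."""
    return t_serial / t_parallel

def parallel_efficiency(speedup_x, n_units):
    """Speedup divided by the number of nodes (or cores) used."""
    return speedup_x / n_units

# The abstract's prestack figures: 49.05x speedup on 64 nodes -> ~76.64% efficiency
eff = parallel_efficiency(49.05, 64)
```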

  14. Case-Mix for Performance Management: A Risk Algorithm Based on ICD-10-CM.

    PubMed

    Gao, Jian; Moran, Eileen; Almenoff, Peter L

    2018-06-01

    Accurate risk adjustment is the key to a reliable comparison of cost and quality performance among providers and hospitals. However, the existing case-mix algorithms based on age, sex, and diagnoses can explain only up to 50% of the cost variation. More accurate risk adjustment is desired for provider performance assessment and improvement. To develop a case-mix algorithm that hospitals and payers can use to measure and compare cost and quality performance of their providers. All 6,048,895 patients with valid diagnoses and cost recorded in the US Veterans health care system in fiscal year 2016 were included in this study. The dependent variable was total cost at the patient level, and the explanatory variables were age, sex, and comorbidities represented by 762 clinically homogeneous groups, which were created by expanding the 283 categories from the Clinical Classifications Software based on ICD-10-CM codes. The split-sample method was used to assess model overfitting and coefficient stability. The predictive power of the algorithms was ascertained by comparing the R², mean absolute percentage error, root mean square error, predictive ratios, and c-statistics. The expansion of the Clinical Classifications Software categories resulted in higher predictive power. The R² reached 0.72 and 0.52 for the transformed and raw-scale cost, respectively. The case-mix algorithm we developed based on age, sex, and diagnoses outperformed the existing case-mix models reported in the literature. The method developed in this study can be used by other health systems to produce tailored risk models for their specific purposes.
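
    Two of the error metrics used to compare the algorithms can be sketched as follows; the per-patient cost values are illustrative, not the study's data:

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean square error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical per-patient costs vs. a model's predictions
actual = [1000.0, 2000.0, 4000.0]
predicted = [1100.0, 1800.0, 4400.0]
```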

  15. Iron deficiency anemia: diagnosis and management.

    PubMed

    Clark, Susan F

    2009-03-01

    Iron deficiency anemia (IDA) remains universally problematic worldwide. The primary focus of this review is to critique articles published over the past 18 months that describe strategies for the diagnosis and management of this prevalent condition. The medical community continues to lack consensus on the optimal approach to the diagnosis and management of IDA. Current diagnostic recommendations revolve around the validity and practicality of current biomarkers, such as soluble transferrin-receptor concentrations, and cause-based diagnostics that potentially include endoscopy. Management of IDA is based on supplementation combined with effective etiological treatment. Advances in oral and parenteral low-molecular-weight iron preparations have expanded and improved treatment modalities for IDA. Since the introduction of low- versus high-molecular-weight intravenous iron administration, there have been fewer serious adverse events associated with parenteral iron preparations. Best practice guidelines for diagnosing and managing IDA should include the design of an algorithm that is inclusive of multiple biomarkers and cause-based diagnostics, which will provide direction in managing IDA and distinguish IDA from the anemia of chronic disease.

  16. An orbital emulator for pursuit-evasion game theoretic sensor management

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Wang, Tao; Wang, Gang; Jia, Bin; Wang, Zhonghai; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2017-05-01

    This paper develops and evaluates an orbital emulator (OE) for space situational awareness (SSA). The OE reproduces 3D satellite movements using capabilities built from omni-wheeled robots and robotic-arm motion methods. The 3D motion of a satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The in-plane movements are emulated by omni-wheeled robots, while the up-down motions are performed by a stepper-motor-controlled ball along a rod (robotic arm) attached to each robot. For multiple satellites, a fast map-merging algorithm is integrated into the robot operating system (ROS) and simultaneous localization and mapping (SLAM) routines to locate the multiple robots in the scene. The OE is used to demonstrate a pursuit-evasion (PE) game theoretic sensor management algorithm, which models the conflict between a space-based-visible (SBV) satellite (as pursuer) and a geosynchronous (GEO) satellite (as evader). The cost function of the PE game is based on the informational entropy of the SBV-tracking-GEO scenario. The GEO satellite can maneuver using a continuous low thruster. The hardware-in-the-loop space emulator visually illustrates the solution of the SSA problem based on the PE game.

  17. Co-evolutionary data mining for fuzzy rules: automatic fitness function creation, phase space, and experiments

    NASA Astrophysics Data System (ADS)

    Smith, James F., III; Blank, Joseph A.

    2003-03-01

    An approach is being explored that involves embedding a fuzzy logic based resource manager in an electronic game environment. Game agents can function under their own autonomous logic or under human control. This approach automates the data mining problem: the game automatically creates a cleansed database reflecting the domain expert's knowledge, calls a data mining function, a genetic algorithm, to mine the database as required, and allows easy evaluation of the information extracted. The co-evolutionary fitness functions, chromosomes, and stopping criteria for ending the game are discussed. Genetic algorithm and genetic program based data mining procedures are discussed that automatically discover new fuzzy rules and strategies. The strategy tree concept and its relationship to co-evolutionary data mining are examined, as well as the associated phase space representation of fuzzy concepts. The overlap of fuzzy concepts in phase space reduces the effective strategies available to adversaries. Co-evolutionary data mining alters the geometric properties of the overlap region, known as the admissible region of phase space, significantly enhancing the performance of the resource manager. Procedures for validating the data-mined information are discussed, and significant experimental results are provided.
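
    A minimal sketch of the genetic-algorithm machinery such data mining relies on, applied to a toy one-max fitness standing in for rule quality; the parameters and fitness function are illustrative, not the paper's:

```python
import random

def evolve(fitness, n_bits=12, pop_size=30, generations=60, rng=None):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.05:                 # occasional bit-flip mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy fitness standing in for rule quality: count of 1-bits ("one-max")
best = evolve(fitness=sum)
```

    In the paper's setting the chromosome would encode fuzzy rule parameters and the fitness would be computed from game outcomes rather than a bit count.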

  18. Diabetes mellitus and stroke: A clinical update

    PubMed Central

    Tun, Nyo Nyo; Arunagirinathan, Ganesan; Munshi, Sunil K; Pappachan, Joseph M

    2017-01-01

    Cardiovascular disease, including stroke, is a major complication that tremendously increases morbidity and mortality in patients with diabetes mellitus (DM). DM confers about a four-fold higher risk of stroke. Cardiometabolic risk factors including obesity, hypertension, and dyslipidaemia often co-exist in patients with DM and add to stroke risk. Because of the strong association between DM and other stroke risk factors, physicians and diabetologists managing patients should have a thorough understanding of these risk factors and their management. This review is an evidence-based approach to the epidemiological aspects, pathophysiology, diagnostic work-up, and management algorithms for patients with diabetes and stroke. PMID:28694925

  19. Identification of Physician-Diagnosed Alzheimer's Disease and Related Dementias in Population-Based Administrative Data: A Validation Study Using Family Physicians' Electronic Medical Records.

    PubMed

    Jaakkimainen, R Liisa; Bronskill, Susan E; Tierney, Mary C; Herrmann, Nathan; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra; Widdifield, Jessica; Tu, Karen

    2016-08-10

    Population-based surveillance of Alzheimer's and related dementias (AD-RD) incidence and prevalence is important for chronic disease management and health system capacity planning. Algorithms based on health administrative data have been successfully developed for many chronic conditions. The increasing use of electronic medical records (EMRs) by family physicians (FPs) provides a novel reference standard by which to evaluate these algorithms as FPs are the first point of contact and providers of ongoing medical care for persons with AD-RD. We used FP EMR data as the reference standard to evaluate the accuracy of population-based health administrative data in identifying older adults with AD-RD over time. This retrospective chart abstraction study used a random sample of EMRs for 3,404 adults over 65 years of age from 83 community-based FPs in Ontario, Canada. AD-RD patients identified in the EMR were used as the reference standard against which algorithms identifying cases of AD-RD in administrative databases were compared. The highest performing algorithm was "one hospitalization code OR (three physician claims codes at least 30 days apart in a two year period) OR a prescription filled for an AD-RD specific medication" with sensitivity 79.3% (confidence interval (CI) 72.9-85.8%), specificity 99.1% (CI 98.8-99.4%), positive predictive value 80.4% (CI 74.0-86.8%), and negative predictive value 99.0% (CI 98.7-99.4%). This resulted in an age- and sex-adjusted incidence of 18.1 per 1,000 persons and adjusted prevalence of 72.0 per 1,000 persons in 2010/11. Algorithms developed from health administrative data are sensitive and specific for identifying older adults with AD-RD.
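
    The validation statistics reported above all derive from a 2x2 table against the reference standard; a sketch with hypothetical counts, chosen only to land near the reported values rather than taken from the study's actual table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2-table metrics for a case-ascertainment algorithm."""
    return {
        "sensitivity": tp / (tp + fn),  # flagged among true cases
        "specificity": tn / (tn + fp),  # unflagged among true non-cases
        "ppv": tp / (tp + fp),          # true cases among flagged
        "npv": tn / (tn + fn),          # true non-cases among unflagged
    }

# Hypothetical counts: algorithm vs. an EMR reference standard
m = diagnostic_metrics(tp=79, fp=20, fn=21, tn=2284)
```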

  20. Optimum Guidance Law and Information Management for a Large Number of Formation Flying Spacecrafts

    NASA Astrophysics Data System (ADS)

    Tsuda, Yuichi; Nakasuka, Shinichi

    In recent years, formation flying has been recognized as one of the most important techniques for deep space and orbital missions that involve multiple spacecraft operations. Formation flying missions improve simultaneous observability over a wide area, as well as redundancy and reconfigurability of the system, with relatively small and low-cost spacecraft compared with conventional single-spacecraft missions. From the viewpoint of guidance and control, realizing a formation flying mission usually requires tight maintenance and control of the relative distances, speeds, and orientations between the member satellites. This paper studies a practical architecture for formation flight missions, focusing mainly on guidance and control, and describes a new guidance algorithm for changing and keeping the relative positions and speeds of the satellites in formation. The resulting algorithm is suitable for onboard processing and gives the optimum impulsive trajectory for satellites flying closely around a certain reference orbit, which can be elliptic, parabolic, or hyperbolic. Based on this guidance algorithm, this study introduces an information management methodology between the member spacecraft which is suitable for a large formation flight architecture. Routing and multicast communication based on wireless local area network technology are introduced. Some mathematical analyses and computer simulations will be shown in the presentation to demonstrate the feasibility of the proposed formation flight architecture, especially when a very large number of satellites join the formation.
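
    The abstract's guidance algorithm handles general reference orbits; as a simpler, circular-reference illustration of the relative-motion propagation such guidance builds on, the in-plane Clohessy-Wiltshire closed-form solution can be sketched as follows (the mean motion and initial state are hypothetical):

```python
import math

def cw_propagate(state, n, t):
    """Propagate an in-plane relative state with the circular-reference
    Clohessy-Wiltshire solution.

    state = (x, y, vx, vy): radial/along-track position and velocity;
    n = mean motion of the reference orbit [rad/s]; t = elapsed time [s].
    """
    x, y, vx, vy = state
    s, c = math.sin(n * t), math.cos(n * t)
    xt = (4 - 3 * c) * x + (s / n) * vx + (2 / n) * (1 - c) * vy
    yt = 6 * (s - n * t) * x + y - (2 / n) * (1 - c) * vx + ((4 * s - 3 * n * t) / n) * vy
    vxt = 3 * n * s * x + c * vx + 2 * s * vy
    vyt = 6 * n * (c - 1) * x - 2 * s * vx + (4 * c - 3) * vy
    return (xt, yt, vxt, vyt)

n = 0.0011  # roughly a LEO mean motion [rad/s]
# Drift-free initial condition: vy = -2*n*x yields a closed relative ellipse
x0 = 100.0
state = (x0, 0.0, 0.0, -2 * n * x0)
period = 2 * math.pi / n
final = cw_propagate(state, n, period)  # returns (nearly) to the initial state
```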

  1. Diagnosing Sexual Dysfunction in Men and Women: Sexual History Taking and the Role of Symptom Scales and Questionnaires.

    PubMed

    Hatzichristou, Dimitris; Kirana, Paraskevi-Sofia; Banner, Linda; Althof, Stanley E; Lonnee-Hoffmann, Risa A M; Dennerstein, Lorraine; Rosen, Raymond C

    2016-08-01

    A detailed sexual history is the cornerstone of all sexual problem assessments and sexual dysfunction diagnoses. Diagnostic evaluation is based on an in-depth sexual history, including sexual and gender identity and orientation, sexual activity and function, current level of sexual function, overall health and comorbidities, partner relationship and interpersonal factors, and the role of cultural and personal expectations and attitudes. To propose key steps in the diagnostic evaluation of sexual dysfunctions, with special focus on the use of symptom scales and questionnaires. Critical assessment of the current literature by the International Consultation on Sexual Medicine committee. A revised algorithm for the management of sexual dysfunctions, with levels of evidence and recommendations for scales and questionnaires. The International Consultation on Sexual Medicine proposes an updated algorithm for the diagnostic evaluation of sexual dysfunction in men and women, with specific recommendations for sexual history taking and diagnostic evaluation. Standardized scales, checklists, and validated questionnaires are additional adjuncts that should be used routinely in sexual problem evaluation. Scales developed for specific patient groups are included. Results of this evaluation are presented with recommendations for clinical and research uses. Defined principles, an algorithm, and a range of scales may provide coherent and evidence-based management of sexual dysfunctions. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  2. Havens: Explicit Reliable Memory Regions for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    2016-01-01

    Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in the system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects. We provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
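
    The software-based parity idea can be sketched with a toy region that keeps one XOR parity word and can rebuild a single erased word; this is a simplified illustration, not the paper's implementation:

```python
class ParityRegion:
    """A toy 'haven': data words plus one XOR parity word, able to
    reconstruct any single word known to be lost or corrupted."""

    def __init__(self, words):
        self.words = list(words)
        self.parity = 0
        for w in self.words:
            self.parity ^= w

    def write(self, i, value):
        # incremental parity update: XOR out the old word, XOR in the new one
        self.parity ^= self.words[i] ^ value
        self.words[i] = value

    def recover(self, lost_index):
        """Rebuild the word at lost_index from the surviving words and parity."""
        value = self.parity
        for j, w in enumerate(self.words):
            if j != lost_index:
                value ^= w
        return value

region = ParityRegion([0xDEAD, 0xBEEF, 0x1234])
region.write(1, 0xCAFE)
rebuilt = region.recover(1)  # reconstructs the value at index 1
```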

  3. Using background knowledge for picture organization and retrieval

    NASA Astrophysics Data System (ADS)

    Quintana, Yuri

    1997-01-01

    A picture knowledge base management system is described that is used to represent, organize, and retrieve pictures from a frame knowledge base. Experiments with human test subjects were conducted to obtain descriptions of pictures from news magazines. These descriptions were used to represent the semantic content of pictures in frame representations. A conceptual clustering algorithm is described which organizes pictures not only by observable features, but also by implicit properties derived from the frame representations. The algorithm uses inheritance reasoning to take background knowledge into account in the clustering. The algorithm creates clusters of pictures using a group similarity function based on the gestalt theory of picture perception. For each cluster created, a frame is generated which describes the semantic content of the pictures in the cluster. Clustering and retrieval experiments were conducted with and without background knowledge. The paper shows how the use of background knowledge and semantic similarity heuristics improves the speed, precision, and recall of queries processed. The paper concludes with a discussion of how natural language processing can be used to assist in the development of knowledge bases and the processing of user queries.

  4. Personalized Medicine and Opioid Analgesic Prescribing for Chronic Pain: Opportunities and Challenges

    PubMed Central

    Bruehl, Stephen; Apkarian, A. Vania; Ballantyne, Jane C.; Berger, Ann; Borsook, David; Chen, Wen G.; Farrar, John T.; Haythornthwaite, Jennifer A.; Horn, Susan D.; Iadarola, Michael J.; Inturrisi, Charles E.; Lao, Lixing; Mackey, Sean; Mao, Jianren; Sawczuk, Andrea; Uhl, George R.; Witter, James; Woolf, Clifford J.; Zubieta, Jon-Kar; Lin, Yu

    2013-01-01

    Use of opioid analgesics for pain management has increased dramatically over the past decade, with corresponding increases in negative sequelae including overdose and death. There is currently no well-validated objective means of accurately identifying, prior to initiating opioid therapy, patients likely to experience good analgesia with low side effects and abuse risk. This paper discusses the concept of data-based personalized prescribing of opioid analgesics as a means to achieve this goal. Strengths, weaknesses, and potential synergism of traditional randomized placebo-controlled trial (RCT) and practice-based evidence (PBE) methodologies, as means to acquire the clinical data necessary to develop validated personalized analgesic prescribing algorithms, are reviewed. Several predictive factors that might be incorporated into such algorithms are briefly discussed, including genetic factors, differences in brain structure and function, differences in neurotransmitter pathways, and patient phenotypic variables such as negative affect, sex, and pain sensitivity. Currently available research is insufficient to inform development of quantitative analgesic prescribing algorithms. However, responder subtype analyses made practical by the large numbers of chronic pain patients in proposed collaborative PBE pain registries, in conjunction with follow-up validation RCTs, may eventually permit development of clinically useful analgesic prescribing algorithms. Perspective: Current research is insufficient to base opioid analgesic prescribing on patient characteristics. Collaborative PBE studies in large, diverse pain patient samples, in conjunction with follow-up RCTs, may permit development of quantitative analgesic prescribing algorithms, which could optimize opioid analgesic effectiveness and mitigate risks of opioid-related abuse and mortality. PMID:23374939

  5. A real time sorting algorithm to time sort any deterministic time disordered data stream

    NASA Astrophysics Data System (ADS)

    Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.

    2017-12-01

    In new-generation high-intensity high-energy physics experiments, millions of free-streaming high-rate data sources are to be read out. Free-streaming data with associated time-stamps can only be controlled by thresholds, as there is no trigger information available for the readout. Therefore, these readouts are prone to collecting large amounts of noise and unwanted data, and such experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to process the data online to extract useful information from the full data set. Without trigger information, pre-processing of the free-streaming data can only be done through time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, so the unsorted merged data require significant computational effort to sort in real time before analysis. The present work reports a new high-speed, scalable data stream sorting algorithm with its architectural design, verified through Field Programmable Gate Array (FPGA) based hardware simulation. Realistic time-based simulated data, of the kind likely to be collected in a high-energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero-suppression features to make it efficient for high-rate data streams. This algorithm is best suited for online data streams with deterministic time disorder on FPGA-like hardware.
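
    The core idea of time-sorting a stream with bounded (deterministic) disorder can be sketched with a small heap buffer; this is a simplified software analogue of the FPGA design, and the disorder bound and data are illustrative:

```python
import heapq

def time_sort(stream, max_disorder):
    """Emit items in timestamp order, assuming each item arrives at most
    `max_disorder` positions away from its sorted position."""
    heap = []
    for item in stream:
        heapq.heappush(heap, item)
        # once the buffer exceeds the disorder bound, the smallest
        # timestamp can no longer be displaced and is safe to emit
        if len(heap) > max_disorder:
            yield heapq.heappop(heap)
    while heap:  # drain the remaining buffered items at end of stream
        yield heapq.heappop(heap)

arrivals = [3, 1, 2, 5, 4, 7, 6, 9, 8]  # timestamps, each off by at most one slot
ordered = list(time_sort(arrivals, max_disorder=2))
```

    The buffer depth trades latency for tolerance to disorder, mirroring the memory-management constraint the hardware design manages explicitly.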

  6. Electrons and photons at High Level Trigger in CMS for Run II

    NASA Astrophysics Data System (ADS)

    Anuar, Afiq A.

    2015-12-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increase in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. New approaches have been studied to keep the HLT output rate manageable while maintaining thresholds low enough to cover physics analyses. The strategy mainly relies on porting online the ingredients that have been successfully applied in the offline reconstruction, thus allowing the HLT selections to move closer to offline cuts. Improvements in the HLT electron and photon definitions will be presented, focusing in particular on the updated clustering algorithm and energy calibration procedure, the new Particle-Flow-based isolation approach and pileup mitigation techniques, and the electron-dedicated track fitting algorithm based on the Gaussian Sum Filter.

  7. Sniffer Channel Selection for Monitoring Wireless LANs

    NASA Astrophysics Data System (ADS)

    Song, Yuan; Chen, Xian; Kim, Yoo-Ah; Wang, Bing; Chen, Guanling

    Wireless sniffers are often used to monitor APs in wireless LANs (WLANs) for network management, fault detection, traffic characterization, and optimizing deployment. It is cost effective to deploy single-radio sniffers that can monitor multiple nearby APs. However, since nearby APs often operate on orthogonal channels, a sniffer needs to switch among multiple channels to monitor its nearby APs. In this paper, we formulate and solve two optimization problems on sniffer channel selection. Both problems require that each AP be monitored by at least one sniffer. In addition, one optimization problem requires minimizing the maximum number of channels that a sniffer listens to, and the other requires minimizing the total number of channels that the sniffers listen to. We propose a novel LP-relaxation based algorithm, and two simple greedy heuristics for the above two optimization problems. Through simulation, we demonstrate that all the algorithms are effective in achieving their optimization goals, and the LP-based algorithm outperforms the greedy heuristics.
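
    The abstract does not spell out the greedy heuristics, so the following is only a plausible greedy rule for the min-sum variant: repeatedly assign the (sniffer, channel) pair that monitors the most still-uncovered APs. All names and the selection rule are hypothetical:

```python
def greedy_channel_selection(coverage, ap_channel):
    """Greedy sketch for min-sum sniffer channel selection.
    coverage[s]  : set of APs sniffer s can hear
    ap_channel[a]: channel AP a operates on
    Returns, per sniffer, the set of channels it should listen to,
    such that every AP is monitored by at least one sniffer."""
    uncovered = set(ap_channel)
    assignment = {s: set() for s in coverage}
    while uncovered:
        # Pick the (sniffer, channel) pair covering the most uncovered APs.
        best = max(
            ((s, ch) for s in coverage
             for ch in {ap_channel[a] for a in coverage[s]}),
            key=lambda sc: len([a for a in coverage[sc[0]] & uncovered
                                if ap_channel[a] == sc[1]]))
        s, ch = best
        covered = {a for a in coverage[s] & uncovered if ap_channel[a] == ch}
        if not covered:
            raise ValueError("some AP is not hearable by any sniffer")
        assignment[s].add(ch)
        uncovered -= covered
    return assignment

# Two sniffers, three APs on channels 1 and 6.
cov = {'s1': {'ap1', 'ap2'}, 's2': {'ap2', 'ap3'}}
chan = {'ap1': 1, 'ap2': 6, 'ap3': 6}
sel = greedy_channel_selection(cov, chan)
print(sel)
```

    The LP-relaxation approach the paper favours would instead relax the 0/1 listen-to-channel variables and round the fractional solution, which is why it can beat this kind of myopic choice.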

  8. A Fuel-Efficient Conflict Resolution Maneuver for Separation Assurance

    NASA Technical Reports Server (NTRS)

    Bowe, Aisha Ruth; Santiago, Confesor

    2012-01-01

    Automated separation assurance algorithms are envisioned to play an integral role in accommodating the forecasted increase in demand of the National Airspace System. Developing a robust, reliable, air traffic management system involves safely increasing efficiency and throughput while considering the potential impact on users. This experiment seeks to evaluate the benefit of augmenting a conflict detection and resolution algorithm to consider a fuel efficient, Zero-Delay Direct-To maneuver, when resolving a given conflict based on either minimum fuel burn or minimum delay. A total of twelve conditions were tested in a fast-time simulation conducted in three airspace regions with mixed aircraft types and light weather. Results show that inclusion of this maneuver has no appreciable effect on the ability of the algorithm to safely detect and resolve conflicts. The results further suggest that enabling the Zero-Delay Direct-To maneuver significantly increases the cumulative fuel burn savings when choosing resolution based on minimum fuel burn while marginally increasing the average delay per resolution.

  9. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID

    PubMed Central

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-01-01

    Aiming at the problem of network congestion caused by the large number of data transmissions in the wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to adjust the weights online, tuning the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used to optimize the initial values of the proportional, integral and differential parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822
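
    As a rough illustration of the single-neuron PID idea (not the paper's PNPID implementation: the gains, learning rates, initial weights and the supervised-Hebb rule variant below are all assumptions, and the PSO stage that seeds them is omitted), the controller adapts its P, I, D weights online from the error and control output:

```python
class SingleNeuronPID:
    """Single-neuron adaptive PID sketch. The neuron's three inputs
    are the incremental P, I, D error terms; a supervised Hebb rule
    adapts the weights online, which is the 'self-learning' weight
    adjustment the abstract describes."""

    def __init__(self, K=0.5, rates=(0.2, 0.15, 0.1)):
        self.K = K                  # overall neuron gain
        self.rates = rates          # per-weight learning rates
        self.w = [0.3, 0.4, 0.3]    # initial P, I, D weights
        self.e1 = self.e2 = 0.0     # e(k-1), e(k-2)
        self.u = 0.0                # control output (e.g. send rate)

    def step(self, e):
        x = (e - self.e1, e, e - 2 * self.e1 + self.e2)  # P, I, D inputs
        # Supervised Hebb learning: w_i += eta_i * e * u * x_i
        self.w = [w + r * e * self.u * xi
                  for w, r, xi in zip(self.w, self.rates, x)]
        s = sum(abs(w) for w in self.w) or 1.0
        wn = [w / s for w in self.w]                     # normalize weights
        self.u += self.K * sum(w * xi for w, xi in zip(wn, x))
        self.e2, self.e1 = self.e1, e
        return self.u

pid = SingleNeuronPID()
# Queue-length error shrinking toward zero (setpoint being reached):
outputs = [pid.step(e) for e in (8.0, 4.0, 2.0, 0.0)]
print([round(u, 3) for u in outputs])
```

    The incremental form (the controller accumulates Δu rather than recomputing u) is what lets the output settle once the queue-length error reaches zero.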

  10. Managing Algorithmic Skeleton Nesting Requirements in Realistic Image Processing Applications: The Case of the SKiPPER-II Parallel Programming Environment's Operating Model

    NASA Astrophysics Data System (ADS)

    Coudarcher, Rémi; Duculty, Florent; Serot, Jocelyn; Jurie, Frédéric; Derutin, Jean-Pierre; Dhome, Michel

    2005-12-01

    SKiPPER is a SKeleton-based Parallel Programming EnviRonment developed since 1996 at the LASMEA laboratory, Blaise Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, in which we have pointed out the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton nesting facilities to the parallelisation of a realistic application, especially in the area of image processing. The image processing application we have chosen is a 3D face-tracking algorithm from appearance.

  11. Identification of Patients with Family History of Pancreatic Cancer - Investigation of an NLP System Portability

    PubMed Central

    Mehrabi, Saeed; Krishnan, Anand; Roch, Alexandra M; Schmidt, Heidi; Li, DingCheng; Kesterson, Joe; Beesley, Chris; Dexter, Paul; Schmidt, Max; Palakal, Mathew; Liu, Hongfang

    2018-01-01

    In this study we have developed a rule-based natural language processing (NLP) system to identify patients with a family history of pancreatic cancer. The algorithm was developed in an Unstructured Information Management Architecture (UIMA) framework and consisted of section segmentation, relation discovery, and negation detection. The system was evaluated on data from two institutions. Family history identification precision was consistent across the institutions, shifting from 88.9% on the Indiana University (IU) dataset to 87.8% on the Mayo Clinic dataset. Customizing the algorithm on the Mayo Clinic data increased its precision to 88.1%. Family member relation discovery achieved precision, recall, and F-measure of 75.3%, 91.6% and 82.6%, respectively. Negation detection achieved a precision of 99.1%. The results show that rule-based NLP approaches for specific information extraction tasks are portable across institutions; however, customization of the algorithm on the new dataset improves its performance. PMID:26262122
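
    A toy sketch of the three pipeline stages the abstract names (section segmentation, relation discovery, negation detection); the regular expressions are illustrative stand-ins for the UIMA system's actual rules:

```python
import re

RELATIVES = r"(mother|father|sister|brother|aunt|uncle|grandmother|grandfather)"
NEGATION = re.compile(r"\b(no|denies|negative for|without)\b", re.I)
RELATION = re.compile(RELATIVES + r"[^.]*?\b(pancreatic cancer)\b", re.I)

def extract_family_history(note):
    """Return (relative, finding, negated) triples from the FAMILY
    HISTORY section of a clinical note."""
    # Section segmentation: keep only the family-history section.
    m = re.search(r"FAMILY HISTORY:(.*?)(?:\n[A-Z ]+:|$)", note, re.S | re.I)
    if not m:
        return []
    results = []
    for sentence in re.split(r"(?<=[.])\s+", m.group(1)):
        # Relation discovery: relative mentioned with the finding.
        for rel in RELATION.finditer(sentence):
            # Negation detection: scoped to the same sentence.
            negated = bool(NEGATION.search(sentence))
            results.append((rel.group(1).lower(), rel.group(2).lower(), negated))
    return results

note = """FAMILY HISTORY: Mother died of pancreatic cancer at 62.
Father denies any history of pancreatic cancer.
SOCIAL HISTORY: Former smoker."""
fh = extract_family_history(note)
print(fh)
```

    The portability finding makes sense in this framing: section headers and negation cues transfer across institutions largely unchanged, while the precision gains from customization come from adding site-specific patterns.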

  12. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID.

    PubMed

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-04-19

    Aiming at the problem of network congestion caused by the large number of data transmissions in the wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to adjust the weights online, tuning the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used to optimize the initial values of the proportional, integral and differential parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS.

  13. [Facial palsy: diagnosis and management by primary care physicians].

    PubMed

    Alvarez, V; Dussoix, P; Gaspoz, J-M

    2009-01-28

    The incidence of facial palsy is about 50/100,000/year, i.e. 210 cases/year in Geneva. It can puzzle clinicians because it encompasses aetiologies with very diverse prognoses. Most patients suffer from Bell palsy, which evolves favourably. Some, however, suffer from diseases such as meningitis, HIV infection, Lyme disease, or stroke (CVA) that require fast identification because of their severity and the need for specific treatments. This article proposes an algorithm for the pragmatic, evidence-based management of facial palsy.

  14. Dynamic game balancing implementation using adaptive algorithm in mobile-based Safari Indonesia game

    NASA Astrophysics Data System (ADS)

    Yuniarti, Anny; Nata Wardanie, Novita; Kuswardayan, Imam

    2018-03-01

    In developing a game, one method that should be applied to maintain players' interest is dynamic game balancing. Dynamic game balancing is the process of matching the game's behaviour, attributes, and environment to a player's style of play. This study applies dynamic game balancing using an adaptive algorithm in a scrolling-shooter game called Safari Indonesia, developed using Unity. In this game type, a fighter aircraft character tries to defend itself from insistent enemy attacks. This classic genre was chosen for implementing the adaptive algorithm because its attributes are complex enough to be developed using dynamic game balancing. Tests conducted by distributing questionnaires to a number of players indicate that this method managed to reduce frustration and increase the pleasure factor in playing.
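
    The abstract does not publish the adaptive rule, so the following is a purely hypothetical balancing function in the same spirit: a performance score drives the difficulty up when the player dominates and down when the player struggles, keeping the challenge near the player's skill level:

```python
def adjust_difficulty(difficulty, deaths, hits_taken, waves_cleared):
    """Hypothetical dynamic-game-balancing step (the actual Safari
    Indonesia parameters are not given in the abstract). Difficulty
    is a [0.1, 1.0] knob driving, e.g., enemy spawn rate and speed."""
    score = waves_cleared - 2 * deaths - 0.1 * hits_taken
    if score > 1:          # player is cruising: raise the challenge
        difficulty = min(1.0, difficulty + 0.1)
    elif score < -1:       # player is frustrated: ease off
        difficulty = max(0.1, difficulty - 0.1)
    return round(difficulty, 2)

# A skilled player clears waves nearly untouched: difficulty ramps up.
d = 0.5
for _ in range(3):
    d = adjust_difficulty(d, deaths=0, hits_taken=2, waves_cleared=2)
print(d)
```

    Clamping the step size and range is the usual way such rules avoid oscillating between too-easy and too-hard, which is exactly the frustration the questionnaire results address.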

  15. Study of parameters of the nearest neighbour shared algorithm on clustering documents

    NASA Astrophysics Data System (ADS)

    Mustika Rukmi, Alvida; Budi Utomo, Daryono; Imro’atus Sholikhah, Neni

    2018-03-01

    Document clustering is one way of automatically managing documents, extracting document topics and rapidly filtering information. Preprocessing of the documents by text mining consists of keyword extraction using Rapid Automatic Keyphrase Extraction (RAKE) and representation of each document as a concept vector using Latent Semantic Analysis (LSA). Clustering is then performed on this preprocessed representation so that documents on similar topics fall in the same cluster. The Shared Nearest Neighbour (SNN) algorithm is a clustering method based on the number of 'nearest neighbours' shared. Its parameters are: k, the number of nearest-neighbour documents; ɛ, the required number of shared nearest-neighbour documents; and MinT, the minimum number of similar documents that can form a cluster. The SNN algorithm is characterised by shared-'neighbour' properties: each cluster is formed by keywords that are shared by its documents, and a cluster can be built on more than one keyword if the frequency of keywords appearing in the documents is also high. The choice of parameter values for the SNN algorithm affects the clustering results. A higher k increases the number of neighbour documents of each document, so the similarity among neighbouring documents is lower and the accuracy of each cluster is also lower. A higher ɛ causes each document to capture only neighbour documents with high similarity when building a cluster, and also leaves more documents unclassified (noise). A higher MinT decreases the number of clusters, since groups of similar documents smaller than MinT cannot form clusters. The parameters of the SNN algorithm thus determine the quality of the clustering result and the amount of noise (unclustered documents). The Silhouette coefficient shows almost the same result in many experiments, above 0.9, which means that the SNN algorithm works well across different parameter values.
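
    A minimal sketch of how the three parameters interact, assuming a simplified SNN variant (each document counts as its own neighbour, and clusters are the connected components of the shared-neighbour graph; the paper's exact procedure may differ):

```python
from itertools import combinations

def snn_clusters(vectors, k=2, eps=2, min_t=2):
    """Simplified shared-nearest-neighbour clustering over LSA-style
    concept vectors; cosine similarity ranks neighbours."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    n = len(vectors)
    # k nearest neighbours of each document, plus the document itself
    knn = [{i} | set(sorted((j for j in range(n) if j != i),
                            key=lambda j: -cos(vectors[i], vectors[j]))[:k])
           for i in range(n)]
    # Link documents sharing at least eps neighbours (SNN similarity)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if len(knn[i] & knn[j]) >= eps:
            adj[i].add(j); adj[j].add(i)
    # Clusters = connected components with at least MinT documents
    seen, clusters, noise = set(), [], []
    for i in range(n):
        if i in seen:
            continue
        comp, stack = set(), [i]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v); stack.extend(adj[v] - comp)
        seen |= comp
        (clusters if len(comp) >= min_t else noise).append(sorted(comp))
    return clusters, noise

docs = [(1, 0.1, 0), (1, 0.2, 0), (1, 0.3, 0),   # topic A
        (0, 0.1, 1), (0, 0.2, 1), (0, 0.3, 1),   # topic B
        (1, 1, 1)]                               # off-topic outlier
clusters, noise = snn_clusters(docs)
print(clusters, noise)
```

    Raising eps above the shared-neighbour count of the outlier is what pushes it into noise here, mirroring the abstract's observation that a higher ɛ produces tighter clusters but more unclassified documents.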

  16. Presence-only Species Distribution Modeling for King Mackerel (Scomberomorus cavalla) and its 31 Prey Species in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Cai, X.; Simons, J.; Carollo, C.; Sterba-Boatwright, B.; Sadovski, A.

    2016-02-01

    Ecosystem-based fisheries management has been broadly recognized throughout the world as a way to achieve better conservation. Therefore, there is a strong need for mapping of multi-species interactions and spatial distributions. Species distribution models are widely applied, since information regarding the presence of species is usually only available for limited locations due to the high cost of fisheries surveys. Instead of regular presence and absence records, a large proportion of fisheries survey data have only presence records. This makes the modeling problem one of one-class classification (presence only), which is much more complex than regular two-class classification (presence/absence). In this study, four different presence-only species distribution algorithms (Bioclim, Domain, Mahal and Maxent) were applied using 13 environmental parameters (e.g., depth, DO, bottom types) as predictors to model the distribution of king mackerel (Scomberomorus cavalla) and its 31 prey species in the Gulf of Mexico (a total of 13,625 georeferenced presence records from OBIS and GBIF were used). Five-fold cross validation was applied for each of the 128 (4 algorithms × 32 species) models. Area under the curve (AUC) and the correlation coefficient (R) were used to evaluate model performance. The AUC of the models based on these four algorithms was 0.83±0.14, 0.77±0.16, 0.94±0.06 and 0.94±0.06, respectively, while R for the models was 0.47±0.27, 0.43±0.24, 0.27±0.16 and 0.76±0.16, respectively. Post hoc comparison with Tukey's test showed that AUC for the Maxent-based models was significantly (p<0.05) higher than for the Bioclim- and Domain-based models, but not significantly different from the Mahal-based models (p=0.955), while R for the Maxent-based models was significantly higher than for all three other model types (p<0.05). Thus, we concluded that the Maxent-based models had the best performance. 
High AUC and R also indicated that Maxent-based models could provide robust and reliable results to model target species distributions, and they can be further used to model the king mackerel food web in the Gulf of Mexico to help managers better manage related fisheries resources.
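
    The AUC used to rank the models can be computed directly as the Mann-Whitney statistic: the probability that a randomly chosen presence site scores higher than a randomly chosen background site (ties count half). A minimal sketch with made-up suitability scores, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical habitat-suitability scores at presence vs background sites
maxent_like = auc([0.9, 0.8, 0.7, 0.6], [0.5, 0.4, 0.6, 0.2])
bioclim_like = auc([0.9, 0.5, 0.4, 0.3], [0.6, 0.45, 0.35, 0.2])
print(round(maxent_like, 3), round(bioclim_like, 3))
```

    Because AUC only depends on the ranking of scores, it is well suited to presence-only models whose outputs are relative suitabilities rather than calibrated probabilities.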

  17. Improving nitrogen fertilizer use efficiency in surface- and overhead sprinkler-irrigated cotton in the desert southwest

    USDA-ARS?s Scientific Manuscript database

    Nitrogen fertilizer use efficiency (NUE) is low in surface-irrigated cotton (Gossypium hirsutum L.), especially when adding N to irrigation water. A NO3 soil-test algorithm was compared with canopy reflectance-based N management under surface- and overhead sprinkler-irrigation in Central AZ. The surfac...

  18. Transcultural Diabetes Nutrition Algorithm: A Malaysian Application

    PubMed Central

    Hamdy, Osama; Chin Chia, Yook; Lin Lim, Shueh; Kumari Natkunam, Santha; Yeong Tan, Ming; Sulaiman, Ridzoni; Nisak, Barakatun; Chee, Winnie Siew Swee; Marchetti, Albert; Hegazi, Refaat A.; Mechanick, Jeffrey I.

    2013-01-01

    Glycemic control among patients with prediabetes and type 2 diabetes mellitus (T2D) in Malaysia is suboptimal, especially given its continuous worsening over the past decade. Improved glycemic control may be achieved through a comprehensive management strategy that includes medical nutrition therapy (MNT). Evidence-based recommendations for diabetes-specific therapeutic diets are available internationally. However, Asian patients with T2D, including Malaysians, have unique disease characteristics and risk factors, as well as cultural and lifestyle dissimilarities, which may render international guidelines and recommendations less applicable and/or difficult to implement. With these thoughts in mind, a transcultural Diabetes Nutrition Algorithm (tDNA) was developed by an international task force of diabetes and nutrition experts through the restructuring of international guidelines for the nutritional management of prediabetes and T2D to account for cultural differences in lifestyle, diet, and genetic factors. The initial evidence-based global tDNA template was designed for simplicity, flexibility, and cultural modification. This paper reports the Malaysian adaptation of the tDNA, which takes into account the epidemiologic, physiologic, cultural, and lifestyle factors unique to Malaysia, as well as local guideline recommendations. PMID:24385984

  19. Transcultural diabetes nutrition algorithm: a malaysian application.

    PubMed

    Hussein, Zanariah; Hamdy, Osama; Chin Chia, Yook; Lin Lim, Shueh; Kumari Natkunam, Santha; Hussain, Husni; Yeong Tan, Ming; Sulaiman, Ridzoni; Nisak, Barakatun; Chee, Winnie Siew Swee; Marchetti, Albert; Hegazi, Refaat A; Mechanick, Jeffrey I

    2013-01-01

    Glycemic control among patients with prediabetes and type 2 diabetes mellitus (T2D) in Malaysia is suboptimal, especially given its continuous worsening over the past decade. Improved glycemic control may be achieved through a comprehensive management strategy that includes medical nutrition therapy (MNT). Evidence-based recommendations for diabetes-specific therapeutic diets are available internationally. However, Asian patients with T2D, including Malaysians, have unique disease characteristics and risk factors, as well as cultural and lifestyle dissimilarities, which may render international guidelines and recommendations less applicable and/or difficult to implement. With these thoughts in mind, a transcultural Diabetes Nutrition Algorithm (tDNA) was developed by an international task force of diabetes and nutrition experts through the restructuring of international guidelines for the nutritional management of prediabetes and T2D to account for cultural differences in lifestyle, diet, and genetic factors. The initial evidence-based global tDNA template was designed for simplicity, flexibility, and cultural modification. This paper reports the Malaysian adaptation of the tDNA, which takes into account the epidemiologic, physiologic, cultural, and lifestyle factors unique to Malaysia, as well as local guideline recommendations.

  20. A flight management algorithm and guidance for fuel-conservative descents in a time-based metered air traffic environment: Development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1984-01-01

    A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.

  1. Identifying Physician-Recognized Depression from Administrative Data: Consequences for Quality Measurement

    PubMed Central

    Spettell, Claire M; Wall, Terry C; Allison, Jeroan; Calhoun, Jaimee; Kobylinski, Richard; Fargason, Rachel; Kiefe, Catarina I

    2003-01-01

    Background: Multiple factors limit identification of patients with depression from administrative data. However, administrative data drives many quality measurement systems, including the Health Plan Employer Data and Information Set (HEDIS®). Methods: We investigated two algorithms for identification of physician-recognized depression. The study sample was drawn from primary care physician member panels of a large managed care organization. All members were continuously enrolled between January 1 and December 31, 1997. Algorithm 1 required at least two criteria in any combination: (1) an outpatient diagnosis of depression or (2) a pharmacy claim for an antidepressant. Algorithm 2 included the same criteria as algorithm 1, but required a diagnosis of depression for all patients. With algorithm 1, we identified the medical records of a stratified, random subset of patients with and without depression (n=465). We also identified patients of primary care physicians with a minimum of 10 depressed members by algorithm 1 (n=32,819) and algorithm 2 (n=6,837). Results: The sensitivity, specificity, and positive predictive values were: Algorithm 1: 95 percent, 65 percent, 49 percent; Algorithm 2: 52 percent, 88 percent, 60 percent. Compared to algorithm 1, profiles from algorithm 2 revealed higher rates of follow-up visits (43 percent, 55 percent) and appropriate antidepressant dosage acutely (82 percent, 90 percent) and chronically (83 percent, 91 percent) (p<0.05 for all). Conclusions: Both algorithms had high false positive rates. Denominator construction (algorithm 1 versus 2) contributed significantly to variability in measured quality. Our findings raise concern about interpreting depression quality reports based upon administrative data. PMID:12968818
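
    The two case-finding rules are simple enough to state as code; the claim representation below is illustrative, not the study's data model:

```python
def identify_depression(claims, algorithm):
    """Administrative-claims case finding, as the abstract describes.
    `claims` is a list of ('dx', code) and ('rx', drug-class) events
    for one member over the study year (field names are illustrative)."""
    dx = sum(1 for kind, _ in claims if kind == 'dx')   # depression diagnoses
    rx = sum(1 for kind, _ in claims if kind == 'rx')   # antidepressant fills
    if algorithm == 1:
        # Algorithm 1: at least two criteria, in any combination
        return dx + rx >= 2
    # Algorithm 2: same threshold, but a diagnosis is required
    return dx + rx >= 2 and dx >= 1

member_a = [('rx', 'SSRI'), ('rx', 'SSRI')]             # refills only
member_b = [('dx', '296.2'), ('rx', 'SSRI')]            # diagnosis + fill
print(identify_depression(member_a, 1), identify_depression(member_a, 2))
print(identify_depression(member_b, 1), identify_depression(member_b, 2))
```

    Member A illustrates why algorithm 1 is more sensitive but less specific: antidepressants prescribed for other indications satisfy it without any depression diagnosis, inflating the denominator.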

  2. Algorithm Design of CPCI Backboard's Interrupts Management Based on VxWorks' Multi-Tasks

    NASA Astrophysics Data System (ADS)

    Cheng, Jingyuan; An, Qi; Yang, Junfeng

    2006-09-01

    This paper begins with a brief introduction to the embedded real-time operating system VxWorks and the CompactPCI standard. It then presents the programming interfaces for Peripheral Component Interconnect (PCI) configuration, interrupt handling and multi-task programming under VxWorks, and places emphasis on a multi-task-based software framework for CPCI interrupt management. The method is sound in design and easy to adapt, and ensures that all possible interrupts are handled in time, which makes it suitable for hard real-time, multi-channel, high-data-rate data acquisition systems in high energy physics.

  3. Cloud-based mobility management in heterogeneous wireless networks

    NASA Astrophysics Data System (ADS)

    Kravchuk, Serhii; Minochkin, Dmytro; Omiotek, Zbigniew; Bainazarov, Ulan; Weryńska-Bieniasz, Róża; Iskakova, Aigul

    2017-08-01

    Mobility management is the key feature that supports the roaming of users between different systems. Handover is the essential aspect in the development of solutions supporting mobility scenarios. The handover process becomes more complex in a heterogeneous environment than in a homogeneous one. Seamlessness and reduced delay in servicing handover calls, which can lower the handover dropping probability, also require complex algorithms to provide the desired QoS for mobile users. The challenging problem of increasing the scalability and availability of handover decision mechanisms is discussed. The aim of the paper is to propose a cloud-based handover-as-a-service concept to cope with the challenges that arise.

  4. Near real-time, on-the-move software PED using VPEF

    NASA Astrophysics Data System (ADS)

    Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane

    2015-05-01

    The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection through efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward-looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. To overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.

  5. Updating the recommendations for treatment of tardive syndromes: A systematic review of new evidence and practical treatment algorithm.

    PubMed

    Bhidayasiri, Roongroj; Jitkritsadakul, Onanong; Friedman, Joseph H; Fahn, Stanley

    2018-06-15

    Management of tardive syndromes (TS) is challenging, with only a few evidence-based therapeutic algorithms reported in the American Academy of Neurology (AAN) guideline in 2013. Our objective was to update the evidence-based recommendations and provide a practical treatment algorithm for management of TS by addressing 5 questions: 1) Is withdrawal of dopamine receptor blocking agents (DRBAs) an effective TS treatment? 2) Does switching from typical to atypical DRBAs reduce TS symptoms? 3) What is the efficacy of pharmacologic agents in treating TS? 4) Do patients with TS benefit from chemodenervation with botulinum toxin? 5) Do patients with TS benefit from surgical therapy? Systematic reviews were conducted by searching PsycINFO, Ovid MEDLINE, PubMed, EMBASE, Web of Science and Cochrane for articles published between 2012 and 2017 to identify new evidence published after the 2013 AAN guidelines. Articles were classified according to an AAN 4-tiered evidence-rating scheme. To the extent possible, for each study we attempted to categorize results based on the description of the population enrolled (tardive dyskinesia [TD], tardive dystonia, tardive tremor, etc.). Recommendations were based on the new evidence combined with the existing guideline evidence. Deutetrabenazine and valbenazine are established as effective treatments of TD (Level A) and must be recommended as treatment. Clonazepam and Ginkgo biloba probably improve TD (Level B) and should be considered as treatment. Amantadine and tetrabenazine might be considered as TD treatment (Level C). Pallidal deep brain stimulation possibly improves TD and might be considered as a treatment for intractable TD (Level C). There is insufficient evidence to support or refute TS treatment by withdrawing causative agents or switching from typical to atypical DRBAs (Level U).

  6. Modeling of geoelectric parameters for assessing groundwater potentiality in a multifaceted geologic terrain, Ipinsa Southwest, Nigeria - A GIS-based GODT approach

    NASA Astrophysics Data System (ADS)

    Mogaji, Kehinde Anthony; Omobude, Osayande Bright

    2017-12-01

    Modeling of groundwater potentiality zones is a vital scheme for effective management of groundwater resources. This study developed a new multi-criteria decision-making algorithm for groundwater potentiality modeling by modifying the standard GOD model. The developed model, christened the GODT model, was applied to assess groundwater potential in a multi-faceted crystalline geologic terrain in southwestern Nigeria, using four unified groundwater potential conditioning factors derived from interpreted geophysical data acquired in the area, namely: Groundwater hydraulic confinement (G), aquifer Overlying strata resistivity (O), Depth to water table (D) and Thickness of aquifer (T). With the developed model algorithm, the GIS-based G, O, D and T maps were synthesized to estimate groundwater potential index (GWPI) values for the area. The estimated GWPI values were processed in a GIS environment to produce a groundwater potential prediction index (GPPI) map, which demarcates the area into four potential zones. The GODT model-based GPPI map was validated by applying both a correlation technique and a spatial attribute comparative scheme (SACS). The performance of the GODT model was compared with that of the standard analytic hierarchy process (AHP) model. The correlation technique established a regression coefficient of 89% for the GODT modeling algorithm, compared with 84% for the AHP model; the SACS validation results for the GODT and AHP models were 72.5% and 65%, respectively. The overall results indicate that both models have good capability for predicting groundwater potential zones, with the GIS-based GODT model a good alternative. The GPPI maps produced in this study can form part of a decision-making model for environmental planning and groundwater management in the area.
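
    Assuming the GODT index extends the standard multiplicative GOD rating scheme (each factor rated in [0, 1]) with a fourth, aquifer-thickness rating — the abstract does not give the formula, and the class breaks below are invented for illustration — a per-station sketch:

```python
def godt_index(g, o, d, t):
    """Hypothetical GODT potential index: product of the four ratings,
    each in [0, 1], mirroring the multiplicative GOD scheme."""
    for name, r in (('G', g), ('O', o), ('D', d), ('T', t)):
        if not 0.0 <= r <= 1.0:
            raise ValueError(f"rating {name}={r} outside [0, 1]")
    return g * o * d * t

def potential_class(gwpi):
    """Illustrative class breaks for the four potential zones."""
    if gwpi >= 0.3:
        return 'high'
    if gwpi >= 0.1:
        return 'moderate'
    if gwpi >= 0.03:
        return 'low'
    return 'very low'

# One sounding station: semi-confined aquifer, weathered overburden,
# shallow water table, thick aquifer (illustrative ratings).
gwpi = godt_index(g=0.8, o=0.7, d=0.9, t=0.8)
print(round(gwpi, 3), potential_class(gwpi))
```

    In the GIS workflow the same computation runs per raster cell, and the classified GWPI surface becomes the GPPI map.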

  7. Resource efficient data compression algorithms for demanding, WSN based biomedical applications.

    PubMed

    Antonopoulos, Christos P; Voros, Nikolaos S

    2016-02-01

During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, have increasingly utilized wireless sensor network technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms constitute a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as novel proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, differentiating it from similar research efforts, is the employment of real-world electroencephalography (EEG) and electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications; thus, the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the negative effect of increased compression rate on compression latency. Notably, the proposed schemes offer a considerable advantage in achieving the optimum trade-off between compression rate and latency. Specifically, the proposed algorithm combines a highly competitive level of compression with minimum latency, thus exhibiting real-time capabilities.
Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as the compression rate is concerned. Copyright © 2015 Elsevier Inc. All rights reserved.
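The proposed algorithms are not reproduced in the abstract. As a minimal illustration of the class of low-cost transforms that WSN compression schemes for slowly varying biosignals typically build on, here is a delta-encoding sketch (the names and structure are illustrative, not the paper's code):

```python
def delta_encode(samples):
    """Delta-encode a sequence of integer ADC samples.

    Slowly varying biosignals (EEG/ECG) yield small deltas that a
    subsequent entropy coder compresses well, at the cost of only one
    subtraction per sample, keeping latency low on a WSN node."""
    out, prev = [], 0
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def delta_decode(deltas):
    """Invert delta_encode by accumulating the differences."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out
```

The round trip is lossless, which matters for clinical EEG/ECG archiving; a lossy front end would trade fidelity for a higher compression rate.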

  8. A heuristic for efficient data distribution management in distributed simulation

    NASA Astrophysics Data System (ADS)

    Gupta, Pankaj; Guha, Ratan K.

    2005-05-01

In this paper, we propose an algorithm for reducing the complexity of region matching and for efficient multicasting in the data distribution management component of the High Level Architecture (HLA) Run Time Infrastructure (RTI). Current data distribution management (DDM) techniques rely on computing the intersection between subscription and update regions. When a subscription region and an update region of different federates overlap, the RTI establishes communication between the publisher and the subscriber and subsequently routes updates from the publisher to the subscriber. The proposed algorithm computes the update/subscription region matching for dynamic allocation of multicast groups. It provides new multicast routines that exploit the connectivity of the federation by communicating updates regarding interactions and routing information only to those federates that require them. The region-matching problem in DDM reduces to the clique-covering problem under a connection-graph abstraction in which the federates represent the vertices and the update/subscribe relations represent the edges. We develop an abstract model based on the connection graph for data distribution management and, using this model, propose a heuristic for solving the region-matching problem of DDM. We also provide a complexity analysis of the proposed heuristic.
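The region-matching step described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's heuristic: each region is modeled as an axis-aligned extent per routing-space dimension, and the matching yields the publisher-to-subscriber connectivity that drives multicast group allocation:

```python
def overlaps(a, b):
    """Axis-aligned extent overlap test in routing space.

    Each region is a dict mapping dimension name -> (lo, hi); two
    regions overlap iff their extents intersect in every dimension."""
    return all(a[d][0] <= b[d][1] and b[d][0] <= a[d][1] for d in a)

def match_regions(updates, subscriptions):
    """Compute DDM connectivity: for each publishing federate's update
    region, list the subscribing federates whose region overlaps it.
    The resulting groups are candidates for multicast allocation."""
    groups = {}
    for pub, ureg in updates.items():
        groups[pub] = [sub for sub, sreg in subscriptions.items()
                       if overlaps(ureg, sreg)]
    return groups
```

A brute-force pairwise scan like this is quadratic in the number of regions; the paper's contribution is precisely a heuristic for avoiding that cost via the connection-graph abstraction.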

  9. Application of Near-Surface Remote Sensing and computer algorithms in evaluating impacts of agroecosystem management on Zea mays (corn) phenological development in the Platte River - High Plains Aquifer Long Term Agroecosystem Research Network field sites.

    NASA Astrophysics Data System (ADS)

    Okalebo, J. A.; Das Choudhury, S.; Awada, T.; Suyker, A.; LeBauer, D.; Newcomb, M.; Ward, R.

    2017-12-01

The Long-term Agroecosystem Research (LTAR) network is a USDA-ARS effort focused on research that addresses current and emerging issues in agriculture related to the sustainability and profitability of agroecosystems in the face of climate change and population growth. There are 18 sites across the USA covering key agricultural production regions. In Nebraska, a partnership between the University of Nebraska - Lincoln and ARD/USDA resulted in the establishment of the Platte River - High Plains Aquifer LTAR site in 2014. The site conducts research to sustain multiple ecosystem services, focusing specifically on Nebraska's main agronomic production agroecosystems, which comprise abundant corn, soybeans, managed grasslands, and beef production. As part of the national LTAR network, PR-HPA participates in and contributes near-surface remotely sensed imagery of corn, soybean, and grassland canopy phenology to the PhenoCam Network through high-resolution digital cameras. This poster highlights the application, advantages, and usefulness of near-surface remotely sensed imagery in agroecosystem studies and management. It demonstrates how both infrared and red-green-blue imagery may be applied to monitor phenological events as well as crop abiotic stresses. Computer-based algorithms and analytic techniques were instrumental in revealing crop phenological changes such as green-up and tasseling in corn. This poster also reports the suitability and applicability of corn-derived computer-based algorithms for evaluating the phenological development of sorghum, since the two crops have similar phenology, with sorghum panicles being analogous to corn tassels. This latter assessment was carried out using a sorghum dataset obtained from the Transportation Energy Resources from Renewable Agriculture Phenotyping Reference Platform project, Maricopa Agricultural Center, Arizona.
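The poster does not specify its image-analysis algorithms. A common PhenoCam practice for tracking green-up from red-green-blue imagery is the green chromatic coordinate; the sketch below is an assumption about the kind of index used, not the authors' method:

```python
def green_chromatic_coordinate(r, g, b):
    """Green chromatic coordinate GCC = G / (R + G + B).

    A standard PhenoCam greenness index computed per pixel or over a
    region of interest; a rising GCC time series tracks canopy green-up,
    and its decline can accompany senescence or stress."""
    total = r + g + b
    return g / total if total else 0.0
```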

  10. Deconstructing Chronic Low Back Pain in the Older Adult-Step by Step Evidence and Expert-Based Recommendations for Evaluation and Treatment. Part VI: Lumbar Spinal Stenosis.

    PubMed

    Fritz, Julie M; Rundell, Sean D; Dougherty, Paul; Gentili, Angela; Kochersberger, Gary; Morone, Natalia E; Naga Raja, Srinivasa; Rodriguez, Eric; Rossi, Michelle I; Shega, Joseph; Sowa, Gwendolyn; Weiner, Debra K

    2016-03-01

Objective. To present the sixth in a series of articles designed to deconstruct chronic low back pain (CLBP) in older adults. This article focuses on the evaluation and management of lumbar spinal stenosis (LSS), the most common condition for which older adults undergo spinal surgery. Methods. The evaluation and treatment algorithm, a table articulating the rationale for the individual algorithm components, and stepped-care drug recommendations were developed using a modified Delphi approach. The Principal Investigator, a five-member content expert panel, and a nine-member primary care panel were involved in the iterative development of these materials. The illustrative clinical case was taken from the clinical practice of a contributor's colleague (SR). Results. We present an algorithm and supportive materials to help guide the care of older adults with LSS, a condition that occurs not uncommonly in those with CLBP. The case illustrates the importance of function-focused management and a rational approach to conservative care. Conclusions. Lumbar spinal stenosis exists not uncommonly in older adults with CLBP, and management often can be accomplished without surgery. Treatment should address all conditions contributing to pain and disability in addition to LSS. © 2016 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Treatment Algorithm for Chronic Idiopathic Constipation and Constipation-Predominant Irritable Bowel Syndrome Derived from a Canadian National Survey and Needs Assessment on Choices of Therapeutic Agents.

    PubMed

    Tse, Yvonne; Armstrong, David; Andrews, Christopher N; Bitton, Alain; Bressler, Brian; Marshall, John; Liu, Louis W C

    2017-01-01

Background. Chronic idiopathic constipation (CIC) and constipation-predominant irritable bowel syndrome (IBS-C) are common functional lower gastrointestinal disorders that impair patients' quality of life. In a national survey, we aimed to evaluate (1) Canadian physician practice patterns in the utilization of therapeutic agents listed in the new ACG and AGA guidelines; (2) physicians' satisfaction with these agents for their CIC and IBS-C patients; and (3) the usefulness of these new guidelines in their clinical practice. Methods. A 9-item questionnaire was sent to 350 Canadian specialists to evaluate their clinical practice for the management of CIC and IBS-C. Results. The response rate to the survey was 16% (n = 55). Almost all (96%) respondents followed a standard, stepwise approach to management, while they believed that only 24% of referring physicians followed the same approach. Respondents found guanylyl cyclase C (GCC) agonists most satisfying when treating their patients. Among the 69% of respondents who were aware of the published guidelines, only 50% found them helpful in prioritizing treatment choices, and 69% of respondents indicated that a treatment algorithm, applicable to Canadian practice, would be valuable. Conclusion. Based on this needs assessment, a treatment algorithm was developed to provide clinical guidance in the management of IBS-C and CIC in Canada.

  12. Supervisory Power Management Control Algorithms for Hybrid Electric Vehicles. A Survey

    DOE PAGES

    Malikopoulos, Andreas

    2014-03-31

The growing necessity for environmentally benign hybrid propulsion systems has led to the development of advanced power management control algorithms that maximize fuel economy and minimize pollutant emissions. This paper surveys the control algorithms for hybrid electric vehicles (HEVs) and plug-in HEVs (PHEVs) that have been reported in the literature to date. The exposition covers parallel, series, and power-split HEVs and PHEVs, and includes a classification of the algorithms in terms of their implementation and the chronological order of their appearance. Remaining challenges and potential future research directions are also discussed.

  13. ASTER cloud coverage reassessment using MODIS cloud mask products

    NASA Astrophysics Data System (ADS)

    Tonooka, Hideyuki; Omagari, Kunjuro; Yamamoto, Hirokazu; Tachikawa, Tetsushi; Fujita, Masaru; Paitaer, Zaoreguli

    2010-10-01

In the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Project, two kinds of algorithms are used for cloud assessment in Level-1 processing. The first algorithm, based on the LANDSAT-5 TM Automatic Cloud Cover Assessment (ACCA) algorithm, is used for the subset of daytime scenes observed with only VNIR bands and for all nighttime scenes; the second algorithm, based on the LANDSAT-7 ETM+ ACCA algorithm, is used for most daytime scenes observed with all spectral bands. However, the first algorithm does not work well owing to the lack of spectral bands sensitive to cloud detection, and both algorithms have been less accurate over snow/ice-covered areas since April 2008, when the SWIR subsystem malfunctioned. In addition, they perform less well for some combinations of surface type and sun elevation angle. We have therefore developed the ASTER cloud coverage reassessment system using MODIS cloud mask (MOD35) products and have reassessed cloud coverage for all ASTER archived scenes (>1.7 million scenes). All of the new cloud coverage data are included in the Image Management System (IMS) databases of the ASTER Ground Data System (GDS) and NASA's Land Processes Distributed Active Archive Center (LP DAAC) and are used for ASTER product searches by users, and cloud mask images are distributed to users through the Internet. Daily upcoming scenes (about 400 scenes per day) are reassessed and inserted into the IMS databases within 5 to 7 days of each scene's observation date. Some validation studies for the new cloud coverage data and some mission-related analyses using those data are also demonstrated in the present paper.

  14. Dynamic Staffing and Rescheduling in Software Project Management: A Hybrid Approach.

    PubMed

    Ge, Yujia; Xu, Bin

    2016-01-01

Resource allocation can be influenced by various dynamic elements, such as the skills of engineers and the growth of those skills, which requires managers to find an effective and efficient tool to support their staffing decision-making processes. Rescheduling happens commonly and frequently during project execution, and control decisions have to be made when new resources are added or tasks are changed. In this paper we propose a software project staffing model that considers dynamic elements of staff productivity, with a Genetic Algorithm (GA) and Hill Climbing (HC) based optimizer. Since a newly generated reschedule that differs dramatically from the initial schedule can cause an obvious increase in shifting cost, our rescheduling strategies consider both efficiency and stability. The results of real-world case studies and extensive simulation experiments show that our proposed method is effective and achieves performance comparable to other heuristic algorithms in most cases.
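As an illustration of the HC half of the optimizer described above, here is a minimal hill-climbing sketch for task-to-engineer assignment with productivity-scaled effort. The cost function (makespan) and the single-task move set are simplified assumptions, not the paper's staffing model:

```python
import random

def makespan(assign, durations, productivity):
    """Completion time of the busiest engineer: each task's nominal
    duration is divided by the assigned engineer's productivity."""
    load = {}
    for task, eng in assign.items():
        load[eng] = load.get(eng, 0.0) + durations[task] / productivity[eng]
    return max(load.values())

def hill_climb(tasks, engineers, durations, productivity, iters=200, seed=0):
    """Start from a random assignment, then repeatedly reassign one
    random task, keeping only moves that strictly shrink the makespan."""
    rng = random.Random(seed)
    assign = {t: rng.choice(engineers) for t in tasks}
    best = makespan(assign, durations, productivity)
    for _ in range(iters):
        t = rng.choice(tasks)
        old = assign[t]
        assign[t] = rng.choice(engineers)
        cost = makespan(assign, durations, productivity)
        if cost < best:
            best = cost
        else:
            assign[t] = old  # revert non-improving move
    return assign, best
```

A GA layer, as in the paper, would maintain a population of such assignments and recombine them; stability against the initial schedule would enter as an extra penalty term in the cost.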

  15. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A slight decrease in the time required for this task can be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. The numerical complexity can be further reduced via circuit decomposition and parallel solution of the blocks, taking the BBD matrix structure as a departure point. This block-parallel approach may yield a considerable benefit, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.

  16. Dynamic Staffing and Rescheduling in Software Project Management: A Hybrid Approach

    PubMed Central

    Ge, Yujia; Xu, Bin

    2016-01-01

Resource allocation can be influenced by various dynamic elements, such as the skills of engineers and the growth of those skills, which requires managers to find an effective and efficient tool to support their staffing decision-making processes. Rescheduling happens commonly and frequently during project execution, and control decisions have to be made when new resources are added or tasks are changed. In this paper we propose a software project staffing model that considers dynamic elements of staff productivity, with a Genetic Algorithm (GA) and Hill Climbing (HC) based optimizer. Since a newly generated reschedule that differs dramatically from the initial schedule can cause an obvious increase in shifting cost, our rescheduling strategies consider both efficiency and stability. The results of real-world case studies and extensive simulation experiments show that our proposed method is effective and achieves performance comparable to other heuristic algorithms in most cases. PMID:27285420

  17. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domino, Stefan P.; Ananthan, Shreyas; Knaus, Robert C.

The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test-driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control-volume finite-element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-ups ranging between two and four times, depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested), especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines.
Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with assembly timings faster than those observed on the Haswell architecture. The computational workload of higher-order meshes therefore seems ideally suited for the many-core architecture and justifies further exploration of higher-order methods on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss-Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.

  18. Burned area detection based on Landsat time series in savannas of southern Burkina Faso

    NASA Astrophysics Data System (ADS)

    Liu, Jinxiu; Heiskanen, Janne; Maeda, Eduardo Eiji; Pellikka, Petri K. E.

    2018-02-01

West African savannas are subject to regular fires, which have impacts on vegetation structure, biodiversity, and carbon balance. Efficient and accurate mapping of the burned area associated with seasonal fires can greatly benefit decision making in land management. Since coarse-resolution burned area products cannot meet the accuracy needed for fire management and climate modelling at local scales, medium-resolution Landsat data are a promising alternative for local-scale studies. In this study, we developed an algorithm for continuous monitoring of annual burned areas using Landsat time series. The algorithm is based on burned pixel detection using harmonic model fitting with Landsat time series and breakpoint identification in the time series data. This approach was tested in a savanna area in southern Burkina Faso using 281 images acquired between October 2000 and April 2016. An overall accuracy of 79.2% was obtained, with balanced omission and commission errors. This represents a significant improvement over the MODIS burned area product (67.6%), which had more omission errors than commission errors, indicating underestimation of the total burned area. By observing the spatial distribution of burned areas, we found that the Landsat-based method misclassified cropland and cloud shadows as burned areas due to their similar spectral response, and the MODIS burned area product omitted small and fragmented burned areas. The proposed algorithm is flexible and robust against decreased data availability caused by clouds and Landsat 7 missing lines, and therefore has high potential for application to other landscapes in future studies.
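The abstract's two ingredients, harmonic model fitting and breakpoint identification, can be sketched as follows. This is a minimal reconstruction under simplifying assumptions (a single annual harmonic, a breakpoint flagged as the largest negative residual from the seasonal curve), not the authors' algorithm:

```python
import math

def fit_harmonic(times, values, period=365.0):
    """Least-squares fit of y ~ a + b*cos(wt) + c*sin(wt) by solving the
    3x3 normal equations with Gaussian elimination (partial pivoting)."""
    w = 2.0 * math.pi / period
    rows = [(1.0, math.cos(w * t), math.sin(w * t)) for t in times]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, values)) for i in range(3)]
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return x

def detect_break(times, values, threshold, period=365.0):
    """Flag the observation with the largest negative residual from the
    fitted seasonal curve; a sharp drop below the harmonic model is the
    burn signature in a vegetation-index time series. Returns the index
    of the breakpoint, or None if no residual exceeds the threshold."""
    a, b, c = fit_harmonic(times, values, period)
    w = 2.0 * math.pi / period
    residuals = [y - (a + b * math.cos(w * t) + c * math.sin(w * t))
                 for t, y in zip(times, values)]
    worst = min(range(len(residuals)), key=lambda i: residuals[i])
    return worst if residuals[worst] < -threshold else None
```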

  19. Impact of location on outcome after penetrating colon injuries.

    PubMed

    Sharpe, John P; Magnotti, Louis J; Weinberg, Jordan A; Zarzaur, Ben L; Shahan, Charles P; Parks, Nancy A; Fabian, Timothy C; Croce, Martin A

    2012-12-01

Most studies examining suture line failure after penetrating colon injuries have focused on right- versus left-sided injuries. In our institution, operative decisions (resection plus anastomosis vs. diversion) are based on a defined management algorithm regardless of injury location. The purpose of this study was to evaluate the effect of injury location on outcomes after penetrating colon injuries. Consecutive patients with full-thickness penetrating colon injuries over a 13-year period were stratified by age, injury location and mechanism, and severity of shock. According to the algorithm, patients with nondestructive injuries underwent primary repair. Destructive wounds underwent resection plus anastomosis in the absence of comorbidities or large preoperative or intraoperative transfusion requirements (>6 U of packed red blood cells); otherwise, they were diverted. Injury location was defined as ascending, transverse, descending (including splenic flexure), and sigmoid. Multivariable logistic regression was performed to determine whether injury location was an independent predictor of either morbidity or mortality. Four hundred sixty-nine patients were identified: 314 (67%) underwent primary repair and 155 (33%) underwent resection. Most injuries involved the transverse colon (39%), followed by the ascending colon (26%), the descending colon (21%), and the sigmoid colon (14%). Overall, there were 13 suture line failures (3%) and 72 abscesses (15%). Most suture line failures involved injuries to the descending colon (p = 0.06), whereas most abscesses followed injuries to the ascending colon (p = 0.37). Multivariable logistic regression failed to identify injury location as an independent predictor of either morbidity or mortality after adjusting for 24-hour transfusions, base excess, shock index, injury mechanism, and operative management. Injury location did not affect morbidity or mortality after penetrating colon injuries.
Nondestructive injuries should be primarily repaired. For destructive injuries, operative decisions based on a defined algorithm rather than injury location achieve acceptably low morbidity and mortality rates and simplify management. Prognostic study, level III.
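The management algorithm as stated in the abstract reduces to a small decision rule; the following sketch encodes it directly (the function name and argument encoding are illustrative):

```python
def colon_injury_plan(destructive, comorbidities, prbc_units):
    """Operative decision per the defined management algorithm described
    in the abstract: nondestructive wounds undergo primary repair;
    destructive wounds undergo resection plus anastomosis unless the
    patient has comorbidities or transfusion requirements exceed 6 units
    of packed red blood cells, in which case the wound is diverted."""
    if not destructive:
        return "primary repair"
    if comorbidities or prbc_units > 6:
        return "diversion"
    return "resection and anastomosis"
```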

  20. Clinically oriented device programming in bradycardia patients: part 1 (sinus node disease). Proposals from AIAC (Italian Association of Arrhythmology and Cardiac Pacing).

    PubMed

    Ziacchi, Matteo; Palmisano, Pietro; Biffi, Mauro; Ricci, Renato P; Landolina, Maurizio; Zoni-Berisso, Massimo; Occhetta, Eraldo; Maglia, Giampiero; Botto, Gianluca; Padeletti, Luigi; Boriani, Giuseppe

    2018-04-01

Modern pacemakers have an increasing number of programmable parameters and specific algorithms designed to optimize pacing therapy in relation to the individual characteristics of patients. When choosing the most appropriate pacemaker type and programming, the following variables must be taken into account: the type of bradyarrhythmia at the time of pacemaker implantation; the cardiac chamber requiring pacing, and the percentage of pacing actually needed to correct the rhythm disorder; the possible association of multiple rhythm disturbances and conduction diseases; and the evolution of conduction disorders during follow-up. The goals of device programming are to preserve or restore the heart rate response to metabolic and hemodynamic demands; to maintain physiological conduction; to maximize device longevity; and to detect, prevent, and treat atrial arrhythmia. In patients with sinus node disease, the optimal pacing mode is DDDR. Based on all the available evidence, in this setting we consider appropriate the activation of the following algorithms: rate-responsive function in patients with chronotropic incompetence; algorithms to maximize intrinsic atrioventricular conduction in the absence of atrioventricular block; mode-switch algorithms; algorithms for autoadaptive management of the atrial pacing output; and algorithms for the prevention and treatment of atrial tachyarrhythmias in the subgroup of patients with atrial tachyarrhythmias/atrial fibrillation. The purpose of this two-part consensus document is to provide specific suggestions (based on an extensive literature review) on appropriate pacemaker settings in relation to patients' clinical features.

  1. An approach for management of geometry data

    NASA Technical Reports Server (NTRS)

    Dube, R. P.; Herron, G. J.; Schweitzer, J. E.; Warkentine, E. R.

    1980-01-01

    The strategies for managing Integrated Programs for Aerospace Design (IPAD) computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. IPAD's data base system makes this information available to all authorized departments in a company. A discussion of the data structures and algorithms required to support geometry in IPIP (IPAD's data base management system) is presented. Through the use of IPIP's data definition language, the structure of the geometry components is defined. The data manipulation language is the vehicle by which a user defines an instance of the geometry. The manipulation language also allows a user to edit, query, and manage the geometry. The selection of canonical forms is a very important part of the IPAD geometry. IPAD has a canonical form for each entity and provides transformations to alternate forms; in particular, IPAD will provide a transformation to the ANSI standard. The DBMS schemas required to support IPAD geometry are explained.

  2. HDL Based FPGA Interface Library for Data Acquisition and Multipurpose Real Time Algorithms

    NASA Astrophysics Data System (ADS)

    Fernandes, Ana M.; Pereira, R. C.; Sousa, J.; Batista, A. J. N.; Combo, A.; Carvalho, B. B.; Correia, C. M. B. A.; Varandas, C. A. F.

    2011-08-01

The inherent parallelism of the logic resources, the flexibility of its configuration, and its performance at high processing frequencies make the field-programmable gate array (FPGA) the most suitable device for both real-time algorithm processing and data transfer in instrumentation modules. Moreover, the reconfigurability of FPGA-based modules enables different applications to be deployed on the same module. When using a reconfigurable module for various applications, the availability of a common interface library for easier implementation of the algorithms on the FPGA leads to more efficient development. The FPGA configuration is usually specified in a hardware description language (HDL) or another higher-level descriptive language. The critical paths, such as the management of internal hardware clocks, which require deep knowledge of the module's behavior, should be implemented in HDL to optimize the timing constraints. The common interface library should include these critical paths, freeing the application designer from hardware complexity and allowing any of the available high-level abstraction languages to be chosen for the algorithm implementation. With this purpose, a modular Verilog code was developed for the Virtex 4 FPGA of the in-house Transient Recorder and Processor (TRP) hardware module, based on the Advanced Telecommunications Computing Architecture (ATCA), with eight channels sampling at up to 400 MSamples/s (MSPS). The TRP was designed to perform real-time Pulse Height Analysis (PHA), Pulse Shape Discrimination (PSD), and Pile-Up Rejection (PUR) algorithms at high count rates (a few Mevents/s). A brief description of this modular code is presented, and examples of its use as an interface with end-user algorithms, including a PHA with PUR, are described.

  3. Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu

    2014-09-11

The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art, including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.
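The Active Set Conservative and Reactive Conservative algorithms are not specified in the abstract. The conservative bound that such algorithms refine can be sketched as the classic lookahead-based safe-time computation; this is an assumption about the underlying mechanism, not the paper's method:

```python
def safe_time(peer_times, lookaheads):
    """Conservative lower bound on how far a simulator may advance.

    Each peer reports its current clock and guarantees (via its
    lookahead) that it will send no message timestamped earlier than
    clock + lookahead. Processing events up to the minimum of these
    bounds can therefore never violate causality."""
    return min(t + lookaheads[p] for p, t in peer_times.items())
```

The overhead the paper attacks is precisely how often this bound must be recomputed and communicated between the power grid and network simulators.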

  4. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of a system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms differently and that, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, the metrics offer ideas about how metrics suitable for prognostics may be designed so that the evaluation procedure can be standardized.
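One example of a prognostics-tailored metric in the spirit of this work is an accuracy-cone check: at a given evaluation time, the predicted RUL must fall within a band whose width shrinks proportionally with the true RUL. The sketch below is illustrative; the parameterization is an assumption, not the paper's exact definition:

```python
def within_accuracy_cone(rul_true, rul_pred, alpha=0.2):
    """True if the predicted RUL lies within +/- alpha * true RUL.

    Because the band is proportional to the remaining life, the
    requirement tightens as the system approaches end of life, which is
    exactly when accurate predictions matter most."""
    return abs(rul_pred - rul_true) <= alpha * rul_true
```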

  5. JOURNAL CLUB: Plagiarism in Manuscripts Submitted to the AJR: Development of an Optimal Screening Algorithm and Management Pathways.

    PubMed

    Taylor, Donna B

    2017-04-01

    The objective of this study was to investigate the incidence of plagiarism in a sample of manuscripts submitted to the AJR using CrossCheck, develop an algorithm to identify significant plagiarism, and formulate management pathways. A sample of 110 of 1610 (6.8%) manuscripts submitted to AJR in 2014 in the categories of Original Research or Review were analyzed using CrossCheck and manual assessment. The overall similarity index (OSI), highest similarity score from a single source, whether duplication was from single or multiple origins, journal section, and presence or absence of referencing the source were recorded. The criteria outlined by the International Committee of Medical Journal Editors were the reference standard for identifying manuscripts containing plagiarism. Statistical analysis was used to develop a screening algorithm to maximize sensitivity and specificity for the detection of plagiarism. Criteria for defining the severity of plagiarism and management pathways based on the severity of the plagiarism were determined. Twelve manuscripts (10.9%) contained plagiarism. Nine had an OSI excluding quotations and references of less than 20%. In seven, the highest similarity score from a single source was less than 10%. The highest similarity score from a single source was the work of the same author or authors in nine. Common sections for duplication were the Materials and Methods, Discussion, and abstract. Referencing the original source was lacking in 11. Plagiarism was undetected at submission in five of these 12 articles; two had been accepted for publication. The most effective screening algorithm was to average the OSI including quotations and references and the highest similarity score from a single source and to submit manuscripts with an average value of more than 12% for further review. The current methods for detecting plagiarism are suboptimal. A new screening algorithm is proposed.
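The proposed screening rule as stated in the abstract, averaging the OSI including quotations and references with the highest single-source similarity score and submitting manuscripts above 12% for further review, can be encoded directly (the function name is illustrative):

```python
def flag_for_review(osi_incl_quotes, highest_single_source, threshold=12.0):
    """Screening rule from the study: average the overall similarity
    index (including quotations and references, as a percentage) with
    the highest similarity score from a single source, and flag the
    manuscript for further review when the average exceeds 12%."""
    return (osi_incl_quotes + highest_single_source) / 2.0 > threshold
```

Averaging the two scores balances the study's two findings: plagiarized manuscripts often had a low overall OSI but a high single-source score, or vice versa, so neither measure alone was a reliable trigger.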

  6. Dynamic Power Distribution System Management With a Locally Connected Communication Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhang, Kaiqing; Basar, Tamer

    Coordinated optimization and control of distribution-level assets can enable reliable and optimal integration of massive amounts of distributed energy resources (DERs) and facilitate distribution system management (DSM). Accordingly, the objective is to coordinate the power injection at the DERs to maintain certain quantities across the network, e.g., voltage magnitude, line flows, or line losses, close to a desired profile. By and large, the performance of DSM algorithms has been challenged by two factors: i) the possibly non-strongly connected communication network over the DERs that hinders coordination; ii) the dynamics of the real system caused by the DERs with heterogeneous capabilities, time-varying operating conditions, and real-time measurement mismatches. In this paper, we investigate the modeling and algorithm design and analysis with these two factors in mind. In particular, a game-theoretic characterization is first proposed to account for a locally connected communication network over the DERs, along with an analysis of the existence and uniqueness of the Nash equilibrium (NE) therein. To achieve the equilibrium in a distributed fashion, a projected-gradient-based asynchronous DSM algorithm is then advocated. The algorithm performance, including the convergence speed and the tracking error, is analytically guaranteed under the dynamic setting. Extensive numerical tests on both synthetic and realistic cases corroborate the analytical results.
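    The core primitive behind a projected-gradient method like the one advocated above is: take a gradient descent step, then project each DER's power injection back onto its feasible set. A generic box-constrained sketch, not the paper's exact update:

```python
def projected_gradient_step(x, grad, step, lower, upper):
    """One projected-gradient update: descend along the gradient, then
    clip each coordinate (e.g., a DER's power injection) to its box
    constraints [lower[i], upper[i]]. All names are illustrative."""
    return [min(upper[i], max(lower[i], x[i] - step * grad[i]))
            for i in range(len(x))]
```

    In the asynchronous setting described in the abstract, each agent would apply such a step on its own coordinates using possibly outdated information from its neighbors.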

  7. Design Principles and Algorithms for Air Traffic Arrival Scheduling

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Itoh, Eri

    2014-01-01

    This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air traffic management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.
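    At its simplest, the FCFS protocol described above sorts aircraft by estimated time of arrival and pushes each scheduled landing time back just far enough to respect runway separation. Function and variable names here are illustrative, not the TMA's actual interface:

```python
def fcfs_schedule(etas, min_sep):
    """First-come-first-served runway scheduling sketch.

    etas:    estimated times of arrival (e.g., minutes past the hour).
    min_sep: required separation between successive landings.
    Returns a dict mapping aircraft index -> scheduled time of arrival;
    the delay absorbed by aircraft i is sta[i] - etas[i]."""
    order = sorted(range(len(etas)), key=lambda i: etas[i])  # FCFS order
    sta, prev = {}, None
    for i in order:
        t = etas[i] if prev is None else max(etas[i], prev + min_sep)
        sta[i] = t
        prev = t
    return sta
```

    The real TMA scheduler layers runway allocation and high/low-altitude delay allocation on top of this basic ordering.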

  8. Assessment and management of fracture risk in patients with Parkinson's disease.

    PubMed

    Lyell, Veronica; Henderson, Emily; Devine, Mark; Gregson, Celia

    2015-01-01

    Parkinson's disease (PD) is associated with substantially increased fracture risk, particularly hip fracture, which can occur relatively early in the course of PD. Despite this, current national clinical guidelines for PD fail to adequately address fracture risk assessment or the management of bone health. We appraise the evidence supporting bone health management in PD and propose a PD-specific algorithm for the fracture risk assessment and the management of bone health in patients with PD and related movement disorders. The algorithm considers (i) calcium and vitamin D replacement and maintenance, (ii) quantification of prior falls and fractures, (iii) calculation of 10-year major osteoporotic and hip fracture risks using Qfracture, (iv) application of fracture risk thresholds, which if fracture risk is high (v) prompts anti-resorptive treatment, with or without dual X-ray absorptiometry, and if low (vi) prompts re-assessment with FRAX and application of National Osteoporosis Guidelines Group (NOGG) guidance. A range of anti-resorptive agents are now available to treat osteoporosis; we review their use from the specific perspective of a clinician managing a patient population with PD. In conclusion, our current evidence base supports updating of guidelines globally concerning the management of PD, which presently fail to adequately address bone health. © The Author 2014. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
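    Steps (iv) through (vi) of the proposed algorithm amount to a threshold branch on the calculated fracture risk. A schematic sketch only; the 3% hip-fracture threshold is an illustrative placeholder, not a value taken from the paper, and real thresholds come from national guidance:

```python
def bone_health_pathway(ten_year_hip_risk_pct, high_risk_threshold_pct=3.0):
    """Schematic branch for steps (iv)-(vi): if the QFracture-derived
    10-year hip fracture risk exceeds the (hypothetical) threshold,
    treat; otherwise re-assess and apply NOGG guidance."""
    if ten_year_hip_risk_pct >= high_risk_threshold_pct:
        return "anti-resorptive treatment (with or without DXA)"
    return "re-assess with FRAX and apply NOGG guidance"
```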

  9. Merits and limitations of the mode switching rate stabilization pacing algorithms in the implantable cardioverter defibrillator.

    PubMed

    Dijkman, B; Wellens, H J

    2001-09-01

    The 7250 Jewel AF Medtronic model of ICD is the first implantable device in which both therapies for atrial arrhythmias and pacing algorithms for atrial arrhythmia prevention are available. The feasibility of such extensive atrial arrhythmia management requires correct and synergistic functioning of the different algorithms that control arrhythmias. The ability of the new pacing algorithms to stabilize the atrial rate following termination of treated atrial arrhythmias was evaluated in the marker channel registration of 600 spontaneously occurring episodes in 15 patients with the Jewel AF. All patients (55+/-15 years) had structural heart disease and documented atrial and ventricular arrhythmias. Dual chamber rate stabilization pacing was present in 245 (41%) of episodes following arrhythmia termination and was part of the mode switching operation, during which pacing was provided in the dynamic DDI mode. This algorithm could function as atrial rate stabilization pacing only when there was a slow spontaneous atrial rhythm or in the presence of atrial premature beats conducted to the ventricles with a normal AV time. In the case of atrial premature beats with delayed or absent conduction to the ventricles, and in the case of ventricular premature beats, the algorithm stabilized the ventricular rate. The rate stabilization pacing in DDI mode during sinus rhythm following atrial arrhythmia termination was often extended in time due to the device-based definition of arrhythmia termination. This was also the case in patients in whom the DDD mode with the true atrial rate stabilization algorithm was programmed. The rate stabilization algorithms in the Jewel AF applied after atrial arrhythmia termination provide pacing that is not based on the timing of atrial events; only under certain circumstances can the algorithm function as atrial rate stabilization pacing. Adjustments in the availability and functioning of the rate stabilization algorithms might benefit the clinical performance of pacing as part of device therapy for atrial arrhythmias.

  10. Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolinski, B.

    The 0-1 multiple knapsack problem appears in many domains, from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform, and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space and that traversal of the solution space is not just random, but is also directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 multiple knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run, and with much less memory, on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.
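    For contrast with M(x), a plain greedy baseline in the spirit of G(x) (minus the random shuffle) can be sketched as placing items in value-density order into the first knapsack with room. Names are illustrative, not from the paper:

```python
def greedy_multi_knapsack(items, capacities):
    """Greedy baseline for the 0-1 multiple knapsack problem.

    items:      list of (value, weight) pairs.
    capacities: list of knapsack capacities.
    Returns (total value, assignment), where assignment[i] is the
    knapsack index for item i, or None if the item was left out."""
    remaining = list(capacities)
    assign = [None] * len(items)
    # Consider items in decreasing value/weight density.
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    total = 0
    for i in order:
        v, w = items[i]
        for k, cap in enumerate(remaining):
            if w <= cap:  # first knapsack with room
                remaining[k] -= w
                assign[i] = k
                total += v
                break
    return total, assign
```

    As the abstract notes, such greedy heuristics can stall at a local maximum, which is exactly what the agent-based market negotiation in M(x) is designed to avoid.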

  11. Advanced Avionics Verification and Validation Phase II (AAV&V-II)

    DTIC Science & Technology

    1999-01-01

    Algorithm 2-8 2.7 The Weak Control Dependence Algorithm 2-8 2.8 The Indirect Dependence Algorithms 2-9 2.9 Improvements to the Pleiades Object...describes some modifications made to the Pleiades object management system to increase the speed of the analysis. 2.1 THE INTERPROCEDURAL CONTROL FLOW...slow as the edges in the graph increased. The time to insert edges was addressed by enhancements to the Pleiades object management system, which are

  12. Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Sha, D.; Han, X.

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering, and other fields. However, it is challenging to efficiently store, query, and analyze high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.

  13. Hybridization of Strength Pareto Multiobjective Optimization with Modified Cuckoo Search Algorithm for Rectangular Array

    NASA Astrophysics Data System (ADS)

    Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A. Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah

    2017-04-01

    This research proposes several versions of the modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Precisely, the MCS algorithm is proposed by incorporating the Roulette wheel selection operator to choose the initial host nests (individuals) that give better results, an adaptive inertia weight to control the exploration of positions of the potential best host nests (solutions), and a dynamic discovery rate to manage the fraction probability of finding the best host nests in the 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard SPEA, forming the MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined on Zitzler-Deb-Thiele’s (ZDT’s) test functions for MO optimization. Pareto optimum trade-offs are performed to generate a set of three non-dominated solutions, which are the locations, excitation amplitudes, and excitation phases of the array elements, respectively. Overall, simulations demonstrate that the proposed MCSPSOSPEA outperforms other compatible competitors in gaining a high antenna directivity, a small half-power beamwidth (HPBW), a low average side lobe level (SLL), and/or significant predefined nulls mitigation, simultaneously.
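    The Roulette wheel selection operator mentioned above is standard fitness-proportionate selection: each candidate's chance of being picked is proportional to its fitness. A minimal sketch, with names that are illustrative rather than taken from the paper's implementation:

```python
import random


def roulette_wheel_select(fitnesses, rng=random):
    """Fitness-proportionate ("Roulette wheel") selection: spin a
    pointer over a wheel whose sector widths equal the (non-negative)
    fitness values, and return the index of the chosen candidate."""
    total = sum(fitnesses)
    pick = rng.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if pick <= acc:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off
```

    In the MCS variants above, this operator biases the choice of initial host nests toward better-performing individuals.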

  14. Computer based interpretation of infrared spectra-structure of the knowledge-base, automatic rule generation and interpretation

    NASA Astrophysics Data System (ADS)

    Ehrentreich, F.; Dietze, U.; Meyer, U.; Abbas, S.; Schulz, H.

    1995-04-01

    It is a main task within the SpecInfo-Project to develop interpretation tools that can handle a great deal more of the complicated, more specific spectrum-structure-correlations. In the first step the empirical knowledge about the assignment of structural groups and their characteristic IR-bands has been collected from literature and represented in a computer readable well-structured form. Vague, verbal rules are managed by introduction of linguistic variables. The next step was the development of automatic rule generating procedures. We had combined and enlarged the IDIOTS algorithm with the algorithm by Blaffert relying on set theory. The procedures were successfully applied to the SpecInfo database. The realization of the preceding items is a prerequisite for the improvement of the computerized structure elucidation procedure.

  15. Vehicle Routing Problem Using Genetic Algorithm with Multi Compartment on Vegetable Distribution

    NASA Astrophysics Data System (ADS)

    Kurnia, Hari; Gustri Wahyuni, Elyza; Cergas Pembrani, Elang; Gardini, Syifa Tri; Kurnia Aditya, Silfa

    2018-03-01

    A recurring problem for enterprises that manage and distribute vegetables is how to distribute them so that their quality is properly maintained. The challenges include selecting an optimal route with minimal travel time, i.e., the TSP (Traveling Salesman Problem). These problems can be modeled as a Vehicle Routing Problem (VRP) solved with a genetic algorithm using rank-based selection, order-based crossover, and order-based mutation on the selected chromosomes. This study is limited to 20 market points, 2 warehouse points (multi-compartment), and 5 vehicles. For each distribution run, one vehicle can deliver to at most 4 market points from a single warehouse and can carry at most 100 kg.
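    The order-based crossover referred to above is commonly realized as OX: copy a slice from one parent, then fill the remaining positions in the other parent's order, so the child remains a valid permutation of stops. A generic sketch of the operator, not the paper's exact implementation:

```python
import random


def order_crossover(p1, p2, rng=random):
    """Order crossover (OX) for permutation chromosomes such as VRP
    routes. Copies a random slice of p1, then fills the remaining
    positions with the missing genes in the order they appear in p2."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))      # slice boundaries
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]                # inherit slice from p1
    kept = set(p1[a:b + 1])
    fill = iter(g for g in p2 if g not in kept)  # p2's order for the rest
    for i in range(n):
        if child[i] is None:
            child[i] = next(fill)
    return child
```

    Because both parents are permutations of the same stops, the child visits every stop exactly once, which keeps the GA inside the feasible route space.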

  16. C4I Community of Interest C2 Roadmap

    DTIC Science & Technology

    2015-03-24

    QoS -based services – Digital policy-based prioritization – Dynamic bandwidth allocation – Automated network management April 15 Slide 9...Co-Site Mitigation) NC-3 • LPD/LPI Comms NC-4 • Increased Range NC-7 • Increased Loss Tolerance & Recovery NC-7 • Mobile Ad Hoc Networking NC-8...Algorithms and Software • Systems and Processes Networks and Communications • Radios and Apertures • Networks • Information April 15 Slide 8

  17. Emergency Department Management of Suspected Calf-Vein Deep Venous Thrombosis: A Diagnostic Algorithm

    PubMed Central

    Kitchen, Levi; Lawrence, Matthew; Speicher, Matthew; Frumkin, Kenneth

    2016-01-01

    Introduction Unilateral leg swelling with suspicion of deep venous thrombosis (DVT) is a common emergency department (ED) presentation. Proximal DVT (thrombus in the popliteal or femoral veins) can usually be diagnosed and treated at the initial ED encounter. When proximal DVT has been ruled out, isolated calf-vein deep venous thrombosis (IC-DVT) often remains a consideration. The current standard for the diagnosis of IC-DVT is whole-leg vascular duplex ultrasonography (WLUS), a test that is unavailable in many hospitals outside normal business hours. When WLUS is not available from the ED, recommendations for managing suspected IC-DVT vary. The objectives of this study are to use current evidence and recommendations to (1) propose a diagnostic algorithm for IC-DVT when definitive testing (WLUS) is unavailable; and (2) summarize the controversy surrounding IC-DVT treatment. Discussion The Figure combines D-dimer testing with serial CUS or a single deferred FLUS for the diagnosis of IC-DVT. Such an algorithm has the potential to safely direct the management of suspected IC-DVT when definitive testing is unavailable. Whether or not to treat diagnosed IC-DVT remains widely debated and awaiting further evidence. Conclusion When IC-DVT is not ruled out in the ED, the suggested algorithm, although not prospectively validated by a controlled study, offers an approach to diagnosis that is consistent with current data and recommendations. When IC-DVT is diagnosed, current references suggest that a decision between anticoagulation and continued follow-up outpatient testing can be based on shared decision-making. The risks of proximal progression and life-threatening embolization should be balanced against the generally more benign natural history of such thrombi, and an individual patient’s risk factors for both thrombus propagation and complications of anticoagulation. PMID:27429688

  18. Emergency Department Management of Suspected Calf-Vein Deep Venous Thrombosis: A Diagnostic Algorithm.

    PubMed

    Kitchen, Levi; Lawrence, Matthew; Speicher, Matthew; Frumkin, Kenneth

    2016-07-01

    Unilateral leg swelling with suspicion of deep venous thrombosis (DVT) is a common emergency department (ED) presentation. Proximal DVT (thrombus in the popliteal or femoral veins) can usually be diagnosed and treated at the initial ED encounter. When proximal DVT has been ruled out, isolated calf-vein deep venous thrombosis (IC-DVT) often remains a consideration. The current standard for the diagnosis of IC-DVT is whole-leg vascular duplex ultrasonography (WLUS), a test that is unavailable in many hospitals outside normal business hours. When WLUS is not available from the ED, recommendations for managing suspected IC-DVT vary. The objectives of this study are to use current evidence and recommendations to (1) propose a diagnostic algorithm for IC-DVT when definitive testing (WLUS) is unavailable; and (2) summarize the controversy surrounding IC-DVT treatment. The Figure combines D-dimer testing with serial CUS or a single deferred FLUS for the diagnosis of IC-DVT. Such an algorithm has the potential to safely direct the management of suspected IC-DVT when definitive testing is unavailable. Whether or not to treat diagnosed IC-DVT remains widely debated and awaiting further evidence. When IC-DVT is not ruled out in the ED, the suggested algorithm, although not prospectively validated by a controlled study, offers an approach to diagnosis that is consistent with current data and recommendations. When IC-DVT is diagnosed, current references suggest that a decision between anticoagulation and continued follow-up outpatient testing can be based on shared decision-making. The risks of proximal progression and life-threatening embolization should be balanced against the generally more benign natural history of such thrombi, and an individual patient's risk factors for both thrombus propagation and complications of anticoagulation.

  19. A Simple Two Aircraft Conflict Resolution Algorithm

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    2006-01-01

    Conflict detection and resolution methods are crucial for distributed air-ground traffic management, in which the crew in the cockpit, dispatchers in operation control centers, and traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection method and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm, which is often used for missile guidance during the terminal phase. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and conflict resolution methods.
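    Under the constant-speed, constant-heading assumption, the detection step reduces to closed-form closest-point-of-approach geometry. A 2D sketch under that assumption; the paper's actual formulation may differ in detail:

```python
def time_to_cpa(p1, v1, p2, v2):
    """Closest point of approach for two aircraft flying straight at
    constant speed. p1, v1, p2, v2 are 2D (x, y) position and velocity
    tuples. Returns (time_to_go, minimum_separation); time_to_go is
    clamped to 0 when the aircraft are already diverging."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    vv = vx * vx + vy * vy
    t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    dx, dy = rx + vx * t, ry + vy * t       # separation at the CPA
    return t, (dx * dx + dy * dy) ** 0.5
```

    A conflict is then declared when the predicted minimum separation falls below the required standard within the look-ahead horizon, at which point the resolution logic takes over.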

  20. Management and Analysis of Radiation Portal Monitor Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowe, Nathan C; Alcala, Scott; Crye, Jason Michael

    2014-01-01

    Oak Ridge National Laboratory (ORNL) receives, archives, and analyzes data from radiation portal monitors (RPMs). Over time the amount of data submitted for analysis has grown significantly, and in fiscal year 2013, ORNL received 545 gigabytes of data representing more than 230,000 RPM operating days. This data comes from more than 900 RPMs. ORNL extracts this data into a relational database, which is accessed through a custom software solution called the Desktop Analysis and Reporting Tool (DART). DART is used by data analysts to complete a monthly lane-by-lane review of RPM status. Recently ORNL has begun to extend its data analysis based on program-wide data processing in addition to the lane-by-lane review. Program-wide data processing includes the use of classification algorithms designed to identify RPMs with specific known issues and clustering algorithms intended to identify as-yet-unknown issues or new methods and measures for use in future classification algorithms. This paper provides an overview of the architecture used in the management of this data, performance aspects of the system, and additional requirements and methods used in moving toward an increased program-wide analysis paradigm.
