Sample records for AGV assignment algorithm

  1. Study on store-space assignment based on logistic AGV in e-commerce goods to person picking pattern

    NASA Astrophysics Data System (ADS)

    Xu, Lijuan; Zhu, Jie

    2017-10-01

    This paper studies store-space assignment based on logistics AGVs in the e-commerce goods-to-person picking pattern. A store-space assignment model based on the lowest picking cost is established, and a store-space assignment algorithm is designed following a cluster analysis based on similarity coefficients. An example analysis then compares the picking cost of the proposed algorithm against allocation by item number and against storage by ABC classification, verifying the effectiveness of the designed store-space assignment algorithm.
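
    As an illustrative sketch of the similarity-coefficient idea this abstract mentions (the coefficient choice and the sample orders are assumptions, not the paper's): a Jaccard-style similarity between two items, computed from the picking orders in which they co-occur, which could seed the cluster analysis.

```python
# Illustrative, not from the paper: Jaccard-style item similarity
# from order co-occurrence, a common input to store-space clustering.

def similarity(orders, i, j):
    """|orders containing both items| / |orders containing either item|."""
    with_i = {k for k, order in enumerate(orders) if i in order}
    with_j = {k for k, order in enumerate(orders) if j in order}
    union = with_i | with_j
    return len(with_i & with_j) / len(union) if union else 0.0

orders = [{"A", "B"}, {"A", "B", "C"}, {"C"}]
print(similarity(orders, "A", "B"))           # 1.0 (always picked together)
print(round(similarity(orders, "B", "C"), 3)) # 0.333
```

    Items with high similarity would then be clustered and assigned to nearby store spaces to cut picking cost.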

  2. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    PubMed

    Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.
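
    As a hedged illustration of the makespan objective these algorithms minimize (task names, durations, and the longest-processing-time greedy baseline below are illustrative assumptions, not the paper's GA-PSO):

```python
# Illustrative sketch: the makespan of an AGV task assignment, the
# quantity the GA/PSO variants in the abstract minimize.

def makespan(assignment, durations):
    """Makespan = completion time of the busiest AGV."""
    loads = {}
    for task, agv in assignment.items():
        loads[agv] = loads.get(agv, 0) + durations[task]
    return max(loads.values())

def greedy_assign(durations, n_agvs):
    """Baseline: give each task (longest first) to the least-loaded AGV."""
    loads = [0] * n_agvs
    assignment = {}
    for task in sorted(durations, key=durations.get, reverse=True):
        agv = loads.index(min(loads))
        assignment[task] = agv
        loads[agv] += durations[task]
    return assignment

durations = {"t1": 4, "t2": 3, "t3": 3, "t4": 2}
plan = greedy_assign(durations, 2)
print(makespan(plan, durations))  # 6
```

    A metaheuristic such as GA-PSO would search over assignments like `plan`, scoring each candidate with a makespan evaluation of this kind (plus battery-charge and fleet-size terms).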

  3. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization

    PubMed Central

    Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm’s flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs’ battery charge. Assessment of the numerical examples’ scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software. PMID:28263994

  4. Dynamic Task Assignment of Autonomous Distributed AGV in an Intelligent FMS Environment

    NASA Astrophysics Data System (ADS)

    Fauadi, Muhammad Hafidz Fazli Bin Md; Lin, Hao Wen; Murata, Tomohiro

    The need to implement distributed systems is growing significantly, as they have proven effective in keeping organizations flexible against a highly demanding market. Nevertheless, large technical gaps still need to be addressed to achieve significant results. We propose a distributed architecture for controlling Automated Guided Vehicle (AGV) operation based on a multi-agent architecture. System architectures and agent functions have been designed to support distributed control of AGVs. Furthermore, an enhanced agent communication protocol has been configured to accommodate the dynamic attributes of the AGV task assignment procedure. Results show that the technique provides a better solution.

  5. Simultaneous Scheduling of Jobs, AGVs and Tools Considering Tool Transfer Times in Multi Machine FMS By SOS Algorithm

    NASA Astrophysics Data System (ADS)

    Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.

    2017-08-01

    This article addresses the simultaneous scheduling of machines, AGVs and tools in a multi-machine Flexible Manufacturing System (FMS), where machines are allowed to share tools, considering transfer times of jobs and tools between machines, to generate optimal sequences that minimize makespan. FMS performance is expected to improve through effective utilization of its resources and proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a potent tool that has proven to be a good alternative for solving optimization problems such as scheduling. The proposed SOS algorithm is first tested on 22 job sets, with makespan as the objective, for scheduling machines and tools with tool sharing but without transfer times of jobs and tools, and the results are compared with those of existing methods; the results show that SOS outperforms them. The same SOS algorithm is then used for simultaneous scheduling of machines, AGVs and tools with tool sharing, considering transfer times of jobs and tools, to determine the optimal sequences that minimize makespan.

  6. A Clonal Selection Algorithm for Minimizing Distance Travel and Back Tracking of Automatic Guided Vehicles in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Chawla, Viveak Kumar; Chanda, Arindam Kumar; Angra, Surjit

    2018-03-01

    A flexible manufacturing system (FMS) consists of several programmable production work centers, material handling systems (MHSs), assembly stations, and automatic storage and retrieval systems. In an FMS, automatic guided vehicles (AGVs) play a vital role in material handling operations and enhance the overall performance of the system. To achieve low makespan and high throughput in FMS operations, it is imperative to integrate the production work center schedules with the AGV schedules. The production schedule for work centers is generated by applying the Giffler and Thompson algorithm under four hybrid priority dispatching rules. The clonal selection algorithm (CSA) is then applied for simultaneous scheduling to reduce backtracking as well as the travel distance of AGVs within the FMS facility. The proposed procedure is computationally tested on a benchmark FMS configuration from the literature, and the findings clearly indicate that the CSA yields the best results in comparison with other methods applied in the literature.

  7. The vision guidance and image processing of AGV

    NASA Astrophysics Data System (ADS)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    Firstly, the principle of AGV vision guidance is introduced and the deviation and deflection angle are measured by image coordinate system. The visual guidance image processing platform is introduced. In view of the fact that the AGV guidance image contains more noise, the image has already been smoothed by a statistical sorting. By using AGV sampling way to obtain image guidance, because the image has the best and different threshold segmentation points. In view of this situation, the method of two-dimensional maximum entropy image segmentation is used to solve the problem. We extract the foreground image in the target band by calculating the contour area method and obtain the centre line with the least square fitting algorithm. With the help of image and physical coordinates, we can obtain the guidance information.
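
    As an illustrative sketch of the final fitting step the abstract mentions (the centre points below are made up; the paper's own implementation is not shown), an ordinary least-squares fit of the guide line x = a·y + b to per-row centre points:

```python
# Illustrative sketch: least-squares fit x = a*y + b to (y, x) centre
# points extracted per image row, yielding the guide-line centre line.

def fit_line(points):
    """Ordinary least squares for x = a*y + b over (y, x) pairs."""
    n = len(points)
    sy = sum(y for y, _ in points)
    sx = sum(x for _, x in points)
    syy = sum(y * y for y, _ in points)
    syx = sum(y * x for y, x in points)
    a = (n * syx - sy * sx) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

# Centre points lying exactly on the line x = 0.5*y + 10.
pts = [(0, 10.0), (2, 11.0), (4, 12.0), (6, 13.0)]
a, b = fit_line(pts)
print(a, b)  # 0.5 10.0
```

    The fitted slope and intercept, mapped from image to physical coordinates, give the deviation and deflection angle the controller needs.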

  8. Multicriteria meta-heuristics for AGV dispatching control based on computational intelligence.

    PubMed

    Naso, David; Turchiano, Biagio

    2005-04-01

    In many manufacturing environments, automated guided vehicles are used to move the processed materials between various pickup and delivery points. The assignment of vehicles to unit loads is a complex problem that is often solved in real-time with simple dispatching rules. This paper proposes an automated guided vehicles dispatching approach based on computational intelligence. We adopt a fuzzy multicriteria decision strategy to simultaneously take into account multiple aspects in every dispatching decision. Since the typical short-term view of dispatching rules is one of the main limitations of such real-time assignment heuristics, we also incorporate in the multicriteria algorithm a specific heuristic rule that takes into account the empty-vehicle travel on a longer time-horizon. Moreover, we also adopt a genetic algorithm to tune the weights associated to each decision criteria in the global decision algorithm. The proposed approach is validated by means of a comparison with other dispatching rules, and with other recently proposed multicriteria dispatching strategies also based on computational Intelligence. The analysis of the results obtained by the proposed dispatching approach in both nominal and perturbed operating conditions (congestions, faults) confirms its effectiveness.
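
    A minimal sketch of a weighted multicriteria dispatching decision, assuming hypothetical criteria and weights (the paper's actual criteria, fuzzy aggregation, and GA-tuned weights are not reproduced here):

```python
# Illustrative sketch: pick the AGV with the lowest weighted score over
# several normalized criteria; a GA could tune the weights, as in the paper.

def dispatch(vehicles, weights):
    """Return the name of the vehicle minimizing the weighted criteria sum."""
    def score(criteria):
        return sum(w * criteria[c] for c, w in weights.items())
    return min(vehicles, key=lambda v: score(v[1]))[0]

vehicles = [
    ("agv1", {"distance": 0.8, "wait": 0.1, "demand": 0.3}),
    ("agv2", {"distance": 0.2, "wait": 0.6, "demand": 0.4}),
]
weights = {"distance": 0.5, "wait": 0.3, "demand": 0.2}
print(dispatch(vehicles, weights))  # agv2
```

    Here agv2 scores 0.36 against agv1's 0.49, so it gets the load; adding a longer-horizon empty-travel criterion, as the paper does, just means one more weighted term.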

  9. A new memetic algorithm for mitigating tandem automated guided vehicle system partitioning problem

    NASA Astrophysics Data System (ADS)

    Pourrahimian, Parinaz

    2017-11-01

    An Automated Guided Vehicle System (AGVS) provides the flexibility and automation demanded by a Flexible Manufacturing System (FMS). However, with growing concern over responsible resource use, it is crucial to manage these vehicles efficiently in order to reduce travel time and control conflicts and congestion. This paper presents the development of a new Memetic Algorithm (MA) for optimizing the partitioning problem of a tandem AGVS. MAs employ a Genetic Algorithm (GA) as a global search and apply a local search to bring solutions to a local optimum. A new Tabu Search (TS) has been developed and combined with a GA to refine the individuals newly generated by the GA. The aim of the proposed algorithm is to minimize the maximum workload of the system. Finally, the performance of the proposed algorithm is evaluated using Matlab, and the objective function of the proposed MA is compared with that of the GA. The results show that the TS, as a local search, significantly improves the objective function of the GA for different system sizes with large and small numbers of zones, by 1.26 on average.
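
    A hedged sketch of the local-search idea only (the zones, workloads, and move rule below are illustrative assumptions, not the paper's MA): repeatedly relocate a station from the busiest tandem zone to the least busy one, with a short tabu list, to lower the maximum zone workload.

```python
# Illustrative sketch: tabu-style improvement pass over a tandem-AGVS
# partition, minimizing the maximum zone workload (the MA's objective).

def max_workload(zones, work):
    return max(sum(work[s] for s in z) for z in zones)

def improve(zones, work, iters=50):
    """Greedy station relocation with a short tabu list of recent moves."""
    tabu = []
    for _ in range(iters):
        src = max(zones, key=lambda z: sum(work[s] for s in z))
        dst = min(zones, key=lambda z: sum(work[s] for s in z))
        movable = [s for s in src if s not in tabu]
        if src is dst or not movable:
            break
        s = max(movable, key=work.get)
        before = max_workload(zones, work)
        src.remove(s); dst.append(s)
        if max_workload(zones, work) >= before:  # undo non-improving move
            dst.remove(s); src.append(s)
            break
        tabu = (tabu + [s])[-3:]  # short tabu tenure
    return zones

work = {"a": 5, "b": 4, "c": 3, "d": 2, "e": 1}
zones = [["a", "b"], ["c"], ["d", "e"]]
print(max_workload(improve(zones, work), work))  # 6 (down from 9)
```

    In the full MA this pass would refine each GA offspring before it re-enters the population, which is what the abstract credits for the improvement over the plain GA.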

  10. Adjunctive triamcinolone acetonide for Ahmed glaucoma valve implantation: a randomized clinical trial.

    PubMed

    Yazdani, Shahin; Doozandeh, Azadeh; Pakravan, Mohammad; Ownagh, Vahid; Yaseri, Mehdi

    2017-06-26

    To evaluate the effect of intraoperative sub-Tenon injection of triamcinolone acetonide (TA) as an adjunct to Ahmed glaucoma valve (AGV) implantation. In this triple-blind randomized clinical trial, 104 eyes with refractory glaucoma were randomly assigned to conventional AGV (non-TA group) or AGV with adjunctive triamcinolone (TA group). In the TA group, 10 mg TA was injected in the sub-Tenon space around the AGV plate intraoperatively. Patients were followed for 1 year. The main outcome measure was intraocular pressure (IOP). Other outcome measures included best-corrected visual acuity (BCVA), occurrence of hypertensive phase (HP), peak IOP, number of antiglaucoma medications, and complications. A total of 90 patients were included in the final analysis. Mean IOP was lower in the TA group at most follow-up visits; however, the difference was statistically significant only at the first month (p = 0.004). Linear mixed model showed that mean IOP was 1.5 mm Hg lower in the TA group throughout the study period (p = 0.006). Peak postoperative IOP was significantly lower in the TA group (19.3 ± 4.8 mm Hg versus 29 ± 9.2 mm Hg, p = 0.032). Rates of success (defined as 6 < IOP <21 mm Hg) were similar in both groups at 12 months. There was no difference in the occurrence of the HP between the 2 groups (p = 0.123). Loss of BCVA >2 lines was more common in the non-TA group (p = 0.032). Adjunctive intraoperative TA injection during AGV implantation can blunt peak IOP levels and reduce mean IOP up to 1 year. Visual outcomes also seem to be superior to standard surgery.

  11. Adjunctive Mitomycin C or Amniotic Membrane Transplantation for Ahmed Glaucoma Valve Implantation: A Randomized Clinical Trial.

    PubMed

    Yazdani, Shahin; Mahboobipour, Hassan; Pakravan, Mohammad; Doozandeh, Azadeh; Ghahari, Elham

    2016-05-01

    To determine whether adjunctive mitomycin C (MMC) or amniotic membrane transplantation (AMT) improve the outcomes of Ahmed glaucoma valve (AGV) implantation. This double-blind, stratified, 3-armed randomized clinical trial includes 75 eyes of 75 patients aged 7 to 75 years with refractory glaucoma. Eligible subjects underwent stratified block randomization; eyes were first stratified to surgery in the superior or inferior quadrants based on feasibility; in each subgroup, eyes were randomly assigned to the study arms using random blocks: conventional AGV implantation (group A, 25 eyes), AGV with MMC (group B, 25 eyes), and AGV with AMT (group C, 25 eyes). The 3 study groups were comparable regarding baseline characteristics and mean follow-up (P=0.288). A total of 68 patients including 23 eyes in group A, 25 eyes in group B, and 20 eyes group C completed the follow-up period and were analyzed. Intraocular pressure was lower in the MMC group only 3 weeks postoperatively (P=0.04) but comparable at other time intervals. Overall success rate was comparable in the 3 groups at 12 months (P=0.217). The number of eyes requiring medications (P=0.30), time to initiation of medications (P=0.13), and number of medications (P=0.22) were comparable. Hypertensive phase was slightly but insignificantly more common with standard surgery (82%) as compared with MMC-augmented (60%) and AMT-augmented (70%) procedures (P=0.23). Complications were comparable over 1 year (P=0.28). Although adjunctive MMC and AMT were safe during AGV implantation, they did not influence success rates or intraocular pressure outcomes. Complications, including hypertensive phase, were also comparable.

  12. Clinical outcomes after combined Ahmed glaucoma valve implantation and penetrating keratoplasty or pars plana vitrectomy.

    PubMed

    Lee, Jin Young; Sung, Kyung Rim; Tchah, Hung Won; Yoon, Young Hee; Kim, June Gone; Kim, Myoung Joon; Kim, Jae Yong; Yun, Sung-Cheol; Lee, Joo Yong

    2012-12-01

    To evaluate whether a combination of penetrating keratoplasty (PKP) or pars plana vitrectomy (PPV) and Ahmed glaucoma valve (AGV) implantation affords a level of success similar to that of AGV implantation alone. Eighteen eyes that underwent simultaneous PPV and AGV implantation, 14 eyes with PKP and AGV, and 30 eyes with AGV implantation alone were evaluated. Success was defined as attainment of an intraocular pressure (IOP) >5 and <22 mmHg, with or without use of anti-glaucoma medication. Kaplan-Meier survival analysis was performed to compare cumulative survival between the combined surgery groups and the AGV implantation-alone group. Cox proportional hazard regression analysis was conducted to identify factors predictive of success in each of the three groups. Mean (±standard deviation) preoperative IOP was 30.2 ± 10.2 mmHg in the PKP + AGV group, 35.2 ± 9.8 mmHg in the PPV + AGV group, and 36.2 ± 10.1 mmHg in the AGV implantation-alone group. The cumulative success rate at 18 months was 66.9%, 73.2%, and 70.8% in the three groups, respectively. Neither combined surgery group differed significantly in cumulative success rate from the AGV implantation-alone group (p = 0.556 and p = 0.487, respectively). The mean number of preoperative anti-glaucoma medications prescribed was significantly associated with success in the PKP + AGV implantation group (hazard ratio, 2.942; p = 0.024). Either PKP or PPV performed in conjunction with AGV implantation afforded success rates similar to those of AGV implantation alone. Therefore, in patients with refractory glaucoma who have underlying corneal or retinal pathology requiring treatment with PKP or PPV, AGV implantation can be performed simultaneously.

  13. A retrospective study on the outcomes of Ahmed valve versus Ahmed valve combined with fluocinolone implant in uveitic glaucoma

    PubMed Central

    Sevgi, Duriye D.; Davoudi, Samaneh; Talcott, Katherine E.; Cho, Heeyoon; Guo, Rong; Lobo, Ann-Marie; Papaliodis, George N.; Turalba, Angela; Sobrin, Lucia; Shen, Lucy Q.

    2017-01-01

    Purpose To compare the intraocular pressure (IOP) outcomes of Ahmed glaucoma valve (AGV) surgery alone versus AGV with fluocinolone implant in uveitic glaucoma patients. Methods We identified uveitic glaucoma patients with AGV surgery alone and AGV surgery combined with fluocinolone implant from the Massachusetts Eye and Ear Ocular Inflammation Database. Demographic information, visual acuity, and IOP were recorded at preoperative visits and 1, 6, and 12 months after surgery. Incidence of hypertensive phase, defined as an IOP of >21 mm Hg or use of additional treatment to lower IOP occurring any time between 7 days to 6 months postoperatively, was investigated. Multilevel mixed effects models were performed to compare the outcomes between groups. Results Eighteen eyes of 13 uveitic glaucoma patients with 1-year follow-up data were included. There were 11 eyes of 9 patients (mean age, 56.5 years; 63.6% male) in the AGV group and 7 eyes of 4 patients (mean age, 61.3 years; 71.4% male) in the AGV + fluocinolone group. There was no significant difference in visual acuity change at 1 year after surgery between groups (P = 0.25), although visual acuity improvement was significant in the AGV group (P = 0.01). The hypertensive phase occurred in 91% of AGV patients and 43% of AGV + fluocinolone patients (P = 0.30), with onset of 8-40 days (mean, 18 days) after surgery. IOP and number of glaucoma medications decreased at the 1-year postoperative visits in both the AGV group (P < 0.0001, P < 0.0001) and the AGV + fluocinolone group (P = 0.001, P < 0.0001). Compared to the AGV group, the AGV + fluocinolone group used fewer glaucoma medications (0.28 vs 1.30 [P = 0.01]) and had better inflammation control (P = 0.02). The surgical complication rates were similar between groups. Conclusions In uveitic glaucoma, AGV with fluocinolone achieves a similar, desired IOP control but with fewer glaucoma medications than AGV alone. PMID:29162989

  14. A retrospective study on the outcomes of Ahmed valve versus Ahmed valve combined with fluocinolone implant in uveitic glaucoma.

    PubMed

    Sevgi, Duriye D; Davoudi, Samaneh; Talcott, Katherine E; Cho, Heeyoon; Guo, Rong; Lobo, Ann-Marie; Papaliodis, George N; Turalba, Angela; Sobrin, Lucia; Shen, Lucy Q

    2017-01-01

    To compare the intraocular pressure (IOP) outcomes of Ahmed glaucoma valve (AGV) surgery alone versus AGV with fluocinolone implant in uveitic glaucoma patients. We identified uveitic glaucoma patients with AGV surgery alone and AGV surgery combined with fluocinolone implant from the Massachusetts Eye and Ear Ocular Inflammation Database. Demographic information, visual acuity, and IOP were recorded at preoperative visits and 1, 6, and 12 months after surgery. Incidence of hypertensive phase, defined as an IOP of >21 mm Hg or use of additional treatment to lower IOP occurring any time between 7 days to 6 months postoperatively, was investigated. Multilevel mixed effects models were performed to compare the outcomes between groups. Eighteen eyes of 13 uveitic glaucoma patients with 1-year follow-up data were included. There were 11 eyes of 9 patients (mean age, 56.5 years; 63.6% male) in the AGV group and 7 eyes of 4 patients (mean age, 61.3 years; 71.4% male) in the AGV + fluocinolone group. There was no significant difference in visual acuity change at 1 year after surgery between groups (P = 0.25), although visual acuity improvement was significant in the AGV group (P = 0.01). The hypertensive phase occurred in 91% of AGV patients and 43% of AGV + fluocinolone patients (P = 0.30), with onset of 8-40 days (mean, 18 days) after surgery. IOP and number of glaucoma medications decreased at the 1-year postoperative visits in both the AGV group (P < 0.0001, P < 0.0001) and the AGV + fluocinolone group (P = 0.001, P < 0.0001). Compared to the AGV group, the AGV + fluocinolone group used fewer glaucoma medications (0.28 vs 1.30 [P = 0.01]) and had better inflammation control (P = 0.02). The surgical complication rates were similar between groups. In uveitic glaucoma, AGV with fluocinolone achieves a similar, desired IOP control but with fewer glaucoma medications than AGV alone.

  15. A nonlinear model predictive control formulation for obstacle avoidance in high-speed autonomous ground vehicles in unstructured environments

    NASA Astrophysics Data System (ADS)

    Liu, Jiechao; Jayakumar, Paramsothy; Stein, Jeffrey L.; Ersal, Tulga

    2018-06-01

    This paper presents a nonlinear model predictive control (MPC) formulation for obstacle avoidance in high-speed, large-size autonomous ground vehicles (AGVs) with high centre of gravity (CoG) that operate in unstructured environments, such as military vehicles. The term 'unstructured' in this context denotes that there are no lanes or traffic rules to follow. Existing MPC formulations for passenger vehicles in structured environments do not readily apply to this context. Thus, a new nonlinear MPC formulation is developed to navigate an AGV safely from its initial position to a target position at high speed. First, a new cost function formulation is used that aims to find the shortest path to the target position, since no reference trajectory exists in unstructured environments. Second, a region partitioning approach is used in conjunction with a multi-phase optimal control formulation to accommodate the complicated forms the obstacle-free region can assume due to the presence of multiple obstacles in the prediction horizon in an unstructured environment. Third, the no-wheel-lift-off condition, which is the major dynamical safety concern for high-speed, high-CoG AGVs, is ensured by limiting the steering angle within a range obtained offline using a 14 degrees-of-freedom vehicle dynamics model. Thus, safe, high-speed navigation is enabled in an unstructured environment. Simulations of an AGV approaching multiple obstacles are provided to demonstrate the effectiveness of the algorithm.

  16. The efficacy of Ahmed glaucoma valve drainage devices in cases of adult refractory glaucoma in Indian eyes

    PubMed Central

    Parihar, Jitendra K S; Vats, Devendra P; Maggon, Rakesh; Mathur, Vijay; Singh, Anirudh; Mishra, Sanjay K

    2009-01-01

    Aim: To evaluate the efficacy of Ahmed glaucoma valve (AGV) drainage devices in cases of adult refractory glaucoma in Indian eyes. Settings and Design: Retrospective interventional case series study. Materials and Methods: Fifty-two eyes of 32 patients with refractory glaucoma in the age group of 35 to 60 years who underwent AGV implantation with or without concomitant procedures from January 2003 to January 2007 were studied. Of these, 46 eyes (88%) had undergone filtering surgery earlier, whereas the remaining eyes underwent primary AGV implantation following failure of maximal medical therapy. The follow-up ranged from 12 to 48 months. Results: Eighteen eyes (35%) underwent phacoemulsification with AGV implantation; penetrating keratoplasty (PK) with AGV and intraocular lens (IOL) implantation was performed in 13 eyes (25%), and AGV over a preexisting IOL in eight eyes (15%). AGV implantation alone was done in six (11%) eyes. Anterior chamber (AC) reconstruction with secondary IOL and AGV was performed in the remaining eyes. The mean intraocular pressure (IOP) decreased from 36.3 ± 15.7 mm Hg to 19.6 ± 9.2 mm Hg. Complete success as per the criteria was achieved in 46 eyes (88%). None of the eyes failed to maintain IOP control following AGV implantation. Conclusion: The AGV resulted in effective and sustained control of IOP in cases of adult refractory glaucoma at intermediate follow-up. PMID:19700871

  17. Comparison of the Ahmed and Baerveldt glaucoma shunts with combined cataract extraction.

    PubMed

    Rai, Amrit S; Shoham-Hazon, Nir; Christakis, Panos G; Rai, Amandeep S; Ahmed, Iqbal Ike K

    2018-04-01

    To compare the surgical outcomes of combined phacoemulsification with either Ahmed glaucoma valve (AGV) or Baerveldt glaucoma implant (BGI). Retrospective cohort study. A total of 104 eyes that underwent combined phacoemulsification with either AGV (PhacoAGV; n = 57) or BGI (PhacoBGI; n = 47) implantation. Failure was defined as uncontrolled intraocular pressure (IOP; <5 mm Hg, ≥18 mm Hg, or <20% reduction), additional glaucoma surgery, vision-threatening complications, or progression to no-light-perception vision. The PhacoAGV group was older (p = 0.03), had poorer baseline visual acuity (VA; p = 0.001), and had fewer previous glaucoma surgeries (p = 0.04). Both groups had similar baseline IOP (PhacoAGV: 26.4 ± 8.3 mm Hg; PhacoBGI: 25.7 ± 7.3; p = 0.66) and glaucoma medications (PhacoAGV: 3.8 ± 1.0; PhacoBGI: 3.6 ± 1.5; p = 0.54). At 2 years, failure rates were 44% in the PhacoAGV group and 23% in the PhacoBGI group (p = 0.02). Both groups had similar mean IOP reduction (PhacoAGV: 45%; PhacoBGI: 47%, p = 0.67) and medication use reduction (PhacoAGV: 47%; PhacoBGI: 58%, p = 0.38). The PhacoBGI group had higher IOP and medication use up to 1 month (p < 0.05). Both groups improved in VA from baseline (p < 0.05) and had similar overall complication rates (p = 0.31). The PhacoBGI group required more overall interventions (p < 0.0005). This comparative study found no difference in IOP, glaucoma medications, or complication rates between PhacoAGV and PhacoBGI at 2 years, despite BGIs being implanted in patients at higher risk for failure. The PhacoAGV group had higher failure rates at 2 years. Both groups had significant improvements in VA due to removal of their cataracts. The PhacoBGI group required more interventions, but most of these were minor slit-lamp procedures. Copyright © 2018. Published by Elsevier Inc.

  18. Surgical Outcomes of Additional Ahmed Glaucoma Valve Implantation in Refractory Glaucoma.

    PubMed

    Ko, Sung Ju; Hwang, Young Hoon; Ahn, Sang Il; Kim, Hwang Ki

    2016-06-01

    To evaluate the surgical outcomes of the implantation of an additional Ahmed glaucoma valve (AGV) into the eyes of patients with refractory glaucoma following previous AGV implantation. This study is a retrospective review of the clinical histories of 23 patients who had undergone a second AGV implantation after a failed initial implantation. Age, sex, prior surgery, glaucoma type, number of medications, intraocular pressure (IOP), visual acuity, and surgical complications were analyzed. Surgical success was defined as IOP maintained below 21 mm Hg, with at least a 20% overall reduction in IOP, regardless of the use of IOP-lowering medications. Following the implantation of a second AGV, the mean IOP decreased from 39.3 to 18.5 mm Hg (52.9% reduction, P<0.001). The mean number of postoperative IOP-lowering medications administered decreased from 2.8 to 1.7 after the second AGV implantation (P<0.001). The cumulative probability of success for the procedure was 87% after 1 year and 52% after 3 years. Three patients (13.0%) experienced bullous keratopathy after the second AGV implantation. None of the patients showed any evidence of diplopia or ocular movement limitation as a result of the presence of 2 AGVs in the same eye. Prior trabeculectomy was found to be a significant risk factor for failure (P=0.027). A second AGV implantation can be a good choice of surgical treatment when the first AGV has failed to control IOP.

  19. Surgically Induced Scleral Necrosis in a Patient With Rheumatoid Arthritis After AGV Implantation.

    PubMed

    Kumar, Suresh; Ichhpujani, Parul; Thakur, Sahil

    2018-03-01

    Surgically induced scleral necrosis (SINS) is a rare entity that has till date not been reported in a patient of glaucoma undergoing Ahmed glaucoma valve (AGV) implantation. We present a case of primary open-angle glaucoma who underwent AGV implantation followed by development of scleral necrosis, involving both the scleral patch graft and host sclera. After failure of surgical and medical management, AGV had to be explanted. The patient was diagnosed with rheumatoid arthritis and had to be treated with steroids and azathioprine for the same. SINS is a potentially disastrous complication of ocular surgery that can occur in patients with systemic diseases like rheumatoid arthritis and requires aggressive management to salvage the eye. SINS can occur with AGV implantation. Treatment may require aggressive medical and surgical intervention. It is imperative to evaluate patients for systemic illness before planning an AGV implant.

  20. Five-year Treatment Outcomes in the Ahmed Baerveldt Comparison Study

    PubMed Central

    Budenz, Donald L.; Barton, Keith; Gedde, Steven J.; Feuer, William J.; Schiffman, Joyce; Costa, Vital P.; Godfrey, David G.; Buys, Yvonne M.

    2014-01-01

    Purpose To compare the five year outcomes of the Ahmed FP7 Glaucoma Valve (AGV) and the Baerveldt 101-350 Glaucoma Implant (BGI) for the treatment of refractory glaucoma. Design Multicenter randomized controlled clinical trial. Participants 276 patients, including 143 in the AGV group and 133 in the BGI group. Methods Patients 18 to 85 years of age with previous intraocular surgery or refractory glaucoma and intraocular pressure (IOP) of ≥ 18 mmHg in whom glaucoma drainage implant surgery was planned were randomized to implantation of either an AGV or BGI. Main Outcome Measures IOP, visual acuity, use of glaucoma medications, complications, and failure (IOP > 21 mmHg or not reduced by 20% from baseline, IOP ≤ 5 mmHg, reoperation for glaucoma, removal of implant, or loss of light perception). Results At 5 years, IOP (mean ± SD) was 14.7 ± 4.4 mmHg in the AGV group and 12.7 ± 4.5 mmHg in the BGI group (p = 0.012). The number of glaucoma medications in use at 5 years (mean ± SD) was 2.2 ± 1.4 in the AGV group and 1.8 ± 1.5 in the BGI group (p = 0.28). The cumulative probability of failure during 5 years of follow-up was 44.7% in the AGV group and 39.4% in the BGI group (p = 0.65). The number of subjects failing due to inadequately controlled IOP or reoperation for glaucoma was 46 in the AGV group (80% of AGV failures) and 25 in the BGI group (53% of BGI failures, p=0.003). Eleven AGV eyes (20% of AGV failures) experienced persistent hypotony, explantation of implant, or loss of light perception compared to 22 (47% of failures) in the BGI group. The 5-year cumulative reoperation rate for glaucoma was 20.8% in the AGV group compared to 8.6% in the BGI group (p=0.010). Change in logMAR Snellen visual acuity (mean ± SD) at 5 years was 0.42 ± 0.99 in the AGV group and 0.43 ± 0.84 in the BGI group (p=0.97). Conclusions Similar rates of surgical success were observed with both implants at 5 years. BGI implantation produced greater IOP reduction and a lower rate of glaucoma reoperation than AGV implantation, but BGI implantation was associated with twice as many failures due to safety issues such as persistent hypotony, loss of light perception, or explantation. PMID:25439606

  1. Pars Plana-Modified versus Conventional Ahmed Glaucoma Valve in Patients Undergoing Penetrating Keratoplasty: A Prospective Comparative Randomized Study.

    PubMed

    Parihar, Jitendra Kumar Singh; Jain, Vaibhav Kumar; Kaushik, Jaya; Mishra, Avinash

    2017-03-01

To compare the outcome of pars plana-modified Ahmed glaucoma valve (AGV) versus limbal-based conventional AGV into the anterior chamber, in patients undergoing penetrating keratoplasty (PK) for glaucoma with coexisting corneal disease. In this prospective randomized clinical trial, 58 eyes of 58 patients with glaucoma and coexisting corneal disease were divided into two groups. Group 1 (29 eyes of 29 patients) included patients undergoing limbal-based conventional AGV into the anterior chamber (AC) along with PK, and group 2 (29 eyes of 29 patients) included those undergoing pars plana-modified AGV along with PK. Outcome measures included corneal graft clarity, intraocular pressure (IOP), number of antiglaucoma medications, and postoperative complications. Patients were followed up for a minimum period of 2 years. Of the 58 eyes (58 patients), 50 eyes (50 patients: 25 eyes of 25 patients each in group 1 and group 2) completed the study and were analyzed. Complete success rates for AGV (group 1: 76%; group 2: 72%; p = 0.842) and corneal graft clarity (group 1: 68%; group 2: 76%; p = 0.081) were comparable between the two groups at 2 years. Graft failure was more frequent with conventional AGV (32%) than with pars plana-modified AGV (24%), but the difference was not statistically significant (p = 0.078) at 2 years. Although the two procedures were comparable across outcome measures, pars plana-modified AGV is a viable option for patients undergoing PK, as it provides relatively better corneal graft survival and fewer of the complications associated with conventional AGV.

  2. A Review of the Ahmed Glaucoma Valve Implant and Comparison with Other Surgical Operations.

    PubMed

    Riva, Ivano; Roberti, Gloria; Katsanos, Andreas; Oddone, Francesco; Quaranta, Luciano

    2017-04-01

    The Ahmed glaucoma valve (AGV) is a popular glaucoma drainage implant used for the control of intraocular pressure in patients with glaucoma. While in the past AGV implantation was reserved for glaucoma patients poorly controlled after one or more filtration procedures, mounting evidence has recently encouraged its use as a primary surgery in selected cases. AGV has been demonstrated to be safe and effective in reducing intraocular pressure in patients with primary or secondary refractory glaucoma. Compared to other glaucoma surgeries, AGV implantation has shown favorable efficacy and safety. The aim of this article is to review the results of studies directly comparing AGV with other surgical procedures in patients with glaucoma.

  3. Comparative Analysis of 2-D Versus 3-D Ultrasound Estimation of the Fetal Adrenal Gland Volume and Prediction of Preterm Birth

    PubMed Central

    Turan, Ozhan M.; Turan, Sifa; Buhimschi, Irina A.; Funai, Edmund F.; Campbell, Katherine H.; Bahtiyar, Ozan M.; Harman, Chris R.; Copel, Joshua A.; Baschat, Ahmet A; Buhimschi, Catalin S.

    2013-01-01

    Objective We aimed to test the hypothesis that 2D fetal AGV measurements offer volume estimates similar to calculations based on the 3D technique. Methods Fetal AGV was estimated by 3D ultrasound (VOCAL) in 93 women with signs/symptoms of preterm labor and 73 controls. Fetal AGV was also calculated using an ellipsoid formula derived from 2D measurements of the same volume blocks (0.523 × length × width × depth). Comparisons were performed by intra-class correlation coefficient (ICC), coefficient of repeatability, and the Bland-Altman method. The corrected AGV (cAGV = AGV/fetal weight) was calculated for both methods and compared for prediction of preterm birth (PTB) within 7 days. Results Among 168 volumes, there was a significant correlation between the 3D and 2D methods (ICC = 0.979 [95% CI: 0.971-0.984]). The coefficient of repeatability for the 3D method was superior to that of the 2D method (intra-observer 3D: 30.8, 2D: 57.6; inter-observer 3D: 12.2, 2D: 15.6). Based on 2D calculations, a cAGV ≥433 mm3/kg was best for prediction of PTB (sensitivity: 75% [95% CI: 59-87]; specificity: 89% [95% CI: 82-94]). Sensitivity and specificity for the 3D cAGV (cut-off ≥420 mm3/kg) were 85% (95% CI: 70-94) and 95% (95% CI: 90-98), respectively. In receiver operating characteristic curve analysis, 3D cAGV was superior to 2D cAGV for prediction of PTB (z = 1.99, p = 0.047). Conclusion 2D volume estimation of the fetal adrenal gland using an ellipsoid formula cannot replace 3D AGV calculation for prediction of PTB. PMID:22644825
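The 2D ellipsoid estimate and the weight-corrected cAGV described in this record are simple arithmetic. A minimal sketch (Python; the measurements and fetal weight below are hypothetical illustrative inputs, not data from the study) might look like:

```python
def ellipsoid_volume_mm3(length_mm, width_mm, depth_mm):
    """2D ellipsoid estimate of adrenal gland volume: 0.523 * L * W * D."""
    return 0.523 * length_mm * width_mm * depth_mm

def corrected_agv(volume_mm3, fetal_weight_kg):
    """cAGV: adrenal gland volume normalized by estimated fetal weight (mm^3/kg)."""
    return volume_mm3 / fetal_weight_kg

# Hypothetical gland measurements (mm) and estimated fetal weight (kg)
vol = ellipsoid_volume_mm3(18.0, 10.0, 9.0)   # 0.523 * 1620 = 847.26 mm^3
cagv = corrected_agv(vol, 1.9)                # ~445.9 mm^3/kg
# Abstract's 2D cut-off: cAGV >= 433 mm^3/kg flags elevated PTB risk
at_risk = cagv >= 433
```

With these made-up inputs the cAGV lands just above the 2D cutoff, so the eye would be flagged; the abstract's point is that the 3D (VOCAL) version of the same calculation discriminates better.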

  4. Three-year Treatment Outcomes in the Ahmed Baerveldt Comparison Study

    PubMed Central

    Barton, Keith; Feuer, William J; Budenz, Donald L; Schiffman, Joyce; Costa, Vital P.; Godfrey, David G.; Buys, Yvonne M.

    2014-01-01

    Purpose To compare three-year outcomes and complications of the Ahmed FP7 Glaucoma Valve (AGV) and Baerveldt 101–350 Glaucoma Implant (BGI) for the treatment of refractory glaucoma. Design Multicenter randomized controlled clinical trial. Participants 276 patients; 143 in the AGV group and 133 in the BGI group. Methods Patients aged 18–85 years with refractory glaucoma and intraocular pressure (IOP) ≥18 mmHg in whom an aqueous shunt was planned were randomized to either an AGV or a BGI. Main Outcome Measures IOP, visual acuity, supplemental medical therapy, complications, and failure (IOP > 21 mmHg or not reduced by 20% from baseline, IOP ≤ 5 mmHg, reoperation for glaucoma or removal of implant, or loss of light perception vision). Results At 3 years, IOP (mean ± standard deviation [SD]) was 14.3 ± 4.7 mmHg in the AGV group and 13.1 ± 4.5 mmHg in the BGI group (p = 0.086), on 2.0 ± 1.4 and 1.5 ± 1.4 glaucoma medications, respectively (p = 0.020). The cumulative probabilities of failure were 31.3% (standard error [SE] = 4.0%) in the AGV group and 32.3% (SE = 4.2%) in the BGI group (p = 0.99). Postoperative complications associated with reoperation or vision loss of ≥2 Snellen lines occurred in 24 patients (22%) in the AGV group and 38 patients (36%) in the BGI group (p = 0.035). The mean change in logarithm of the minimum angle of resolution visual acuity (logMAR VA) at 3 years was similar in the two treatment groups (AGV: 0.21 ± 0.88; BGI: 0.26 ± 0.74; p = 0.66). The cumulative proportion of patients (SE) undergoing reoperation for glaucoma before the three-year postoperative time point was 14.5% (3.0%) in the AGV group compared with 7.6% (2.4%) in the BGI group (p = 0.053, log-rank). The relative risk of reoperation for glaucoma in the AGV group was 2.1 times that of the BGI group (95% confidence interval: 1.0–4.8; p = 0.045; Cox proportional hazards regression). 
Conclusions AGV implantation was associated with the need for significantly more adjunctive medication to achieve equal success relative to BGI implantation, and carried a greater relative risk of reoperation for glaucoma. More subjects experienced serious postoperative complications in the BGI group than in the AGV group. PMID:24768240

  5. Treatment Outcomes in the Ahmed Baerveldt Comparison Study after One Year of Follow-up

    PubMed Central

    Budenz, Donald L; Barton, Keith; Feuer, William J; Schiffman, Joyce; Costa, Vital P.; Godfrey, David G.; Buys, Yvonne

    2010-01-01

    Purpose To determine the relative efficacy and complications of the Ahmed FP7 Glaucoma Valve (AGV) and the Baerveldt 101–350 Glaucoma Implant (BGI) in refractory glaucoma. Design Multicenter randomized controlled clinical trial. Participants 276 patients, including 143 in the AGV group and 133 in the BGI group. Methods Patients aged 18–85 years with refractory glaucoma with intraocular pressure (IOP) greater than or equal to 18 mm Hg in whom an aqueous shunt was planned were randomized to undergo implantation of either an AGV or a BGI. Main Outcome Measures Primary outcome was failure, defined as IOP > 21 mm Hg or not reduced by 20%, IOP ≤ 5 mm Hg, reoperation for glaucoma or removal of implant, or loss of light perception vision. Secondary outcomes included mean IOP, visual acuity, use of supplemental medical therapy, and complications. Results Preoperative IOP (mean ± standard deviation, SD) was 31.2 ± 11.2 in the AGV group and 31.8 ± 12.5 in the BGI group (p = 0.71). At 1 year, IOP was 15.4 ± 5.5 mm Hg in the AGV group and 13.2 ± 6.8 mm Hg in the BGI group (p = 0.007). The number of glaucoma medications (mean ± SD) was 1.8 ± 1.3 in the AGV group and 1.5 ± 1.4 in the BGI group (p = 0.071). The cumulative probability of failure was 16.4% (standard error, SE = 3.1%) in the AGV group and 14.0% (SE = 3.1%) in the BGI group at 1 year (p = 0.52). More patients experienced early postoperative complications in the BGI group (n = 77, 58%) compared to the AGV group (n = 61, 43%, p = 0.016). Serious postoperative complications associated with reoperation and/or vision loss of ≥ 2 Snellen lines occurred in 29 patients (20%) in the AGV group and 45 patients (34%) in the BGI group (p = 0.014). Conclusions Although the average IOP after one year was slightly higher in patients who received an AGV, there were fewer early and serious postoperative complications associated with the use of the AGV than the BGI. PMID:20932583

  6. Intelligent Transportation Systems: Automated Guided Vehicle Systems in Changing Logistics Environments

    NASA Astrophysics Data System (ADS)

    Schulze, L.; Behling, S.; Buhrs, S.

    2008-06-01

    The usage of Automated Guided Vehicle Systems (AGVS) is growing. This has not always been the case: record sales numbers are the result of inventive developments, new applications, and modern thinking. One market that AGVS have not yet been able to conquer thoroughly is rapidly changing logistics environments, where the advantages of AGVS in recurrent transportation used to be outweighed by the need for flexibility. When managers today talk about Flexible Manufacturing Systems (FMS), there is no reason not to consider AGVS: fixed guide paths, permanent transfer stations, and static routes are no longer a necessity for most AGVS producers, and FMS can raise profitability with AGVS. When robots start saving billions in production costs, the next step at the same plants is automated materials handling. Today, there are hundreds of instances of computer-controlled systems designed to handle and transport materials, many of which have replaced conventional human-driven platform trucks. Reduced costs due to fewer damages and failures, tracking and tracing, and improved production scheduling, on top of lower personnel needs, are only some of the advantages.

  7. Intelligence Level Performance Standards Research for Autonomous Vehicles.

    PubMed

    Bostelman, Roger B; Hong, Tsai H; Messina, Elena

    2015-01-01

    United States and European safety standards have evolved to protect workers near Automatic Guided Vehicles (AGVs). However, performance standards for AGVs and mobile robots have only recently begun development. Lessons can be learned from research and standards efforts for mobile robots applied to emergency response and military applications. Research challenges, tests and evaluations, and programs to develop higher intelligence levels for vehicles can also be used to guide industrial AGV development towards more adaptable and intelligent systems. These other efforts also provide useful criteria for developing AGV performance test methods. Current standards areas being considered for AGVs are docking, navigation, obstacle avoidance, and the ground-truth systems that measure performance. This paper provides a look to the future of standards development in both the performance of vehicles and the dynamic perception systems that measure intelligent vehicle performance.

  8. Trabeculectomy with Ex-PRESS implant versus Ahmed glaucoma valve implantation-a comparative study

    PubMed Central

    Waisbourd, Michael; Fischer, Naomi; Shalev, Hadas; Spierer, Oriel; Ben Artsi, Elad; Rachmiel, Rony; Shemesh, Gabi; Kurtz, Shimon

    2016-01-01

    AIM To compare the surgical outcomes of trabeculectomy with Ex-PRESS implant and Ahmed glaucoma valve (AGV) implantation. METHODS Patients who underwent trabeculectomy with Ex-PRESS implant or AGV implantation were included in this retrospective chart review. Main outcome measures were surgical failure and complications. Failure was defined as intraocular pressure (IOP) >21 mm Hg or <5 mm Hg on two consecutive visits after 3 months, reoperation for glaucoma, or loss of light perception. Eyes that had not failed were considered a complete success if they did not require supplemental medical therapy. RESULTS A total of 64 eyes from 57 patients were included: 31 eyes in the Ex-PRESS group and 33 eyes in the AGV group. The mean follow-up time was 2.6±1.1y and 3.3±1.6y, respectively. Patients in the AGV group had significantly higher baseline mean IOP (P=0.005), lower baseline mean visual acuity (VA) (P=0.02), and a higher proportion of patients with a history of previous trabeculectomy (P<0.0001). Crude failure rates were 16.1% (n=5/31) in the Ex-PRESS group and 24.2% (n=8/33) in the AGV group. The cumulative proportion of failure was similar between the groups (P=0.696). The proportion of eyes that experienced postoperative complications was 32.3% in the Ex-PRESS group and 60.1% in the AGV group (P=0.0229). CONCLUSION Trabeculectomy with Ex-PRESS implant and AGV implantation had comparable failure rates. The AGV group had more postoperative complications, but also included more complex cases with higher baseline mean IOP, worse baseline mean VA, and more previous glaucoma surgeries. Therefore, the results are limited to the cohort included in this study. PMID:27803857

  9. Ahmed glaucoma valve in uveitic patients with fluocinolone acetonide implant-induced glaucoma: 3-year follow-up.

    PubMed

    Kubaisi, Buraa; Maleki, Arash; Ahmed, Aseef; Lamba, Neel; Sahawneh, Haitham; Stephenson, Andrew; Montieth, Alyssa; Topgi, Shobha; Foster, C Stephen

    2018-01-01

    To evaluate the efficacy and safety of the Ahmed glaucoma valve (AGV) in eyes with noninfectious uveitis that had fluocinolone acetonide intravitreal implant (Retisert™)-induced glaucoma. This retrospective study reviewed the safety and efficacy of AGV implantation in patients with persistently elevated intraocular pressure (IOP) after implantation of a fluocinolone acetonide intravitreal implant at the Massachusetts Eye Research and Surgery Institution between August 2006 and November 2015. Nine patients with 10 uveitic eyes were included in this study, none of which had preexisting glaucoma in the study eye. Mean patient age was 42 years; 6 patients were female and 3 were male. Baseline mean IOP was 30.6 mmHg prior to AGV placement, and the mean number of IOP-lowering medications was 2.9. There was a statistically significant reduction in IOP after AGV implantation, with IOP lowest at 1 week after implantation (9.0 mmHg). Nine out of 10 eyes achieved an IOP below the target value of 22 mmHg and/or a 20% reduction in IOP from baseline at 1 month and 1 year following AGV placement; at all other postoperative time points, all 10 eyes reached this goal. A statistically significant decrease in IOP-lowering medication was seen at the 1-week, 1-month, and 3-year time points compared to baseline, while a statistically significant increase was seen at the 3-month, 6-month, and 2-year post-AGV time points. No significant change in retinal nerve thickness or visual field analysis was found. The AGV is an effective and safe treatment for fluocinolone acetonide intravitreal implant-induced glaucoma; a high survival rate can be expected for at least 3 years.

  10. Flow Test to Predict Early Hypotony and Hypertensive Phase After Ahmed Glaucoma Valve (AGV) Surgical Implantation.

    PubMed

    Cheng, Jason; Beltran-Agullo, Laura; Buys, Yvonne M; Moss, Edward B; Gonzalez, Johanna; Trope, Graham E

    2016-06-01

    To assess the validity of a preimplantation flow test to predict early hypotony [intraocular pressure (IOP) ≤5 mm Hg on 2 consecutive visits] and the hypertensive phase (HP; IOP >21 mm Hg) after Ahmed Glaucoma Valve (AGV) implantation. Prospective interventional study on patients receiving an AGV. A preimplantation flow test using a gravity-driven reservoir and an open manometer was performed on all AGVs. Opening pressure (OP) and closing pressure (CP) were defined as the pressures at which fluid was seen to start or stop flowing through the AGV, respectively. OP and CP were measured twice per AGV. Patients were followed for 12 weeks. In total, 20 eyes from 19 patients were enrolled. At 12 weeks the mean IOP decreased from 29.2±9.1 to 16.8±5.2 mm Hg (P<0.01). The mean AGV OP was 17.5±5.4 mm Hg and the mean CP was 6.7±2.3 mm Hg. Early (within 2 wk postoperative) HP occurred in 37% and hypotony in 16% of cases. An 18 mm Hg cutoff for the OP gave a sensitivity of 0.71, specificity of 0.83, positive predictive value of 0.71, and negative predictive value of 0.83 for predicting an early HP. A 7 mm Hg cutoff for the CP yielded a sensitivity of 1.0, specificity of 0.38, positive predictive value of 0.23, and negative predictive value of 1.0 for predicting hypotony. Preoperative OP and CP may predict early hypotony or HP and may be used as a guide to which AGV valves to discard before implantation surgery.
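The predictive values quoted in this record follow directly from sensitivity, specificity, and the observed event rates. A small sketch (Python; the function is ours, the inputs are the rates reported in the abstract) reproduces them via Bayes' rule on a 2x2 table of rates:

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from test characteristics
    and the event rate (all quantities expressed as rates, not counts)."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Hypertensive phase: 37% event rate, OP cutoff 18 mm Hg
ppv_hp, npv_hp = ppv_npv(0.71, 0.83, 0.37)   # ~0.71 and ~0.83, as reported
# Hypotony: 16% event rate, CP cutoff 7 mm Hg
ppv_hy, npv_hy = ppv_npv(1.0, 0.38, 0.16)    # ~0.235 and 1.0 (abstract: 0.23, 1.0)
```

The small discrepancy for the hypotony PPV (0.235 vs the reported 0.23) is expected, since the study's values come from integer eye counts in a 20-eye cohort rather than exact rates.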

  11. Ahmed glaucoma valve in uveitic patients with fluocinolone acetonide implant-induced glaucoma: 3-year follow-up

    PubMed Central

    Kubaisi, Buraa; Maleki, Arash; Ahmed, Aseef; Lamba, Neel; Sahawneh, Haitham; Stephenson, Andrew; Montieth, Alyssa; Topgi, Shobha; Foster, C Stephen

    2018-01-01

    Purpose To evaluate the efficacy and safety of the Ahmed glaucoma valve (AGV) in eyes with noninfectious uveitis that had fluocinolone acetonide intravitreal implant (Retisert™)-induced glaucoma. Methods This retrospective study reviewed the safety and efficacy of AGV implantation in patients with persistently elevated intraocular pressure (IOP) after implantation of a fluocinolone acetonide intravitreal implant at the Massachusetts Eye Research and Surgery Institution between August 2006 and November 2015. Results Nine patients with 10 uveitic eyes were included in this study, none of which had preexisting glaucoma in the study eye. Mean patient age was 42 years; 6 patients were female and 3 were male. Baseline mean IOP was 30.6 mmHg prior to AGV placement, and the mean number of IOP-lowering medications was 2.9. There was a statistically significant reduction in IOP after AGV implantation, with IOP lowest at 1 week after implantation (9.0 mmHg). Nine out of 10 eyes achieved an IOP below the target value of 22 mmHg and/or a 20% reduction in IOP from baseline at 1 month and 1 year following AGV placement; at all other postoperative time points, all 10 eyes reached this goal. A statistically significant decrease in IOP-lowering medication was seen at the 1-week, 1-month, and 3-year time points compared to baseline, while a statistically significant increase was seen at the 3-month, 6-month, and 2-year post-AGV time points. No significant change in retinal nerve thickness or visual field analysis was found. Conclusion The AGV is an effective and safe treatment for fluocinolone acetonide intravitreal implant-induced glaucoma; a high survival rate can be expected for at least 3 years. PMID:29750012

  12. Dynamic tube movement after reimplantation of Ahmed glaucoma valve in a child with glaucoma in aphakia

    PubMed Central

    Senthil, Sirisha; Badakare, Akshay

    2014-01-01

    A 10-year-old girl underwent Ahmed glaucoma valve (AGV) implantation as a primary procedure for glaucoma in aphakia following congenital cataract surgery. After accidental excision of the AGV tube during bleb revision for the hypertensive phase, the AGV was explanted and a second AGV was implanted in the same quadrant 2 weeks later. This resulted in a rare complication: dynamic tube movement in the anterior chamber with tube-corneal touch and localised corneal oedema. Excision of the offending unstable tube and placement of a paediatric AGV in a different quadrant led to resolution of the complication, stable vision, and well-controlled intraocular pressure. This case highlights the possible causes of a dynamic tube, the related complications, and their management, as well as the importance of understanding the various physiological phases after glaucoma drainage device implantation and managing each appropriately. PMID:24695662

  13. Accurate method for preoperative estimation of the right graft volume in adult-to-adult living donor liver transplantation.

    PubMed

    Khalaf, H; Shoukri, M; Al-Kadhi, Y; Neimatallah, M; Al-Sebayel, M

    2007-06-01

    Accurate estimation of graft volume is crucial to avoid small-for-size syndrome following adult-to-adult living donor liver transplantation (AALDLT). Herein, we combined radiological and mathematical approaches for preoperative assessment of right graft volume. The right graft volume was preoperatively estimated in 31 live donors using two methods: first, the radiological graft volume (RGV) by computed tomography (CT) volumetry, and second, a calculated graft volume (CGV) obtained by multiplying the standard liver volume by the percentage of the right graft volume (given by CT). Both methods were compared to the actual graft volume (AGV) measured during surgery. The graft-to-recipient weight ratio (GRWR) was also calculated using all three volumes (RGV, CGV, and AGV). Lin's concordance correlation coefficient (CCC) was used to assess the agreement between AGV and both RGV and CGV; this was repeated using the GRWR measurements. The mean percentage of right graft volume was 62.4% (range, 55%-68%; SD +/- 3.27%). The CCC between AGV and RGV versus CGV was 0.38 and 0.66, respectively. The CCC between GRWR using AGV and RGV versus CGV was 0.63 and 0.88, respectively (P < .05). According to the Landis and Koch benchmark, CGV correlated better with AGV than did RGV, and the better correlation became even more apparent when applied to GRWR. In our experience, CGV showed a better correlation with AGV than did RGV. Using CGV in conjunction with RGV may allow a more accurate estimation of right graft volume for AALDLT.
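The CGV described in this record is a single multiplication and the GRWR a simple ratio. A hedged sketch (Python; the standard liver volume, graft fraction, and recipient weight below are hypothetical illustrative inputs, and a graft density of ~1 g/mL is assumed for the weight conversion) might be:

```python
def calculated_graft_volume(standard_liver_volume_ml, right_graft_fraction):
    """CGV: standard liver volume scaled by the CT-derived right-graft fraction."""
    return standard_liver_volume_ml * right_graft_fraction

def grwr_percent(graft_volume_ml, recipient_weight_kg):
    """Graft-to-recipient weight ratio (%), assuming graft density ~1 g/mL."""
    graft_weight_kg = graft_volume_ml / 1000.0
    return graft_weight_kg / recipient_weight_kg * 100.0

# Hypothetical case: SLV 1400 mL, right graft fraction 0.624 (the study's mean
# percentage), 70 kg recipient
cgv = calculated_graft_volume(1400.0, 0.624)   # 873.6 mL
ratio = grwr_percent(cgv, 70.0)                # ~1.25 %
```

A GRWR below roughly 0.8% is the commonly cited threshold for small-for-size risk, which is why the accuracy of the volume estimate feeding this ratio matters.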

  14. Early Ahmed Glaucoma Valve Implantation after Penetrating Keratoplasty Leads to Better Outcomes in an Asian Population with Preexisting Glaucoma

    PubMed Central

    Tai, Ming-Cheng; Chen, Yi-Hao; Cheng, Jen-Hao; Liang, Chang-Min; Chen, Jiann-Torng; Chen, Ching-Long; Lu, Da-Wen

    2012-01-01

    Background To evaluate the efficacy of Ahmed Glaucoma Valve (AGV) surgery and the optimal interval between penetrating keratoplasty (PKP) and AGV implantation in a population of Asian patients with preexisting glaucoma who underwent PKP. Methodology/Principal Findings In total, 45 eyes of 45 patients were included in this retrospective chart review. The final intraocular pressures (IOPs), graft survival rate, and changes in visual acuity were assessed to compare outcomes between eyes in which AGV implantation occurred within 1 month of post-PKP IOP elevation (Group 1) and eyes in which AGV implantation took place more than 1 month after the post-PKP IOP elevation (Group 2). Factors associated with graft failure were analyzed, and the overall patterns of complications were reviewed. By their final follow-up visits, 58% of the patients had been successfully treated for glaucoma. After the operation, there was no statistically significant difference between the groups with respect to graft survival (p = 0.98), but significant differences in IOP control (p = 0.049) and the maintenance of visual acuity (VA) (p < 0.05) were observed. One year after surgery, the success rates of IOP control in Group 1 and Group 2 were 80% and 46.7%, respectively, and these rates fell to 70% and 37.3%, respectively, by 2 years. Factors associated with a high risk of AGV failure were a diagnosis of preexisting angle-closure glaucoma, a history of previous PKP, and a preoperative IOP >21 mm Hg. The most common surgical complication, aside from graft failure, was hyphema. Conclusions/Significance Early AGV implantation results in a higher probability of AGV survival and a better VA outcome without increasing the risk of corneal graft failure as a result of post-PKP glaucoma drainage tube implantation. PMID:22629464

  15. Changes in Corneal Endothelial Cell after Ahmed Glaucoma Valve Implantation and Trabeculectomy: 1-Year Follow-up.

    PubMed

    Kim, Min Su; Kim, Kyoung Nam; Kim, Chang-Sik

    2016-12-01

    To compare changes in corneal endothelial cell density (CECD) after Ahmed glaucoma valve (AGV) implantation and trabeculectomy. Changes in the corneal endothelium of patients who underwent AGV implantation or trabeculectomy were prospectively evaluated. Corneal specular microscopy was performed at the central cornea using a non-contact specular microscope before surgery and 6 and 12 months after surgery. The CECD, hexagonality of the endothelial cells, and the coefficient of variation of cell areas were compared between the two groups. Forty eyes of 40 patients with AGV implantation and 28 eyes of 28 patients with trabeculectomy were studied. Intraocular pressure in the AGV implantation group was significantly higher than that in the trabeculectomy group (p < 0.001), but there were no significant differences in other clinical variables between the two groups. In the AGV implantation group, the mean CECD significantly decreased by 9.4% at 6 months and 12.3% at 12 months compared with baseline values (both p < 0.001), while it decreased by 1.9% at 6 months and 3.2% at 12 months in the trabeculectomy group (p = 0.027 and p = 0.015, respectively). The changes at 6 and 12 months in the AGV implantation group were significantly greater than those in the trabeculectomy group (p = 0.030 and p = 0.027, respectively). In the AGV implantation group, there was a significant decrease in CECD between baseline and 6 months and between 6 and 12 months (p < 0.001 and p = 0.005, respectively); in the trabeculectomy group, a significant decrease was observed only between baseline and 6 months (p = 0.027). Both groups showed statistically significant decreases in CECD 1 year after surgery, but the decrease in the AGV implantation group was greater and persisted longer than that in the trabeculectomy group.

  16. Comparison of Ahmed glaucoma valve implantation and trabeculectomy for glaucoma: a systematic review and meta-analysis.

    PubMed

    HaiBo, Tan; Xin, Kang; ShiHeng, Lu; Lin, Liu

    2015-01-01

    To compare the efficacy and safety of Ahmed glaucoma valve (AGV) implantation with trabeculectomy in the management of glaucoma patients. A comprehensive literature search (PubMed, Embase, Google, and the Cochrane Library) was performed, including a systematic review with meta-analysis of controlled clinical trials comparing AGV versus trabeculectomy. Efficacy estimates were the weighted mean differences (WMDs) for the percentage intraocular pressure reduction (IOPR%) from baseline to end-point, the reduction in glaucoma medications, and the odds ratios (ORs) for complete and qualified success rates. Safety estimates were the relative risks (RRs) for adverse events. All outcomes were reported with a 95% confidence interval (CI). Statistical analysis was performed using the RevMan 5.0 software. Six controlled clinical trials were included in this meta-analysis. There was no significant difference between AGV and trabeculectomy in the IOPR% (WMD = -3.04; 95% CI: -8.36 to 2.26; P = 0.26). The pooled ORs comparing AGV with trabeculectomy were 0.46 (0.22-0.99) for the complete success rate (P = 0.05) and 0.97 (0.78-1.20) for the qualified success rate (P = 0.76). No significant difference in the reduction in glaucoma medications was observed (WMD = 0.24; 95% CI: -0.27 to 0.76; P = 0.35). AGV was found to be associated with a significantly lower frequency of all adverse events than trabeculectomy (RR = 0.71; P = 0.001), while the most common complications did not differ significantly (all P > 0.05). AGV was equivalent to trabeculectomy in reducing IOP and the number of glaucoma medications, in success rates, and in rates of the most common complications; however, AGV was associated with a significantly lower frequency of overall adverse events.
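The pooled WMDs and ORs in this record come from a standard random-effects meta-analysis. A minimal DerSimonian-Laird sketch (Python; the per-study estimates and variances below are hypothetical, not the six trials actually pooled, and RevMan's implementation includes refinements this sketch omits) illustrates the mechanics:

```python
def dersimonian_laird(estimates, variances):
    """Random-effects pooling: inverse-variance weights with the
    DerSimonian-Laird estimate of between-study variance (tau^2)."""
    k = len(estimates)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    # Cochran's Q heterogeneity statistic around the fixed-effect mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight by within-study plus between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, se, tau2

# Hypothetical per-study WMDs in IOPR% (AGV minus trabeculectomy) and variances
wmds = [-4.1, -1.2, -6.0, 0.8, -3.5, -2.9]
variances = [4.0, 6.5, 5.1, 7.2, 3.8, 5.5]
pooled, se, tau2 = dersimonian_laird(wmds, variances)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

If the resulting 95% CI spans zero, the pooled WMD is non-significant, which is the situation the abstract reports for IOPR%.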

  17. Ahmed Versus Baerveldt Glaucoma Drainage Implantation in Patients With Markedly Elevated Intraocular Pressure (≥30 mm Hg).

    PubMed

    Resende, Arthur F; Moster, Marlene R; Patel, Neal S; Lee, Daniel; Dhami, Hermandeep; Pro, Michael J; Waisbourd, Michael

    2016-09-01

    Glaucoma patients with markedly elevated intraocular pressure (IOP) are at risk of developing severe hypotony-related complications. The goal of this study was to compare the surgical outcomes of the Ahmed Glaucoma Valve (AGV) and the Baerveldt Glaucoma Implant (BGI) in this patient population. Patients with preoperative IOP ≥30 mm Hg were included. Outcome measures were: (1) surgical failure (IOP >21 mm Hg or <30% reduction from baseline, or IOP ≤5 mm Hg, on 2 consecutive follow-up visits after 3 months; or additional glaucoma surgery; or loss of light perception) and (2) surgical complications. A total of 75 patients were included: 37 in the AGV group and 38 in the BGI group. The mean ± SD follow-up was 2.3 ± 1.6 years for the AGV group and 2.4 ± 1.7 years for the BGI group (P = 0.643). Mean preoperative IOP was 38.7 ± 6.5 mm Hg for the AGV group and 40.8 ± 7.6 mm Hg for the BGI group. At the last follow-up, 10 (27.0%) patients had failed in the AGV group compared with 6 (15.8%) patients in the BGI group (P = 0.379). The BGI group had a higher rate of flat or shallow anterior chamber (n = 4, 10%) compared with the AGV group (n = 0, 0%) (P = 0.043). Failure rates of AGV and BGI in patients with IOP ≥30 mm Hg were comparable. There were more early hypotony-related complications in the BGI group; however, none were vision threatening. Both glaucoma drainage implants were effective in treating patients with uncontrolled glaucoma in an emergency setting.

  18. Wound Dehiscence and Device Migration after Subconjunctival Bevacizumab Injection with Ahmed Glaucoma Valve Implantation.

    PubMed

    Miraftabi, Arezoo; Nilforushan, Naveed

    2016-01-01

    To report a complication pertaining to subconjunctival bevacizumab injection as an adjunct to Ahmed Glaucoma Valve (AGV) implantation. A 54-year-old woman with history of complicated cataract surgery was referred for advanced intractable glaucoma. AGV implantation with adjunctive subconjunctival bevacizumab (1.25 mg) was performed with satisfactory results during the first postoperative week. However, 10 days after surgery, she developed wound dehiscence and tube exposure. The second case was a 33-year-old man with history of congenital glaucoma and uncontrolled IOP who developed AGV exposure and wound dehiscence after surgery. In both cases, for prevention of endophthalmitis and corneal damage by the unstable tube, the shunt was removed and the conjunctiva was re-sutured. The potential adverse effect of subconjunctival bevacizumab injection on wound healing should be considered in AGV surgery.

  19. Wound Dehiscence and Device Migration after Subconjunctival Bevacizumab Injection with Ahmed Glaucoma Valve Implantation

    PubMed Central

    Miraftabi, Arezoo; Nilforushan, Naveed

    2016-01-01

    Purpose: To report a complication pertaining to subconjunctival bevacizumab injection as an adjunct to Ahmed Glaucoma Valve (AGV) implantation. Case Report: A 54-year-old woman with history of complicated cataract surgery was referred for advanced intractable glaucoma. AGV implantation with adjunctive subconjunctival bevacizumab (1.25 mg) was performed with satisfactory results during the first postoperative week. However, 10 days after surgery, she developed wound dehiscence and tube exposure. The second case was a 33-year-old man with history of congenital glaucoma and uncontrolled IOP who developed AGV exposure and wound dehiscence after surgery. In both cases, for prevention of endophthalmitis and corneal damage by the unstable tube, the shunt was removed and the conjunctiva was re-sutured. Conclusion: The potential adverse effect of subconjunctival bevacizumab injection on wound healing should be considered in AGV surgery. PMID:27195095

  20. Comparison of the Ahmed glaucoma valve with the Baerveldt glaucoma implant: a meta-analysis.

    PubMed

    Wang, Yi-Wen; Wang, Ping-Bao; Zeng, Chao; Xia, Xiao-Bo

    2015-10-13

This study aims to compare the efficacy and safety of the Ahmed glaucoma valve (AGV) with the Baerveldt glaucoma implant (BGI) in glaucoma patients. Databases were searched to identify studies that met pre-stated inclusion criteria, involving randomized controlled clinical trials (RCTs) and non-randomized controlled clinical trials. Treatment effect was analyzed using a random-effect model. Ten controlled clinical trials (1048 eyes) were analyzed, involving two RCTs and eight retrospective comparative studies. Short-term results (6-18 months) and long-term results (>18 months) were analyzed separately. There was no significant difference in the success rate for short-term follow-up between the AGV and BGI groups (5 studies, 714 eyes, odds ratio [OR]: 0.97; 95% confidence interval [CI]: 0.56, 1.66; P = 0.90). For long-term pooled results (7 studies, 835 eyes), the success rate of AGVs was lower than that of BGIs (OR: 0.73; 95% CI: 0.54, 0.99; P = 0.04). However, subgroup and sensitivity analyses did not show a significant difference in the success rate between the two groups (P ≥0.05). The AGV group had a higher mean intraocular pressure than the BGI group in short-term (6 studies, 685 eyes, weighted mean difference [WMD]: 2.12 mmHg; 95% CI: 0.72, 3.52; P <0.05) and long-term pooled results (7 studies, 659 eyes, WMD: 1.85 mmHg; 95% CI: 0.43, 3.28; P = 0.01). The BGI group required fewer glaucoma medications after implantation than the AGV group in both follow-up periods (all P <0.05). The AGV was associated with a significantly lower frequency of total complications (8 studies, 971 eyes, OR: 0.67; 95% CI: 0.50, 0.90; P = 0.007) and severe complications (8 studies, 971 eyes, OR: 0.57; 95% CI: 0.36, 0.91; P = 0.02) than the BGI. The study showed no significant difference in success rate between the two groups. The BGI was more effective for control of intraocular pressure and required fewer medications than the AGV, but the AGV had a lower incidence of total and severe complications than the BGI.

  1. Ahmed glaucoma valve in eyes with preexisting episcleral encircling element.

    PubMed

    Choudhari, Nikhil Shreeram; George, Ronnie; Shantha, Balekudaru; Neog, Aditya; Tripathi, Shweta; Srinivasan, Bhaskar; Vijaya, Lingam

    2014-05-01

To describe the use of the Ahmed glaucoma valve (AGV) in the management of intractable glaucoma in eyes with a preexisting episcleral encircling element. This is a retrospective, consecutive, noncomparative study. The study included 12 eyes of 12 patients with a preexisting episcleral encircling element that underwent implantation of a silicone AGV to treat intractable glaucoma during January 2009 to September 2010. The mean patient age was 25.6 (standard deviation 17.1) years. Five (41.6%) patients were monocular. The indications for AGV were varied. The mean duration between placement of the episcleral encircling element and implantation of the AGV was 30.5 (33.8) months. The mean follow-up was 37.4 (22.9) weeks. Preoperatively, the mean intraocular pressure (IOP) was 31.4 (7.9) mmHg and the mean number of antiglaucoma medications was 2.8. At the final postoperative follow-up, the mean IOP was 12.5 (3.5) mmHg and the mean number of antiglaucoma medications was 0.8 (P < 0.001). Complications observed over the follow-up period included corneal graft failure in three eyes, tube erosion in two eyes, and rhegmatogenous retinal detachment in one eye. The AGV is an effective option in the management of intractable glaucoma in eyes with a preexisting episcleral encircling element, keeping in mind the possibility of significant postoperative complications.

  2. Two-year survival of Ahmed valve implantation in the first 2 years of life with and without intraoperative mitomycin-C.

    PubMed

    Al-Mobarak, Faisal; Khan, Arif O

    2009-10-01

To evaluate the effect of intraoperative mitomycin-C (MMC) on polypropylene Ahmed glaucoma valve (AGV) survival 2 years after implantation during the first 2 years of life. Retrospective institutional comparative series (1995-2005). Thirty-one eyes of 27 patients (23 unilateral, 4 bilateral; 16 boys, 11 girls) underwent AGV implantation at a mean age of 11.1 months (standard deviation [SD], 5.46), all with 2 years of regular postoperative follow-up. MMC was applied intraoperatively in the area of AGV implantation in 16 eyes (52%) and not applied in 15 (48%); its use reflected the practice of surgeons who routinely applied MMC for all AGV implantations in young children. Failure was defined as intraocular pressure (IOP) > 22 mmHg with or without glaucoma medications, the need for an additional procedure for IOP control, or the occurrence of significant complications (e.g., endophthalmitis, retinal detachment, persistent hypotony [IOP < 5 mmHg]); survival was the absence of failure, and failure or significant complications as defined were the main outcome measures. Mean survival for the non-MMC eyes (22.15 months; standard error [SE], 1.93) was significantly longer than for the MMC eyes (16.25 months; SE, 2.17) by the log-rank test (P = 0.025). Cumulative survival at 2 years also differed significantly (log-rank test, P = 0.001): 80.0% (SE, 10.3) for non-MMC eyes versus 31.3% (SE, 11.6) for MMC eyes. Rather than improving survival, intraoperative use of MMC was associated with shorter survival 2 years after AGV implantation during the first 2 years of life. We speculate that MMC-induced tissue death can stimulate a reactive fibrosis around the AGV in very young eyes.

  3. Pericardium Plug in the Repair of the Corneoscleral Fistula After Ahmed Glaucoma Valve Explantation

    PubMed Central

    Yoo, Chungkwon; Kwon, Sung Wook

    2008-01-01

We report four cases in which a pericardium (Tutoplast®) plug was used to repair a corneoscleral fistula after Ahmed Glaucoma Valve (AGV) explantation. In four cases in which the AGV tube had been exposed, AGV explantation was performed using a pericardium (Tutoplast®) plug to seal the defect previously occupied by the tube. After debridement of the fistula, a piece of processed pericardium (Tutoplast®), measuring 1 mm in width, was plugged into the fistula and secured with two interrupted 10-0 nylon sutures. To control intraocular pressure, a new AGV was implanted elsewhere in case 1, phaco-trabeculectomy was performed concurrently in case 2, cyclophotocoagulation was performed postoperatively in case 3, and anti-glaucomatous medication was added in case 4. No complication related to the fistula developed over the follow-up (range: 12–26 months). The pericardium (Tutoplast®) plug seems to be an effective method for repairing corneoscleral fistulas resulting from explantation of glaucoma drainage implants. PMID:19096247

  4. Changes in corneal endothelial cell density and the cumulative risk of corneal decompensation after Ahmed glaucoma valve implantation.

    PubMed

    Kim, Kyoung Nam; Lee, Sung Bok; Lee, Yeon Hee; Lee, Jong Joo; Lim, Hyung Bin; Kim, Chang-Sik

    2016-07-01

To evaluate changes in corneal endothelial cell density (ECD) and corneal decompensation following Ahmed glaucoma valve (AGV) implantation. This was a retrospective, observational case series. Patients with refractory glaucoma who underwent AGV implantation and were followed >5 years were consecutively enrolled. We reviewed the medical records, including the results of central corneal specular microscopy. Of the 127 enrolled patients, the annual change in ECD (%) was determined using linear regression for 72 eyes evaluated at least four times by serial specular microscopic examination and compared with 31 control eyes (fellow glaucomatous eyes under medical treatment). The main outcome measures were the cumulative risk of corneal decompensation and differences in ECD loss rates between subjects and controls. The mean follow-up after AGV implantation was 43.1 months. There were no cases of postoperative tube-corneal touch. The cumulative risk of corneal decompensation was 3.3% at 5 years after AGV implantation. ECD loss was more rapid in the 72 subject eyes than in the 31 controls (-7.0% vs. -0.1%/year; p<0.001). However, the rate of loss decreased over time, and the difference from control eyes lost statistical significance after 2 years postoperatively: -10.7% from baseline to 1 year (p<0.01), -7.0% from 1 year to 2 years (p=0.037), -4.2% from 2 years to 3 years (p=0.230), and -2.7% from 3 years to the final follow-up (p=0.111). In cases of uncomplicated AGV implantation, the cumulative risk of corneal decompensation was 3.3% at 5 years after the operation. ECD loss was statistically greater in eyes with an AGV than in control eyes without, but the difference was significant only up to 2 years post surgery.

  5. Modeling and deadlock avoidance of automated manufacturing systems with multiple automated guided vehicles.

    PubMed

    Wu, Naiqi; Zhou, MengChu

    2005-12-01

An automated manufacturing system (AMS) contains a number of versatile machines (or workstations), buffers, and an automated material handling system (MHS), and is computer-controlled. An effective and flexible way to implement the MHS is an automated guided vehicle (AGV) system. Deadlock is a crucial issue in AMS operation and has been studied extensively. Deadlock problems for parts in production and in transportation have traditionally been treated separately, with many techniques developed for each; such treatment, however, does not exploit the flexibility offered by multiple AGVs. In general, it is intractable to obtain a maximally permissive control policy for either problem. Instead, this paper investigates the two problems in an integrated way. First, the AGV system and the part processing processes are each modeled by resource-oriented Petri nets; the two models are then integrated using macro transitions. Based on the combined model, a novel control policy for deadlock avoidance is proposed. It is shown to be maximally permissive with computational complexity O(n²), where n is the number of machines in the AMS, if the complexity of controlling part transportation by AGVs is not considered. Thus, the complexity of deadlock avoidance for the whole system is bounded by the complexity of controlling the AGV system. An illustrative example shows its application and power.
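The paper's Petri-net policy is beyond the scope of an abstract, but the flavour of deadlock avoidance for AGVs sharing single-capacity path zones can be sketched with a banker's-style safety check. This is an illustrative simplification under assumed semantics (each AGV holds its current zone and releases everything on completion), not the authors' maximally permissive policy; all names are hypothetical.

```python
# Banker's-style safety check for AGVs sharing single-capacity zones.
# A zone request is granted only if, after granting it, some completion
# order still lets every AGV finish its remaining route (a sketch, not
# the resource-oriented Petri-net policy of the paper).

def is_safe(free_zones, routes):
    """free_zones: set of currently unoccupied zones.
    routes: {agv: list of zones still to visit, current zone first}.
    Returns True if some completion order lets every AGV finish."""
    free = set(free_zones)
    pending = dict(routes)
    progress = True
    while pending and progress:
        progress = False
        for agv, route in list(pending.items()):
            held = {route[0]}          # zone currently occupied
            needed = set(route[1:])    # zones still to traverse
            # AGV can finish now if every zone it needs is free or its own
            if needed <= free | held:
                free |= held | needed  # finishing releases all its zones
                del pending[agv]
                progress = True
    return not pending                 # unfinishable AGVs => unsafe state
```

A controller would call `is_safe` on the state that *would* result from a move, and block the move if the check fails.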

  6. Ahmed glaucoma valve in eyes with preexisting episcleral encircling element

    PubMed Central

    Choudhari, Nikhil Shreeram; George, Ronnie; Shantha, Balekudaru; Neog, Aditya; Tripathi, Shweta; Srinivasan, Bhaskar; Vijaya, Lingam

    2014-01-01

Background: To describe the use of the Ahmed glaucoma valve (AGV) in the management of intractable glaucoma in eyes with a preexisting episcleral encircling element. Materials and Methods: This is a retrospective, consecutive, noncomparative study. The study included 12 eyes of 12 patients with a preexisting episcleral encircling element that underwent implantation of a silicone AGV to treat intractable glaucoma during January 2009 to September 2010. Results: The mean patient age was 25.6 (standard deviation 17.1) years. Five (41.6%) patients were monocular. The indications for AGV were varied. The mean duration between placement of the episcleral encircling element and implantation of the AGV was 30.5 (33.8) months. The mean follow-up was 37.4 (22.9) weeks. Preoperatively, the mean intraocular pressure (IOP) was 31.4 (7.9) mmHg and the mean number of antiglaucoma medications was 2.8. At the final postoperative follow-up, the mean IOP was 12.5 (3.5) mmHg and the mean number of antiglaucoma medications was 0.8 (P < 0.001). Complications observed over the follow-up period included corneal graft failure in three eyes, tube erosion in two eyes, and rhegmatogenous retinal detachment in one eye. Conclusion: The AGV is an effective option in the management of intractable glaucoma in eyes with a preexisting episcleral encircling element, keeping in mind the possibility of significant postoperative complications. PMID:24881603

  7. Intracameral air injection during Ahmed glaucoma valve implantation in neovascular glaucoma for the prevention of tube obstruction with blood clot: Case Report.

    PubMed

    Hwang, Sung Ha; Yoo, Chungkwon; Kim, Yong Yeon; Lee, Dae Young; Nam, Dong Heun; Lee, Jong Yeon

    2017-12-01

Glaucoma drainage implant surgery is a treatment option for the management of neovascular glaucoma. However, tube obstruction by a blood clot after Ahmed glaucoma valve (AGV) implantation is an unpredictable, clinically challenging situation. We report 4 cases using intracameral air injection to prevent tube obstruction of the AGV by blood clot. The first case was a 57-year-old woman suffering from ocular pain because of a tube obstruction with blood clot after AGV implantation in neovascular glaucoma. Surgical blood clot removal was performed. However, intractable bleeding was noted during removal of the blood clot, so intracameral air injection was performed to prevent a recurrent tube obstruction. After the procedure, although blood clots formed around the tube, the tube opening where air could touch remained patent. In 3 cases of neovascular glaucoma with preoperative severe intraocular hemorrhage, intracameral air injection and AGV implantation were performed simultaneously. In all 3 cases, the tube openings were patent. It appears that air impeded blood clot formation in front of the tube opening. Intracameral air injection could be a feasible option to prevent tube obstruction of an AGV implant by a blood clot in neovascular glaucoma with a high risk of tube obstruction.

  8. Outcomes and Complications of Ahmed Tube Implantation in Asian Eyes.

    PubMed

    Choo, Jessica Qian Hui; Chen, Ziyou David; Koh, Victor; Liang, Shen; Aquino, Cecilia Maria; Sng, Chelvin; Chew, Paul

    2018-06-18

There is a lack of long-term Asian studies on the efficacy and safety of Ahmed glaucoma valve (AGV) implantation. This study seeks to determine the outcomes and complications of AGV implantation in Asians. A retrospective review of AGV surgeries performed at a single centre in Singapore was conducted. Seventy-six patients with primary and secondary glaucoma who underwent their first AGV surgery from 1st January 2010 to 31st December 2012 were considered for our study. Primary outcomes evaluated were: failure, intra-ocular pressure (IOP), best-corrected visual acuity (BCVA), number of IOP-lowering medications, and complications. Failure was defined by: IOP >21 mm Hg on two consecutive visits after 3 months, IOP ≤5 mm Hg on two consecutive visits after 3 months, reoperation for glaucoma, removal of the implant, or loss of light perception vision. Mean follow-up duration was 33.2±6.9 months. There was a significant reduction in IOP (mean reduction 25.9%, P<0.001) and number of IOP-lowering medications (mean reduction 77.8%, P<0.001) at 3 years. The absolute failure rate was 23.9% at 3 years, with no difference between eyes with or without previous trabeculectomy or between eyes with primary or secondary glaucoma. Occurrence of post-operative hyphema was a significant risk factor for failure. The commonest post-operative complications were hyphema and tube exposure. At 3 years after AGV surgery in Asian eyes, less than one-quarter of the eyes fulfilled the criteria for surgical failure.
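The failure definition used here (and in several other records above) is mechanical enough to express directly. A hypothetical helper applying it to a series of follow-up visits might look like the sketch below; the function name, argument names, and flags are assumptions, not the study's code.

```python
# Apply a tube-shunt failure definition to follow-up data.
# visits: list of (months_postop, iop_mmHg) tuples in chronological order.
# Failure: IOP >21 or <=5 mm Hg on two consecutive visits after 3 months,
# reoperation for glaucoma, implant removal, or loss of light perception.

def is_failure(visits, reoperated=False, implant_removed=False,
               lost_light_perception=False):
    if reoperated or implant_removed or lost_light_perception:
        return True
    # keep only visits after the 3-month grace period
    late = [(m, iop) for m, iop in visits if m > 3]
    for (_, iop1), (_, iop2) in zip(late, late[1:]):
        if (iop1 > 21 and iop2 > 21) or (iop1 <= 5 and iop2 <= 5):
            return True   # two consecutive out-of-range visits
    return False
```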

  9. Short- to long-term results of Ahmed glaucoma valve in the management of elevated intraocular pressure in patients with pediatric uveitis.

    PubMed

    Eksioglu, Umit; Yakin, Mehmet; Sungur, Gulten; Satana, Banu; Demirok, Gulizar; Balta, Ozgur; Ornek, Firdevs

    2017-06-01

The aim of this study was to evaluate the long-term outcome of Ahmed glaucoma valve (AGV) implant for elevated intraocular pressure (IOP) in pediatric patients with uveitis. This was a retrospective chart review. The study included 16 eyes (11 children) with uveitis. Success was defined as having IOP between 6 and 21 mm Hg with (qualified success) or without (complete success) antiglaucoma medications and without the need for further glaucoma or tube extraction surgery. Mean age of patients at the time of AGV implantation was 14.19 ± 3.25 years. AGV implantation was the first glaucoma surgical procedure in 12 eyes (75%). Average postoperative follow-up period was 64.46 ± 33.56 months. Mean preoperative IOP was 33.50 ± 7.30 mm Hg versus 12.69 ± 3.20 mm Hg at the last follow-up visit (p < 0.001). Three eyes (18.7%) were determined as cases of "failure" because of tube removal in 2 eyes and a second AGV implantation in 1 eye. The cumulative probability of complete success was 68.8% at 6 months, 56.3% at 12 months, 49.2% at 36 months, 42.2% at 48 months, and 35.2% at 84 months, and the cumulative probability of eyes without complication was 75.0% at 6 months, 66.7% at 24 months, 58.3% at 36 months, 48.6% at 48 months and 24.3% at 108 months based on Kaplan-Meier survival analysis. Although AGV implant is an effective choice in the management of elevated IOP in pediatric uveitis, antiglaucoma medications are frequently needed for control of IOP. Tube exposure is an important complication in the long term. Differential diagnosis between relapse of uveitis and endophthalmitis is important in patients who received AGV implantation.

  10. Evaluation of success after second Ahmed glaucoma valve implantation.

    PubMed

    Nilforushan, Naveed; Yadgari, Maryam; Jazayeri, Anis Alsadat; Karimi, Nasser

    2016-03-01

To evaluate the outcome of second Ahmed glaucoma valve (AGV) surgery in eyes with failed previous AGV surgery. Retrospective case series. Following chart review, 36 eyes of 34 patients with a second AGV implantation were enrolled in this study. The primary outcome measure was surgical success, defined in terms of intraocular pressure (IOP) control using two criteria: IOP ≤21 mmHg (criterion 1) and IOP ≤16 mmHg (criterion 2), with at least 20% reduction in IOP, either with no medication (complete success) or with no more than two medications (qualified success). Kaplan-Meier survival analysis was used to determine the probability of surgical success. The average age of the patients was 32.7 years (range 4-65), and the mean duration of follow-up was 21.4 months (range 6-96). Preoperatively, the mean IOP was 26.94 mmHg (standard deviation [SD] 7.03), and the patients were using 2.8 glaucoma medications on average (SD 0.9). The mean IOP decreased significantly to 13.28 mmHg (SD 3.59) at the last postoperative visit (P = 0.00), while the patients needed fewer glaucoma medications on average (1.4 ± 1.1, P = 0.00). Surgical success of second glaucoma drainage devices (Kaplan-Meier analysis) at 6, 12, 18, and 42 months was 94%, 85%, 80%, and 53%, respectively, according to criterion 1, and 94%, 85%, 75%, and 45%, respectively, according to criterion 2. Repeated AGV implantation seems to be a safe treatment modality with an acceptable success rate in cases of failed previous AGV surgery.
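Several of these records report cumulative success probabilities from Kaplan-Meier analysis. A minimal product-limit estimator shows how such figures are produced from per-eye (time, event) data; this is a generic sketch of the standard method, not the study's statistical code, and the toy data below are invented for illustration.

```python
# Minimal Kaplan-Meier (product-limit) survival estimator.
# data: list of (time, event) pairs per eye; event=True marks failure,
# event=False marks censoring (lost to follow-up while still a success).

def kaplan_meier(data):
    """Return [(t, S(t))] giving survival probability at each failure time."""
    failure_times = sorted({t for t, event in data if event})
    surv, curve = 1.0, []
    for t in failure_times:
        at_risk = sum(1 for ti, _ in data if ti >= t)       # still followed
        failures = sum(1 for ti, e in data if ti == t and e)
        surv *= 1 - failures / at_risk                      # product limit
        curve.append((t, surv))
    return curve
```

Censored eyes contribute to the at-risk denominator up to their last visit, which is why Kaplan-Meier rates differ from naive failure fractions.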

  11. Ahmed glaucoma valve in post-penetrating-keratoplasty glaucoma: A critically evaluated prospective clinical study

    PubMed Central

    Panda, Anita; Prakash, Vadivelu Jaya; Dada, Tanuj; Gupta, Anoop Kishore; Khokhar, Sudarshan; Vanathi, Murugesan

    2011-01-01

Aim: The aim was to evaluate the outcome of Ahmed glaucoma valve (AGV) in post-penetrating-keratoplasty glaucoma (PKPG). Materials and Methods: In this prospective study, 20 eyes of 20 adult patients with post-PKPG with intraocular pressure (IOP) >21 mmHg, on two or more antiglaucoma medications, underwent AGV (model FP7) implantation and were followed up for a minimum of 6 months. Absolute success was defined as 5

  12. Delicate Ag/V2O5/TiO2 ternary nanostructures as a high-performance photocatalyst

    NASA Astrophysics Data System (ADS)

    Zhu, Xiao-Dong; Zheng, Ya-Lun; Feng, Yu-Jie; Sun, Ke-Ning

    2018-02-01

Here we report, for the first time, delicate ternary nanostructures consisting of TiO2 nanoplatelets co-doped with Ag and V2O5 nanoparticles. The relationship between the composition and the morphology is systematically studied. We find a remarkable synergistic effect among the three components, and the resulting delicate Ag/V2O5/TiO2 ternary nanostructures exhibit superior photocatalytic performance over neat TiO2 nanoplatelets, as well as over Ag/TiO2 and V2O5/TiO2 binary nanostructures, for the degradation of methyl orange. We believe our delicate Ag/V2O5/TiO2 ternary nanostructures may lay a basis for developing next-generation, high-performance composite photocatalysts.

  13. Evaluation of success after second Ahmed glaucoma valve implantation

    PubMed Central

    Nilforushan, Naveed; Yadgari, Maryam; Jazayeri, Anis Alsadat; Karimi, Nasser

    2016-01-01

Purpose: To evaluate the outcome of second Ahmed glaucoma valve (AGV) surgery in eyes with failed previous AGV surgery. Design: Retrospective case series. Patients and Methods: Following chart review, 36 eyes of 34 patients with a second AGV implantation were enrolled in this study. The primary outcome measure was surgical success, defined in terms of intraocular pressure (IOP) control using two criteria: IOP ≤21 mmHg (criterion 1) and IOP ≤16 mmHg (criterion 2), with at least 20% reduction in IOP, either with no medication (complete success) or with no more than two medications (qualified success). Kaplan–Meier survival analysis was used to determine the probability of surgical success. Results: The average age of the patients was 32.7 years (range 4–65), and the mean duration of follow-up was 21.4 months (range 6–96). Preoperatively, the mean IOP was 26.94 mmHg (standard deviation [SD] 7.03), and the patients were using 2.8 glaucoma medications on average (SD 0.9). The mean IOP decreased significantly to 13.28 mmHg (SD 3.59) at the last postoperative visit (P = 0.00), while the patients needed fewer glaucoma medications on average (1.4 ± 1.1, P = 0.00). Surgical success of second glaucoma drainage devices (Kaplan–Meier analysis) at 6, 12, 18, and 42 months was 94%, 85%, 80%, and 53%, respectively, according to criterion 1, and 94%, 85%, 75%, and 45%, respectively, according to criterion 2. Conclusion: Repeated AGV implantation seems to be a safe treatment modality with an acceptable success rate in cases of failed previous AGV surgery. PMID:27146930

  14. Comparison of the Outcome of Silicone Ahmed Glaucoma Valve Implantation with a Surface Area between 96 and 184 mm2 in Adult Eyes

    PubMed Central

    Koh, Kyung Min; Hwang, Young Hoon; Jung, Jong Jin; Sohn, Yong Ho

    2013-01-01

    Purpose To compare the success rates, complications, and visual outcomes between silicone Ahmed glaucoma valve (AGV) implantation with 96 mm2 (FP8) or 184 mm2 (FP7) surface areas. Methods This study is a retrospective review of the records from 132 adult patients (134 eyes) that underwent silicone AGV implant surgery. Among them, the outcomes of 24 eyes from 24 patients with refractory glaucoma who underwent FP8 AGV implantation were compared with 76 eyes from 76 patients who underwent FP7 AGV implantation. Preoperative and postoperative data, including intraocular pressure (IOP), visual acuity, number of medications, and complications were compared between the 2 groups. Results There were no significant differences in baseline characteristics between the 2 groups (p > 0.05). The postoperative visual acuity of the patients in the FP8 group was better than that of the patients in the FP7 group in some early postoperative periods (p < 0.05); however, after 10 postoperative months, visual acuity was not significantly different through the 3-year follow-up period (p > 0.05). Postoperative IOP was not significantly different between the 2 groups (p > 0.05) except for IOP on postoperative day 1 (11.42 mmHg for the FP7 group and 7.42 mmHg for the FP8 group; p = 0.031). There was no statistical difference in success rates, final IOP, number of medications, or complication rates between the 2 groups (p > 0.05). Conclusions The FP7 and FP8 AGV implants showed no difference in terms of vision preservation, IOP reduction, and number of glaucoma medications required. PMID:24082774

  15. Admission glycemic variability correlates with in-hospital outcomes in diabetic patients with non-ST segment elevation acute coronary syndrome undergoing percutaneous coronary intervention

    PubMed Central

    Su, Gong; Zhang, Tao; Yang, Hongxia; Dai, Wenlong; Tian, Lei; Tao, Hong; Wang, Tao; Mi, Shuhua

    2018-01-01

Objective The aim of this study is to evaluate the effects of admission glycemic variability (AGV) on in-hospital outcomes in diabetic patients with non-ST segment elevation acute coronary syndrome (NSTE-ACS) undergoing percutaneous coronary intervention (PCI). Methods We studied 759 diabetic patients with NSTE-ACS undergoing PCI. AGV was assessed by the mean amplitude of glycemic excursions (MAGE) in the first 24 hours after admission. The primary outcome was a composite of in-hospital events: all-cause mortality, new-onset myocardial infarction, acute heart failure, and stroke. Secondary outcomes were each of these considered separately. Predictive effects of AGV on the in-hospital outcomes were analyzed. Results Patients with high MAGE levels had a significantly higher incidence of total outcomes (9.9% vs. 4.8%, p=0.009) and all-cause mortality (2.3% vs. 0.4%, p=0.023) than those with low MAGE levels during hospitalization. Multivariable analysis revealed that AGV was significantly associated with the incidence of in-hospital outcomes (odds ratio=2.024, 95% CI 1.105-3.704, p=0.022) but hemoglobin A1c (HbA1c) was not. In receiver-operating characteristic curve analysis for MAGE and HbA1c in predicting in-hospital outcomes, the area under the curve for MAGE (0.608, p=0.012) was superior to that for HbA1c (0.556, p=0.193). Conclusion High AGV may be closely correlated with increased in-hospital poor outcomes in diabetic patients with NSTE-ACS following PCI. PMID:29848920
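MAGE itself is computable from a glucose series. The sketch below follows the common turning-point simplification of Service's method (average only those peak-to-nadir swings exceeding one standard deviation of the series); it is an assumed, simplified formulation, not the study's exact implementation.

```python
import statistics

# Mean amplitude of glycemic excursions (MAGE), simplified:
# find turning points of the glucose series, then average the
# absolute swings between consecutive turning points that exceed
# one standard deviation of the whole series.

def mage(glucose):
    sd = statistics.pstdev(glucose)
    # turning points: endpoints plus interior local maxima/minima
    turns = [glucose[0]] + [
        g for prev, g, nxt in zip(glucose, glucose[1:], glucose[2:])
        if (g - prev) * (nxt - g) < 0
    ] + [glucose[-1]]
    swings = [abs(b - a) for a, b in zip(turns, turns[1:])]
    valid = [s for s in swings if s > sd]       # keep excursions > 1 SD
    return sum(valid) / len(valid) if valid else 0.0
```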

  16. Intelligence Level Performance Standards Research for Autonomous Vehicles

    PubMed Central

    Bostelman, Roger B.; Hong, Tsai H.; Messina, Elena

    2017-01-01

United States and European safety standards have evolved to protect workers near Automatic Guided Vehicles (AGVs). However, performance standards for AGVs and mobile robots have only recently begun development. Lessons can be learned from research and standards efforts for mobile robots applied to emergency response and military applications. Research challenges, tests and evaluations, and programs to develop higher intelligence levels for vehicles can also be used to guide industrial AGV developments towards more adaptable and intelligent systems. These efforts also provide useful criteria for developing AGV performance test methods. Current standards areas being considered for AGVs are docking, navigation, obstacle avoidance, and the ground truth systems that measure performance. This paper provides a look to the future with standards developments in both the performance of vehicles and the dynamic perception systems that measure intelligent vehicle performance. PMID:28649189

  17. Initial Clinical Experience with Ahmed Valve Implantation in Refractory Pediatric Glaucoma

    PubMed

    Novak-Lauš, Katia; Škunca Herman, Jelena; Šimić Prskalo, Marija; Jurišić, Darija; Mandić, Zdravko

    2016-12-01

The purpose is to report on the safety and efficacy of Ahmed Glaucoma Valve (AGV, New World Medical, Inc., Rancho Cucamonga, CA, USA) implantation for the management of refractory pediatric glaucoma over a one-year follow-up period. A retrospective chart review was conducted on 10 eyes, all of patients younger than 11 years, with pediatric glaucoma that underwent AGV implantation for medically uncontrolled intraocular pressure (IOP) between 2010 and 2014. Outcome measures were control of IOP below 23 mm Hg (with or without antiglaucoma medications) and changes in visual acuity. Complications were recorded. After AGV implantation, IOP values ranged from 18 mm Hg to 23 mm Hg (except for one eye with postoperative hypotony due to suprachoroidal hemorrhage, where the postoperative IOP value was 4 mm Hg). The number of antiglaucoma medications was reduced, i.e. four patients had two medications, one patient had one medication, and the others did not need antiglaucoma medication at the last follow-up visit. One eye had suprachoroidal hemorrhage, one eye had a long-term persistent uveitic membrane, and two eyes had tube-cornea touch. In conclusion, AGV implantation appears to be a viable option for the management of refractory pediatric glaucoma and shows success in IOP control. However, there was a relatively high complication rate limiting the overall success rate.

  18. Investigation of Matlab® as platform in navigation and control of an Automatic Guided Vehicle utilising an omnivision sensor.

    PubMed

    Kotze, Ben; Jordaan, Gerrit

    2014-08-25

    Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed.
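The colour-signed path following described above can be illustrated with a small steering estimate from a single camera frame: mask the pixels close to the designated path colour and steer toward the mask's horizontal centroid. The function name, colour tolerance, and output convention are assumptions for illustration; the paper's Matlab® pipeline is not reproduced here.

```python
import numpy as np

# Estimate a steering offset from one camera frame by locating the
# designated path colour. Returns a value in [-1, 1]; negative means
# the path lies left of the image centre, so the AGV should steer left.

def steering_offset(frame, path_rgb, tol=30):
    """frame: HxWx3 uint8 array; path_rgb: target (R, G, B) of the path."""
    # per-pixel colour distance (sum of absolute channel differences)
    diff = np.abs(frame.astype(int) - np.array(path_rgb)).sum(axis=2)
    mask = diff < tol                       # pixels matching the path colour
    if not mask.any():
        return 0.0                          # path lost: hold course
    xs = np.nonzero(mask)[1]                # column indices of path pixels
    centre = (frame.shape[1] - 1) / 2
    return float((xs.mean() - centre) / centre)
```

In a real system this would run on the unwrapped area of interest extracted from the omnidirectional image, with one target colour per AGV route.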

  19. Local navigation and fuzzy control realization for autonomous guided vehicle

    NASA Astrophysics Data System (ADS)

    El-Konyaly, El-Sayed H.; Saraya, Sabry F.; Shehata, Raef S.

    1996-10-01

    This paper addresses the problem of local navigation for an autonomous guided vehicle (AGV) in a structured environment that contains static and dynamic obstacles. Information about the environment is obtained via a CCD camera. The problem is formulated as a dynamic feedback control problem in which speed and steering decisions are made on the fly while the AGV is moving. A decision element (DE) that uses local information is proposed. The DE guides the vehicle in the environment by producing appropriate navigation decisions. Dynamic models of a three-wheeled vehicle for driving and steering mechanisms are derived. The interaction between them is performed via the local feedback DE. A controller, based on fuzzy logic, is designed to drive the vehicle safely in an intelligent and human-like manner. The effectiveness of the navigation and control strategies in driving the AGV is illustrated and evaluated.
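A fuzzy steering rule of the kind described can be sketched with triangular membership functions and weighted-average defuzzification. The membership shapes, rule consequents, and units below are illustrative assumptions, not the controller designed in the paper.

```python
# Toy fuzzy steering controller: fuzzify the lateral offset of the path
# (positive = path lies to the right), fire three rules, and defuzzify
# to a steering angle in degrees (positive = steer right).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(error):
    # fuzzify: degrees of membership in LEFT / CENTRE / RIGHT (metres)
    left = tri(error, -2.0, -1.0, 0.0)
    centre = tri(error, -1.0, 0.0, 1.0)
    right = tri(error, 0.0, 1.0, 2.0)
    w = left + centre + right
    if w == 0:
        return 30.0 if error > 0 else -30.0   # saturate outside the universe
    # crisp singleton consequents, defuzzified by weighted average
    return (left * -30.0 + centre * 0.0 + right * 30.0) / w
```

Overlapping memberships make the output vary smoothly with the error, which is the "human-like" quality the paper attributes to fuzzy control.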

  20. Investigation of Matlab® as Platform in Navigation and Control of an Automatic Guided Vehicle Utilising an Omnivision Sensor

    PubMed Central

    Kotze, Ben; Jordaan, Gerrit

    2014-01-01

    Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed. PMID:25157548
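
The core of the path-following idea above (extract an area of interest, locate the coloured path, steer toward it) can be sketched with a toy centroid computation. The binary mask below stands in for a colour-thresholded camera row; it is an assumption for illustration, not the paper's Matlab® pipeline:

```python
# Hedged sketch of image-based path following: find the centroid of "path"
# pixels in one image row and convert it to a normalised steering offset.
# A real system would threshold a camera frame per colour-signed route.

def path_offset(mask_row):
    """Return normalised lateral offset of the path centroid in [-1, 1]."""
    cols = [i for i, v in enumerate(mask_row) if v]
    if not cols:
        return None  # path lost: caller should stop or start a search
    centroid = sum(cols) / len(cols)
    centre = (len(mask_row) - 1) / 2
    return (centroid - centre) / centre

row = [0, 0, 0, 1, 1, 1, 0, 0, 0]   # path centred in the row
print(path_offset(row))             # 0.0
row = [0, 0, 0, 0, 0, 0, 1, 1, 1]   # path to the right of centre
print(path_offset(row))             # positive -> steer right
```

The returned offset would feed a steering controller; the `None` branch corresponds to the path-lost case a reconfigurable-route AGV must handle.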

  1. Polypropylene vs silicone Ahmed valve with adjunctive mitomycin C in paediatric age group: a prospective controlled study

    PubMed Central

    El Sayed, Y; Awadein, A

    2013-01-01

    Purpose To compare the results of silicone and polypropylene Ahmed glaucoma valves (AGV) implanted during the first 10 years of life. Methods A prospective study was performed on 50 eyes of 33 patients with paediatric glaucoma. Eyes were matched to either polypropylene or silicone AGV. In eyes with bilateral glaucoma, one eye was implanted with polypropylene and the other eye was implanted with silicone AGV. Results Fifty eyes of 33 children were reviewed. Twenty-five eyes received a polypropylene valve, and 25 eyes received a silicone valve. Eyes implanted with silicone valves achieved a significantly lower intraocular pressure (IOP) compared with the polypropylene group at 6 months, 1 year, and 2 years postoperatively. The average survival time was significantly longer (P=0.001 by the log-rank test) for the silicone group than for the polypropylene group and the cumulative probability of survival by the log-rank test at the end of the second year was 80% (SE: 8.0, 95% confidence interval (CI): 64–96%) in the silicone group and 56% (SE: 9.8, 95% CI: 40–90%) in the polypropylene group. The difference in the number of postoperative interventions and complications between both groups was statistically insignificant. Conclusion Silicone AGVs can achieve better IOP control and longer survival with fewer antiglaucoma drops compared with polypropylene valves in children younger than 10 years. PMID:23579403

  2. Hypertensive phase and early complications after Ahmed glaucoma valve implantation with intraoperative subtenon triamcinolone acetonide.

    PubMed

    Turalba, Angela V; Pasquale, Louis R

    2014-01-01

    To evaluate intraoperative subtenon triamcinolone acetonide (TA) as an adjunct to Ahmed glaucoma valve (AGV) implantation. Retrospective comparative case series. Forty-two consecutive cases of uncontrolled glaucoma undergoing AGV implantation: 19 eyes receiving intraoperative subtenon TA and 23 eyes that did not receive TA. A retrospective chart review was performed on consecutive pseudophakic adult patients with uncontrolled glaucoma undergoing AGV with and without intraoperative subtenon TA injection by a single surgeon. Clinical data were collected from 42 eyes and analyzed for the first 6 months after surgery. Primary outcomes included intraocular pressure (IOP) and number of glaucoma medications prior to and after AGV implantation. The hypertensive phase (HP) was defined as an IOP measurement of greater than 21 mmHg (with or without medications) during the 6-month postoperative period that was not a result of tube obstruction, retraction, or malfunction. Postoperative complications and visual acuity were analyzed as secondary outcome measures. Five out of 19 (26%) TA cases and 12 out of 23 (52%) non-TA cases developed the HP (P=0.027). Mean IOP (14.2±4.6 in TA cases versus [vs] 14.7±5.0 mmHg in non-TA cases; P=0.78), and number of glaucoma medications needed (1.8±1.3 in TA cases vs 1.6±1.1 in the comparison group; P=0.65) were similar between both groups at 6 months. Although rates of serious complications did not differ between the groups (13% in the TA group vs 16% in the non-TA group), early tube erosion (n=1) and bacterial endophthalmitis (n=1) were noted with TA but not in the non-TA group. Subtenon TA injection during AGV implantation may decrease the occurrence of the HP but does not alter the ultimate IOP outcome and may pose increased risk of serious complications within the first 6 months of surgery.
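
The hypertensive-phase comparison above (5/19 TA eyes vs 12/23 non-TA eyes) is a two-proportion contrast on a 2x2 table. The abstract does not say which test produced P=0.027; as one stdlib illustration of testing such a table, a Fisher exact test can be written as:

```python
# Exploratory sketch: two-sided Fisher exact test on a 2x2 table, applied to
# the hypertensive-phase counts quoted above. This is not necessarily the
# test the authors used; it only illustrates the comparison.

from math import comb

def fisher_exact_2sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p(x):  # hypergeometric probability of x successes in the top-left cell
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # Sum probabilities of all tables as or more extreme than the observed one.
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

# 5 of 19 TA eyes vs 12 of 23 non-TA eyes developed the hypertensive phase
print(round(fisher_exact_2sided(5, 14, 12, 11), 3))
```

For larger tables a chi-square test (or `scipy.stats.fisher_exact`) would be the usual route; the point here is only the structure of the comparison.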

  3. Hypertensive phase and early complications after Ahmed glaucoma valve implantation with intraoperative subtenon triamcinolone acetonide

    PubMed Central

    Turalba, Angela V; Pasquale, Louis R

    2014-01-01

    Objective To evaluate intraoperative subtenon triamcinolone acetonide (TA) as an adjunct to Ahmed glaucoma valve (AGV) implantation. Design Retrospective comparative case series. Participants Forty-two consecutive cases of uncontrolled glaucoma undergoing AGV implantation: 19 eyes receiving intraoperative subtenon TA and 23 eyes that did not receive TA. Methods A retrospective chart review was performed on consecutive pseudophakic adult patients with uncontrolled glaucoma undergoing AGV with and without intraoperative subtenon TA injection by a single surgeon. Clinical data were collected from 42 eyes and analyzed for the first 6 months after surgery. Main outcome measures Primary outcomes included intraocular pressure (IOP) and number of glaucoma medications prior to and after AGV implantation. The hypertensive phase (HP) was defined as an IOP measurement of greater than 21 mmHg (with or without medications) during the 6-month postoperative period that was not a result of tube obstruction, retraction, or malfunction. Postoperative complications and visual acuity were analyzed as secondary outcome measures. Results Five out of 19 (26%) TA cases and 12 out of 23 (52%) non-TA cases developed the HP (P=0.027). Mean IOP (14.2±4.6 in TA cases versus [vs] 14.7±5.0 mmHg in non-TA cases; P=0.78), and number of glaucoma medications needed (1.8±1.3 in TA cases vs 1.6±1.1 in the comparison group; P=0.65) were similar between both groups at 6 months. Although rates of serious complications did not differ between the groups (13% in the TA group vs 16% in the non-TA group), early tube erosion (n=1) and bacterial endophthalmitis (n=1) were noted with TA but not in the non-TA group. Conclusions Subtenon TA injection during AGV implantation may decrease the occurrence of the HP but does not alter the ultimate IOP outcome and may pose increased risk of serious complications within the first 6 months of surgery. PMID:25050061

  4. Long-term results of Ahmed glaucoma valve implantation in Egyptian population

    PubMed Central

    Elhefney, Eman; Mokbel, Tharwat; Abou Samra, Waleed; Kishk, Hanem; Mohsen, Tarek; El-Kannishy, Amr

    2018-01-01

    AIM To evaluate the long-term results and complications of Ahmed glaucoma valve (AGV) implantation in a cohort of Egyptian patients. METHODS A retrospective study of 124 eyes of 99 patients with refractory glaucoma who underwent AGV implantation and had a minimum follow-up of 5y was performed. All patients underwent complete ophthalmic examination and intraocular pressure (IOP) measurement before surgery and at 1d, weekly for the 1st month, 3, 6mo, and 1y after surgery and yearly afterward for 5y. IOP was measured by Goldmann applanation tonometry and/or Tono-Pen. Complications and the number of anti-glaucoma medications needed were recorded. Success was defined as IOP less than 21 mm Hg with or without anti-glaucoma medication and without additional glaucoma surgery. RESULTS Mean age was 23.1±19.9y. All eyes had at least one prior glaucoma surgery. IOP was reduced from a mean of 37.2±6.8 to 19.2±5.2 mm Hg after 5y follow-up with a reduced number of medications from 2.64±0.59 to 1.81±0.4. Complete and qualified success rates were 31.5% and 46.0% respectively at the end of follow-up. The most common complications were encapsulated cyst formation in 51 eyes (41.1%), complicated cataract in 9 eyes (7.25%), recessed tube in 8 eyes (6.45%), tube exposure in 6 eyes (4.8%) and corneal touch in 6 eyes (4.8%). Other complications included extruded AGV, endophthalmitis and persistent hypotony. Each of them was recorded in only 2 eyes (1.6%). CONCLUSION Although refractory glaucoma is a difficult problem to manage, AGV implantation is an effective and relatively safe procedure in treating refractory glaucoma in Egyptian patients with long-term follow-up. Encapsulated cyst formation was the most common complication, which limits successful IOP control after AGV implantation. However, effective complications management can improve the rate of success. PMID:29600175

  5. The Ahmed shunt versus the Baerveldt shunt for refractory glaucoma: a meta-analysis.

    PubMed

    Wang, Shiming; Gao, Xiaoming; Qian, Nana

    2016-06-08

    The purpose of this study was to compare the efficacy and tolerability of the Ahmed glaucoma valve (AGV) implant and the Baerveldt implant for the treatment of refractory glaucoma. We comprehensively searched four databases, including PubMed, EMBASE, Web of Science, and the Cochrane Library databases, selecting the relevant studies. The continuous variables, namely, intraocular pressure reduction (IOPR) and a reduction in glaucoma medication, were pooled by the weighted mean differences (WMDs), and the dichotomous outcomes, including success rates and tolerability estimates, were pooled by odds ratios (ORs). A total of 929 patients from six studies were included. The WMDs of the IOPR between the AGV implant and the Baerveldt implant were 1.58 [95 % confidence interval (CI): -2.99 to 6.15] at 6 months, -1.01 (95 % CI: -3.40 to 1.98) at 12 months, -0.54 (95 % CI: -4.89 to 3.82) at 24 months, and -0.47 (95 % CI: -3.29 to 2.35) at 36 months. No significant difference was detected between the two groups at any point in time. The pooled ORs comparing the AGV implant with the Baerveldt implant were 0.51 (95 % CI: 0.33 to 0.80) for the complete success rate and 0.67 (95 % CI: 0.50 to 0.91) for the qualified success rate. The Baerveldt implant was associated with a reduction in glaucoma medication at -0.51 (95 % CI: -0.90 to -0.12). There were no significant differences between the AGV implant and the Baerveldt implant in the rates of adverse events. The Baerveldt implant is more effective in terms of both surgical success rate and reduction in glaucoma medication, but comparable to the AGV implant in lowering IOP. Both implants may have comparable incidences of adverse events.
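
The WMDs quoted above come from inverse-variance pooling of per-study estimates. The generic fixed-effect machinery is small enough to sketch; the input numbers below are invented examples, not the meta-analysis data:

```python
# Sketch of fixed-effect inverse-variance pooling: weight each study by the
# inverse of its squared standard error, then combine. Illustrative only;
# the estimates and SEs here are made up.

import math

def pool_fixed_effect(estimates, ses):
    """Inverse-variance pooled estimate with a 95% confidence interval."""
    w = [1 / se**2 for se in ses]                     # per-study weights
    pooled = sum(wi * ei for wi, ei in zip(w, estimates)) / sum(w)
    se_pooled = math.sqrt(1 / sum(w))                 # SE of the pooled estimate
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# hypothetical per-study IOPR differences (mmHg) and their standard errors
est, ci = pool_fixed_effect([1.2, -0.8, 0.3], [0.9, 1.1, 0.7])
print(round(est, 3), [round(x, 3) for x in ci])
```

A random-effects pooling (as often used when studies are heterogeneous) would add a between-study variance term to each weight; the fixed-effect version above is the simplest case.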

  6. Long-term results of Ahmed glaucoma valve implantation in Egyptian population.

    PubMed

    Elhefney, Eman; Mokbel, Tharwat; Abou Samra, Waleed; Kishk, Hanem; Mohsen, Tarek; El-Kannishy, Amr

    2018-01-01

    To evaluate the long-term results and complications of Ahmed glaucoma valve (AGV) implantation in a cohort of Egyptian patients. A retrospective study of 124 eyes of 99 patients with refractory glaucoma who underwent AGV implantation and had a minimum follow-up of 5y was performed. All patients underwent complete ophthalmic examination and intraocular pressure (IOP) measurement before surgery and at 1d, weekly for the 1st month, 3, 6mo, and 1y after surgery and yearly afterward for 5y. IOP was measured by Goldmann applanation tonometry and/or Tono-Pen. Complications and the number of anti-glaucoma medications needed were recorded. Success was defined as IOP less than 21 mm Hg with or without anti-glaucoma medication and without additional glaucoma surgery. Mean age was 23.1±19.9y. All eyes had at least one prior glaucoma surgery. IOP was reduced from a mean of 37.2±6.8 to 19.2±5.2 mm Hg after 5y follow-up with a reduced number of medications from 2.64±0.59 to 1.81±0.4. Complete and qualified success rates were 31.5% and 46.0% respectively at the end of follow-up. The most common complications were encapsulated cyst formation in 51 eyes (41.1%), complicated cataract in 9 eyes (7.25%), recessed tube in 8 eyes (6.45%), tube exposure in 6 eyes (4.8%) and corneal touch in 6 eyes (4.8%). Other complications included extruded AGV, endophthalmitis and persistent hypotony. Each of them was recorded in only 2 eyes (1.6%). Although refractory glaucoma is a difficult problem to manage, AGV implantation is an effective and relatively safe procedure in treating refractory glaucoma in Egyptian patients with long-term follow-up. Encapsulated cyst formation was the most common complication, which limits successful IOP control after AGV implantation. However, effective complications management can improve the rate of success.

  7. Dual infection by streptococcus and atypical mycobacteria following Ahmed glaucoma valve surgery.

    PubMed

    Rao, Aparna; Wallang, Batriti; Padhy, Tapas Ranjan; Mittal, Ruchi; Sharma, Savitri

    2013-07-01

    To report a case of late postoperative endophthalmitis caused by Streptococcus pneumoniae and conjunctival necrosis by Streptococcus pneumoniae and Mycobacterium fortuitum following Ahmed glaucoma valve (AGV) surgery in a young patient. Case report of a 13-year-old boy with purulent exudates and extensive conjunctival necrosis two months following amniotic membrane graft and conjunctival closure (for conjunctival retraction post AGV for secondary glaucoma). The conjunctiva showed extensive necrosis causing exposure of the tube and plate associated with frank exudates in the area adjoining the plate and anterior chamber mandating explantation of the plate along with intravitreal antibiotics. The vitreous aspirate grew Streptococcus pneumoniae while Streptococcus pneumoniae with Mycobacterium fortuitum was isolated from the explanted plate. Despite adequate control of infection following surgery, the final visual outcome was poor owing to disc pallor. Conjunctival necrosis and retraction post-AGV can cause late postoperative co-infections by fulminant and slow-growing organisms. A close follow-up is therefore essential in these cases to prevent sight-threatening complications.

  8. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †

    PubMed Central

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-01

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434

  9. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle.

    PubMed

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-10

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy.
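
The range-image output of a time-of-flight sensor like the SPAD LIDAR rests on a simple relation: range = c·t/2, halving the out-and-back travel time. A back-of-the-envelope sketch (the timing value is an example, not a SPAD LIDAR specification):

```python
# Time-of-flight ranging principle: a detected photon's round-trip time maps
# to target distance via range = c * t / 2. Illustrative numbers only.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_s):
    """Target range in metres from a round-trip time in seconds."""
    return C * round_trip_s / 2

# a ~66.7 ns round trip corresponds to roughly a 10 m target
print(round(tof_range_m(66.7e-9), 2))
```

The picosecond-scale timing resolution this demands is why single-photon detectors such as SPAD arrays are paired with time-to-digital converters in practice.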

  10. Outcomes of Ahmed Glaucoma Valve Revision in Pediatric Glaucoma.

    PubMed

    Al-Omairi, Ahmed Mansour; Al Ameri, Aliah H; Al-Shahwan, Sami; Khan, Arif O; Al-Jadaan, Ibrahim; Mousa, Ahmed; Edward, Deepak P

    2017-11-01

    Encapsulation of the Ahmed glaucoma valve (AGV) plate is a common cause for postoperative elevation of intraocular pressure, especially in children. Many reports have described the outcomes of AGV revision in adults. However, the outcomes of AGV revision in children are poorly documented. The aim of this study was to determine the outcomes of AGV revision in children. Retrospective cross-sectional study. A retrospective chart review of patients less than 15 years of age who underwent AGV revision with a minimum postoperative follow-up of 6 months was conducted. Outcome measures included reduction in intraocular pressure from baseline, survival analysis, and reduction in the number of antiglaucoma medications. Postoperative complications were also noted. Complete success was defined as an IOP of 21 mm Hg or less without medications, while qualified success was defined as having an IOP of 21 mm Hg or less with medications. A total of 44 eyes met the inclusion criteria. Primary congenital glaucoma was present in 39 eyes (88.6%), aphakic glaucoma in 4 eyes (9.1%), and Peters anomaly-associated glaucoma in 1 eye (2.3%). The mean number of previous surgeries was 1.4, and the mean age was 6.7 years (range, 1.9-13 years) with a median follow-up of 12 months (range, 6-24 months). The IOP was reduced from a preoperative mean of 30.4 (± 10.3) to 24.9 (± 10.6) mm Hg at 6 months postoperatively. Kaplan-Meier analysis showed that the complete success rate at 1 month was 100% followed by a rapid decline at 6 months to 38.6%, 27.7% at 1 year, and 5.5% at 2 years. The qualified success rate was 100% at 1 month followed by a 6-month and 1-year survival rate of approximately 50% and a 2-year survival rate of approximately 16%. The median survival time was 14 months. No specific risk factors for failure were identified. Visual acuity remained unchanged following revision. The most common complication was recurrence of encapsulation with elevated IOP (15.9%). Other complications included hyphema (n = 3; 6.8%), endophthalmitis (n = 1; 2.3%), wound leak (n = 1; 2.3%), and choroidal detachment (n = 2; 4.5%). Although the short-term success rate of AGV revision in children is high, with longer follow-up the success rate decreases significantly.
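
The survival figures above come from Kaplan-Meier analysis, whose product-limit estimator is compact enough to sketch. The event times and censoring flags below are invented for illustration, not the study data:

```python
# Minimal Kaplan-Meier product-limit estimator: at each failure time, multiply
# the running survival by (at_risk - failures) / at_risk; censored subjects
# leave the risk set without a failure. Data here are made up.

def kaplan_meier(times, events):
    """Return (time, survival) pairs; events[i] True means failure at times[i]."""
    data = sorted(zip(times, events))
    n = len(data)
    s, curve, at_risk, i = 1.0, [], n, 0
    while i < n:
        t = data[i][0]
        d = sum(1 for tt, e in data[i:] if tt == t and e)  # failures at t
        m = sum(1 for tt, _ in data[i:] if tt == t)        # all leaving at t
        if d:
            s *= (at_risk - d) / at_risk
            curve.append((t, s))
        at_risk -= m
        i += m
    return curve

# months to failure (True) or censoring (False) for 8 hypothetical eyes
times = [1, 6, 6, 12, 12, 18, 24, 24]
events = [False, True, True, True, False, True, False, False]
print(kaplan_meier(times, events))
```

Libraries such as `lifelines` add confidence bands and log-rank comparisons on top of exactly this estimate.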

  11. Robust H∞ output-feedback control for path following of autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Hu, Chuan; Jing, Hui; Wang, Rongrong; Yan, Fengjun; Chadli, Mohammed

    2016-03-01

    This paper presents a robust H∞ output-feedback control strategy for the path following of autonomous ground vehicles (AGVs). Considering that the vehicle lateral velocity is usually hard to measure with low-cost sensors, a robust H∞ static output-feedback controller based on a mixed genetic algorithm (GA)/linear matrix inequality (LMI) approach is proposed to realize path following without information about the lateral velocity. The proposed controller is robust to parametric uncertainties and external disturbances, the parameters including tire cornering stiffness, vehicle longitudinal velocity, yaw rate, and road curvature. Simulation results on a CarSim-Simulink joint platform using a high-fidelity full-car model verify the effectiveness of the proposed control approach.
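
A GA/LMI synthesis like the one above repeatedly evaluates candidate output-feedback gains against a worst-case (H∞) performance measure. As a toy stand-in for that fitness evaluation (not the paper's method: the first-order plant, gain values, and frequency grid are all assumptions), the closed-loop H∞ norm of a scalar system can be estimated by a frequency sweep:

```python
# Toy sketch: estimate the H-infinity norm (peak frequency-domain gain) of
# the closed loop formed by plant G(s) = 1/(s + a) under static output
# feedback u = -k*y, i.e. transfer 1/(s + a + k). Illustrative only.

import math

def hinf_norm(a, k, n=2000):
    """Peak of |1/(jw + a + k)| over a log-spaced grid of frequencies w."""
    peak = 0.0
    for i in range(n):
        w = 10 ** (-3 + 6 * i / (n - 1))   # 1e-3 .. 1e3 rad/s
        mag = 1 / math.hypot(a + k, w)     # magnitude of 1/(jw + a + k)
        peak = max(peak, mag)
    return peak

# higher feedback gain lowers the worst-case (H-infinity) gain
print(round(hinf_norm(1.0, 0.0), 3))   # ~1.0
print(round(hinf_norm(1.0, 4.0), 3))   # ~0.2
```

In the actual GA/LMI setting the candidate gain is checked via an LMI feasibility problem over the full vehicle model rather than a frequency sweep, but the optimization target is the same worst-case gain.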

  12. WETTING AND REACTIVE AIR BRAZING OF BSCF FOR OXYGEN SEPARATION DEVICES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaDouceur, Richard M.; Meier, Alan; Joshi, Vineet V.

    Reactive air brazes Ag-CuO and Ag-V2O5 were evaluated for brazing Ba0.5Sr0.5Co0.8Fe0.2O(3-δ) (BSCF). Previous work determined BSCF to be the most promising mixed ionic/electronic conducting (MIEC) ceramic material based on the design and oxygen flux requirements of an oxy-fuel plant such as an integrated gasification combined cycle (IGCC) used to facilitate high-efficiency carbon capture. Apparent contact angles were observed for Ag-CuO and Ag-V2O5 mixtures at 1000 °C for isothermal hold times of 0, 10, 30, and 60 minutes. Wetting apparent contact angles (θ<90°) were obtained for 1%, 2%, and 5% Ag-CuO and Ag-V2O5 mixtures, with the apparent contact angles between 74° and 78° for all compositions and furnace dwell times. Preliminary microstructural analysis indicates that two different interfacial reactions are occurring: Ag-CuO interfacial microstructures revealed dissolution of copper oxide into the BSCF matrix to form copper-cobalt-oxygen-rich dissolution products along the BSCF grain boundaries, and Ag-V2O5 interfacial microstructures revealed the infiltration and replacement of cobalt and iron with vanadium and silver filling pores in the BSCF microstructure. The Ag-V2O5 interfacial reaction product layer was measured to be significantly thinner than the Ag-CuO reaction product layer. Using a fully articulated four-point flexural bend test fixture, the flexural fracture strength for BSCF was determined to be 95 ± 33 MPa. The fracture strength will be used to ascertain the success of the reactive air braze alloys. Based on these results, brazes were fabricated and mechanically tested to begin to optimize the brazing parameters for this system. Ag-2.5% CuO braze alloy with a 2.5-minute thermal cycle achieved a hermetic seal with a joint flexural strength of 34 ± 15 MPa, and Ag-1% V2O5 with a 30-minute thermal cycle had a joint flexural strength of 20 ± 15 MPa.
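
The MPa figures above come from four-point bend testing, where the standard flexural strength formula is σ = 3F(L - Li)/(2bd²) for outer span L, inner span Li, specimen width b, and thickness d. A quick sketch (the specimen geometry and failure load below are assumptions, not the study's test parameters):

```python
# Four-point-bend flexural strength: sigma = 3*F*(L - Li) / (2*b*d^2).
# With force in N and lengths in mm, the result is directly in MPa.
# The bar dimensions and failure load are hypothetical.

def flexural_strength_mpa(force_n, outer_mm, inner_mm, width_mm, thick_mm):
    return 3 * force_n * (outer_mm - inner_mm) / (2 * width_mm * thick_mm**2)

# hypothetical bar: 40/20 mm spans, 4 mm wide, 3 mm thick, failing at 230 N
print(round(flexural_strength_mpa(230, 40, 20, 4, 3), 1))   # ~191.7 MPa
```

Reporting a mean ± spread over many bars, as the abstract does, reflects the large scatter typical of brittle-ceramic fracture strengths.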

  13. Ahmed Glaucoma Valve Implantation in Vitrectomized Eyes.

    PubMed

    Erçalık, Nimet Yeşim; İmamoğlu, Serhat

    2018-01-01

    To evaluate the outcomes of Ahmed glaucoma valve (AGV) implantation in vitrectomized eyes. The medical records of 13 eyes that developed glaucoma due to emulsified silicone oil or neovascularization following pars plana vitrectomy and underwent AGV implantation were retrospectively reviewed. The main outcome measures were intraocular pressure (IOP), best-corrected visual acuity (BCVA), number of antiglaucoma medications, and postoperative complications. Surgical success was defined as last IOP ≤21 mmHg and ≥6 mmHg and without loss of light perception. The mean follow-up duration was 11.7 ± 5.5 (range, 6-23) months. The mean IOP before the AGV implantation was 37.9 ± 6.7 mmHg with an average of 3.5 ± 1.2 drugs. At the final visit, the mean IOP was 15.9 ± 4.6 mmHg (p=0.001) and the mean number of glaucoma medications decreased to 2.3 ± 1.3 (p=0.021). At the last visit, 11 eyes (84.4%) had stable or improved VA and one eye (7.7%) had a final VA of no light perception. Surgical success was achieved in 11 of the 13 eyes (84.4%). Postoperative complications were bleb encapsulation (69.2%), early hypotony (38.5%), hyphema (23.1%), decompression retinopathy (23.1%), choroidal detachment (15.4%), intraocular hemorrhage (7.7%), and late endophthalmitis (7.7%). One eye (7.7%) was enucleated because of late endophthalmitis. Despite complications necessitating medical and surgical interventions, vitrectomized eyes were effectively managed with AGV implantation.

  14. Clinical outcomes of trabeculectomy vs. Ahmed glaucoma valve implantation in patients with penetrating keratoplasty: (Trabeculectomy vs. Ahmed glaucoma valve in patients with penetrating keratoplasty).

    PubMed

    Akdemir, Mehmet Orcun; Acar, Banu Torun; Kokturk, Furuzan; Acar, Suphi

    2016-08-01

    The aim of this study was to compare the visual outcomes, intraocular pressure (IOP), and endothelial cell loss caused by trabeculectomy (TRAB) and Ahmed glaucoma valve (AGV) implantation in patients who had previously undergone penetrating keratoplasty (PKP). The data from all patients who underwent surgical treatment of glaucoma after PKP at the Cornea Department of Haydarpasa Numune Education and Research Hospital were reviewed. Eighteen patients who had undergone surgical treatment of glaucoma after PKP were included in this retrospective study. Time between PKP and glaucoma surgeries, visual acuity results, IOP results, and endothelial cell counts (ECC) before surgery and at the 1st, 6th, and 12th months after surgery were recorded. Differences between the two groups were evaluated. Mean loss of ECC at the 12th month after glaucoma surgery was 315 cells/mm(2) in the AGV group and 197 cells/mm(2) in the TRAB group. The difference in endothelial cell loss at the 12th month was statistically significant, with greater loss in the AGV group (p < 0.001). The decrease in IOP at the 12th month was 64.2 % in the AGV group and 46.9 % in the TRAB group. Both differences between the two groups were statistically significant (p = 0.001, 0.001). TRAB successfully decreased the IOP with less endothelial cell loss in patients with post-PKP glaucoma. The Ahmed glaucoma valve achieved significantly greater IOP lowering but also greater endothelial cell loss.

  15. Long-term clinical outcomes of Ahmed valve implantation in patients with refractory glaucoma.

    PubMed

    Lee, Chang Kyu; Ma, Kyoung Tak; Hong, Young Jae; Kim, Chan Yun

    2017-01-01

    To evaluate the long-term efficacy of intraocular pressure (IOP) reduction and complications of Ahmed Glaucoma Valve (AGV) implantation in patients with refractory glaucoma. Retrospective study. The study involved 302 refractory glaucoma patients who underwent AGV implantation and had a minimum follow-up of 6 months between March 1995 and December 2013. An operation was defined as successful when (1) the postoperative IOP remained between 5 and 21 mmHg and was reduced by 30% compared to the baseline IOP with or without medication, (2) there was no loss of light perception or vision-threatening severe complications, and (3) no additional filtering or aqueous drainage surgery was required. Clinical records were reviewed. IOP, anti-glaucoma medications, and complications. The mean follow-up period was 62.25 months (range, 6 to 190 months). The cumulative probability of success was 89% at 6 months, 81% at 1 year, 66% at 3 years, 44% at 10 years, and 26% at 15 years. IOP was reduced from a mean of 32.2 ± 10.5 mmHg to 18.6 ± 9.1 mmHg at 1 month, 15.2 ± 7.0 mmHg at 6 months, and 14.2 ± 3.5 mmHg at 15 years. Surgical failures were significantly increased when preoperative IOP was high, and when severe complications occurred after AGV implantation (P < 0.05). AGV implantation was successful for IOP control in patients with refractory glaucoma in the long term. However, the success rate of surgery decreased over time. Preoperative high IOP and severe complications related to the operation were significant risk factors for failure.

  16. Histopathologic and immunohistochemical features of capsular tissue around failed Ahmed glaucoma valves.

    PubMed

    Mahale, Alka; Fikri, Fatma; Al Hati, Khitam; Al Shahwan, Sami; Al Jadaan, Ibrahim; Al Katan, Hind; Khandekar, Rajiv; Maktabi, Azza; Edward, Deepak P

    2017-01-01

    Impervious encapsulation around an Ahmed glaucoma valve (AGV) results in surgical failure, raising intraocular pressure (IOP). Dysregulation of extracellular matrix (ECM) molecules and cellular factors might contribute to increased hydraulic resistance to aqueous drainage. Therefore, we examined these molecules in failed AGV capsular tissue. Immunostaining for ECM molecules (collagen I, collagen III, decorin, lumican, chondroitin sulfate, aggrecan and keratan sulfate) and cellular factors (αSMA and TGFβ) was performed on excised capsules from failed AGVs and control Tenon's tissue. Staining intensity of ECM molecules was assessed using Image J. Cellular factors were assessed based on positive cell counts. Histopathologically, two distinct layers were visible in capsules. The inner layer (proximal to the AGV) showed a significant decrease in most ECM molecules compared to the outer layer. Furthermore, collagen III (p = 0.004), decorin (p = 0.02), lumican (p = 0.01) and chondroitin sulfate (p = 0.02) were significantly lower in the inner layer compared to Tenon's tissue. Outer layer labelling, however, was similar to control Tenon's tissue for most ECM molecules. Significantly increased cellular expression of αSMA (p = 0.02) and TGFβ (p = 0.008) was detected within capsular tissue compared to controls. Our results suggest profibrotic activity indicated by increased αSMA and TGFβ expression and decreased expression of proteoglycans (decorin and lumican) and glycosaminoglycans (chondroitin sulfate). Additionally, we observed decreased collagen III, which might reflect increased myofibroblast contractility when coupled with increased TGFβ and αSMA expression. Together, these events lead to tissue dysfunction potentially resulting in hydraulic resistance that may affect aqueous flow through the capsular wall.

  17. Tenon advancement and duplication technique to prevent postoperative Ahmed valve tube exposure in patients with refractory glaucoma.

    PubMed

    Tamcelik, Nevbahar; Ozkok, Ahmet; Sarıcı, Ahmet Murat; Atalay, Eray; Yetik, Huseyin; Gungor, Kivanc

    2013-07-01

    To present and compare the long-term results of Dr. Tamcelik's previously described technique of Tenon advancement and duplication with the conventional Ahmed glaucoma valve (AGV) implantation technique in patients with refractory glaucoma. This study was a multicenter, retrospective case series that included 303 eyes of 276 patients with refractory glaucoma who underwent glaucoma valve implantation surgery. The patients were divided into three groups according to the surgical technique applied, and the outcomes were compared. In group 1, 96 eyes of 86 patients underwent AGV implant surgery without patch graft; in group 2, 78 eyes of 72 patients underwent AGV implant surgery with donor scleral patch; in group 3, 129 eyes of 118 patients underwent Ahmed valve implant surgery with "combined short scleral tunnel with Tenon advancement and duplication technique". The endpoint assessed was tube exposure through the conjunctiva. In group 1, conjunctival tube exposure was seen in 11 eyes (12.9 %) after a mean 9.2 ± 3.7 years of follow-up. In group 2, conjunctival tube exposure was seen in six eyes (2.2 %) after a mean 8.9 ± 3.3 years of follow-up. In group 3, there was no conjunctival exposure after a mean 7.8 ± 2.8 years of follow-up. The difference between the groups was statistically significant (P = 0.0001, Chi-square test). This novel surgical technique combining a short scleral tunnel with Tenon advancement and duplication was found to be effective and safe in preventing conjunctival tube exposure after AGV implantation surgery in patients with refractory glaucoma.
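
The three-group exposure comparison above is a chi-square test on a 3x2 contingency table. A stdlib sketch using the reported counts (11/96, 6/78, 0/129); comparing the statistic against the df=2 critical value stands in for the exact p-value:

```python
# Pearson chi-square statistic for a contingency table given as a list of
# rows, applied to the tube-exposure counts quoted above. For df = 2 the
# 5% critical value is 5.99, so a larger statistic indicates a difference.

def chi_square(table):
    """Pearson chi-square statistic: sum of (obs - exp)^2 / exp over cells."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

table = [[11, 85], [6, 72], [0, 129]]   # [exposed, not exposed] per group
stat = chi_square(table)
print(round(stat, 2), "vs 5.99 critical value at alpha = 0.05, df = 2")
```

With a zero cell and small expected counts, an exact test would be preferable in practice; the sketch only illustrates the structure of the reported Chi-square comparison.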

  18. Ahmed Glaucoma Valve Implantation in Vitrectomized Eyes

    PubMed Central

    İmamoğlu, Serhat

    2018-01-01

    Purpose To evaluate the outcomes of Ahmed glaucoma valve (AGV) implantation in vitrectomized eyes. Materials and Methods The medical records of 13 eyes that developed glaucoma due to emulsified silicone oil or neovascularization following pars plana vitrectomy and underwent AGV implantation were retrospectively reviewed. The main outcome measures were intraocular pressure (IOP), best-corrected visual acuity (BCVA), number of antiglaucoma medications, and postoperative complications. Surgical success was defined as a final IOP between 6 and 21 mmHg without loss of light perception. Results The mean follow-up duration was 11.7 ± 5.5 (range, 6–23) months. The mean IOP before AGV implantation was 37.9 ± 6.7 mmHg with an average of 3.5 ± 1.2 drugs. At the final visit, the mean IOP was 15.9 ± 4.6 mmHg (p=0.001) and the mean number of glaucoma medications had decreased to 2.3 ± 1.3 (p=0.021). At the last visit, 11 eyes (84.4%) had stable or improved VA and one eye (7.7%) had a final VA of no light perception. Surgical success was achieved in 11 of the 13 eyes (84.4%). Postoperative complications were bleb encapsulation (69.2%), early hypotony (38.5%), hyphema (23.1%), decompression retinopathy (23.1%), choroidal detachment (15.4%), intraocular hemorrhage (7.7%), and late endophthalmitis (7.7%). One eye (7.7%) was enucleated because of late endophthalmitis. Conclusions Despite complications necessitating medical and surgical interventions, vitrectomized eyes were effectively managed with AGV implantation. PMID:29862068

  19. Short-term to Long-term Results of Ahmed Glaucoma Valve Implantation for Uveitic Glaucoma Secondary to Behçet Disease.

    PubMed

    Yakin, Mehmet; Eksioglu, Umit; Sungur, Gulten; Satana, Banu; Demirok, Gulizar; Ornek, Firdevs

    2017-01-01

    To evaluate short-term to long-term outcomes of Ahmed glaucoma valve (AGV) implantation in the management of uveitic glaucoma (UG) secondary to Behçet disease (BD). A retrospective chart review of 47 eyes of 35 patients with UG secondary to BD who underwent AGV implantation was conducted. Success was defined as having an intraocular pressure (IOP) between 6 and 21 mm Hg with (qualified success) or without (complete success) antiglaucomatous medications and without need for further glaucoma surgery. Mean postoperative follow-up was 57.72±26.13 months. Mean preoperative IOP was 35.40±8.33 mm Hg versus 12.28±2.90 mm Hg at the last follow-up visit (P<0.001). Mean number of preoperative topical antiglaucomatous medications was 2.96±0.29 versus 0.68±1.12 at the last follow-up visit (P<0.001). In all eyes, IOP could be maintained between 6 and 21 mm Hg with or without antiglaucomatous medications during follow-up. The cumulative probability of complete success was 46.8% at 6 months, 40.4% at 12 months, and 35.9% at 36 months, and the cumulative probability of eyes without complication was 53.2% at 6 months, 46.5% at 12 months, and 39.6% at 24 months postoperatively based on Kaplan-Meier survival analysis. No persistent or irreparable complications were observed. This study includes one of the largest series of AGV implantation in the management of UG with the longest follow-up reported. AGV implantation can be considered as a primary surgical option in the management of UG secondary to BD with 100% total success rate (with or without medications).
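The cumulative success probabilities quoted above come from Kaplan-Meier survival analysis. As a minimal sketch of how such estimates are computed (the follow-up data below are invented for illustration and are not from this study):

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier product-limit estimator.
    durations: follow-up time per eye; observed: 1 = failure event, 0 = censored.
    Returns a list of (time, cumulative survival probability) at each event time."""
    pairs = sorted(zip(durations, observed))
    n_at_risk = len(pairs)
    survival = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        # Count failures and all subjects (failed or censored) tied at time t;
        # censored subjects at t are still counted as at risk at t.
        events = sum(1 for d, e in pairs if d == t and e == 1)
        ties = sum(1 for d, e in pairs if d == t)
        if events:
            survival *= 1 - events / n_at_risk
            curve.append((t, survival))
        n_at_risk -= ties
        i += ties
    return curve

# Five hypothetical eyes: failures at 6 and 12 months, the rest censored.
print(kaplan_meier([6, 6, 12, 12, 24], [1, 0, 1, 0, 0]))
```

Reported figures such as 46.8% complete success at 6 months would be read off such a curve at the corresponding time points.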

  20. Combined Ahmed Glaucoma Valve Placement, Intravitreal Fluocinolone Acetonide Implantation and Cataract Extraction for Chronic Uveitis.

    PubMed

    Chang, Ingrid T; Gupta, Divakar; Slabaugh, Mark A; Vemulakonda, Gurunadh A; Chen, Philip P

    2016-10-01

    To report the outcomes of a combined Ahmed glaucoma valve (AGV) placement, intravitreal fluocinolone acetonide implant, and cataract extraction procedure in the treatment of chronic noninfectious uveitis. Retrospective case series of patients with chronic noninfectious uveitis who underwent AGV placement, intravitreal fluocinolone acetonide implantation, and cataract extraction in a single surgical session performed at one institution from January 2009 to November 2014. Outcome measures included intraocular pressure (IOP) and glaucoma medication use. Secondary outcome measures included visual acuity, systemic anti-inflammatory medications, number of uveitis flares, and complications. Fifteen eyes of 10 patients were studied, with a mean age of 40.3±15.7 years and mean follow-up duration of 26 months (range, 13 to 39 mo). Before surgery, the IOP was 18.5±7.3 mm Hg and patients were using 1.5±1.5 topical glaucoma medications. At the 12-month follow-up, IOP was 12.8±3.2 mm Hg (P=0.01) and patients were using 0.5±0.8 (P=0.03) topical glaucoma medications. At 36 months of follow-up, late, nonsustained hypotony had occurred in 3 eyes (20%), and 1 eye (6%) had received a second AGV for IOP control. Patients had 2.7±1.5 uveitis flares in the year before surgery while on an average of 2.1±0.6 systemic anti-inflammatory medications, which decreased to an average of 0.1±0.3 (P<0.01) flares in the year after surgery while on an average of 0.4±1.1 (P<0.01) systemic medications. Combined AGV placement, intravitreal fluocinolone acetonide implantation, and cataract extraction is effective in controlling IOP and reducing the number of glaucoma medications at 12 months after treatment in patients with chronic uveitis.

  1. Outcomes of Ahmed Valve Implant Following a Failed Initial Trabeculotomy and Trabeculectomy in Refractory Primary Congenital Glaucoma

    PubMed Central

    Dave, Paaraj; Senthil, Sirisha; Choudhari, Nikhil; Sekhar, Garudadri Chandra

    2015-01-01

    Purpose: The aim was to report the outcome of Ahmed glaucoma valve (AGV) (New World Medical, Inc., Rancho Cucamonga, CA, USA) implantation as a surgical intervention following an initial failed combined trabeculotomy + trabeculectomy (trab + trab) in refractory primary congenital glaucoma (RPCG). Materials and Methods: Retrospective chart review of 11 eyes of 8 patients who underwent implantation of AGV (model FP8) for RPCG between 2009 and 2011. Prior trab + trab had failed in all the eyes. Success was defined as an intraocular pressure (IOP) >5 and ≤18 mmHg during examination under anesthesia, with or without medications, and without serious complications or additional glaucoma surgery. Results: The mean age at AGV implantation was 15.4 ± 4.9 months. The mean preoperative IOP was 28 ± 5.7 mmHg, which reduced to 13.6 ± 3.4 mmHg postoperatively at the last follow-up (P < 0.0001). The number of topical antiglaucoma medications reduced from a mean of 2.6 ± 0.5 to 1.6 ± 0.9 postoperatively (P = 0.009). The definition of qualified success was met in 10 (90%) eyes. One eye developed a shallow anterior chamber with choroidal detachment at 1 week, which resolved spontaneously with medications. None of the eyes developed a hypertensive phase. One eye had a long tube resulting in tube-corneal touch that required trimming of the tube. One eye developed tube retraction, which was treated with a tube extender. The mean follow-up was 17.9 ± 9.3 (6.2-35.4) months. Conclusion: Managing RPCG remains a challenge. AGV implantation was successful in a significant proportion of cases. PMID:25624676

  2. Clinical Outcomes of FP-7/8 Ahmed Glaucoma Valves in the Management of Refractory Glaucoma in the Mainland Chinese Population

    PubMed Central

    Yang, Xuejiao; Deng, Shuifeng; Li, Zuohong; Li, Fei; Zhuo, Yehong

    2015-01-01

    Background To evaluate the efficacy and safety of the Ahmed glaucoma valve (AGV) and the risk factors associated with AGV implantation failure in a population of Chinese patients with refractory glaucoma. Method In total, 79 eyes with refractory glaucoma from 79 patients treated in our institution from November 2007 to November 2010 were enrolled in this retrospective study. The demographic data, preoperative and postoperative intraocular pressures (IOPs), best corrected visual acuity (BCVA), number of anti-glaucoma medications used, complete and qualified surgical success rates, and postoperative complications were recorded to evaluate the outcomes of AGV implantation. Factors associated with implant failure were determined using Cox proportional hazards regression model analysis and multiple linear regression analysis. Principal Findings The average follow-up time was 12.7±5.8 months (mean±SD). We observed a significant reduction in the mean IOP from 39.9±12.6 mm Hg before surgery to 19.3±9.6 mm Hg at the final follow-up. The complete success rate was 59.5%, and the qualified success rate was 83.5%. The number of previous surgeries was negatively correlated with the qualified success rate (P<0.05, OR=0.736, 95% CI 0.547-0.99). Patients with previous trabeculectomy were more likely to need multiple anti-glaucoma drugs to control IOP (P<0.01). The primary complication was a flat anterior chamber (AC). Conclusion AGV implantation was safe and effective for the management of refractory glaucoma. Patients with a greater number of previous surgeries were more likely to experience surgical failure, and patients with previous trabeculectomy were more likely to need multiple anti-glaucoma drugs to control postoperative IOP. PMID:25996991

  3. Needle Revision With 5-fluorouracil for the Treatment of Ahmed Glaucoma Valve Filtering Blebs: 5-Fluorouracil Needling Revision can be a Useful and Safe Tool in the Management of Failing Ahmed Glaucoma Valve Filtering Blebs.

    PubMed

    Quaranta, Luciano; Floriani, Irene; Hollander, Lital; Poli, Davide; Katsanos, Andreas; Konstas, Anastasios G P

    2016-04-01

    To determine the outcome of needling with adjunctive 5-fluorouracil (5-FU) in patients with a failing Ahmed glaucoma valve (AGV) implant, and to identify predictors of long-term intraocular pressure (IOP) control. A prospective observational study was performed on consecutive patients with medically uncontrolled primary open-angle glaucoma (POAG) with AGV encapsulation or fibrosis and inadequate IOP control. Bleb needling with 5-FU injection (0.1 mL of 50 mg/mL) was performed at the slit-lamp. Patients were examined 1 week following the needling, and then at months 1, 3, and 6. Subsequent follow-up visits were scheduled at 6-month intervals for at least 2 years. Needling with 5-FU was repeated no more than twice during the first 3 months of follow-up. Procedure outcome was determined on the basis of the recorded IOP levels. Thirty-six patients with an encapsulated or fibrotic AGV underwent 67 procedures (mean 1.86 ± 0.83). Complete success, defined as IOP ≤ 18 mm Hg without medications, was obtained in 25% at 24 months of observation. The cumulative proportion of cases achieving either qualified (ie, IOP ≤ 18 mm Hg with medications) or complete success at 24 months of observation was 72.2%. In a univariate Cox proportional hazards model, age was the only variable that independently influenced the risk of failing 5-FU needling revision. Fourteen eyes (38.8%) had a documented complication. Needling over the plate of an AGV supplemented with 5-FU is an effective and safe choice in a significant proportion of POAG patients with elevated IOP due to encapsulation or fibrosis.

  4. Long-term clinical outcomes of Ahmed valve implantation in patients with refractory glaucoma

    PubMed Central

    Lee, Chang Kyu; Ma, Kyoung Tak; Hong, Young Jae

    2017-01-01

    Purpose To evaluate the long-term efficacy of intraocular pressure (IOP) reduction and complications of Ahmed Glaucoma Valve (AGV) implantation in patients with refractory glaucoma. Design Retrospective study. Subjects The study involved 302 refractory glaucoma patients who underwent AGV implantation and had a minimum follow-up of 6 months between March 1995 and December 2013. Methods An operation was defined as successful when (1) the postoperative IOP remained between 5 and 21 mmHg and was reduced 30% compared to the baseline IOP with or without medication, (2) there was no loss of light perception or vision-threatening severe complications, and (3) no additional filtering or aqueous drainage surgery was required. Clinical records were reviewed. Main outcome measures IOP, anti-glaucoma medications, and complications Results The mean follow-up period was 62.25 months (range, 6 to 190 months). The cumulative probability of success was 89% at 6 months, 81% at 1 year, 66% at 3 years, 44% at 10 years, and 26% at 15 years. IOP was reduced from a mean of 32.2 ± 10.5 mmHg to 18.6 ± 9.1 mmHg at 1 month, 15.2 ± 7.0 mmHg at 6 months, and 14.2 ± 3.5 mmHg at 15 years. Surgical failures were significantly increased when preoperative IOP was high, and when severe complications occurred after AGV implantation (P < 0.05). Conclusion AGV implantation was successful for IOP control in patients with refractory glaucoma in the long term. However, the success rate of surgery decreased over time. Preoperative high IOP and severe complications related to the operation were significant risk factors for failure. PMID:29095931

  5. Ahmed Glaucoma Valve Implantation for Uveitic Glaucoma Secondary to Behçet Disease.

    PubMed

    Satana, Banu; Yalvac, Ilgaz S; Sungur, Gulten; Eksioglu, Umit; Basarir, Berna; Altan, Cigdem; Duman, Sunay

    2015-01-01

    To evaluate outcomes of patients with uveitic glaucoma secondary to Behçet disease (BD) who underwent Ahmed glaucoma valve (AGV) implantation. A retrospective chart review of 14 eyes of 10 patients with uveitic glaucoma associated with BD who underwent AGV implantation at a tertiary referral center. Treatment success was defined as intraocular pressure (IOP) between 6 and 21 mm Hg with or without antiglaucoma medication, without further glaucoma surgery or loss of light perception. The main outcome measures were IOP, best-corrected visual acuity measured with Snellen charts, and number of glaucoma medications. Mean duration of postoperative follow-up was 18.2±6.6 months (range, 6 to 31 mo). Of the 14 eyes, 10 (71.4%) were pseudophakic and 5 (35.7%) had primary AGV implantation without a history of previous glaucoma surgery. At the most recent follow-up visit, 13 of the 14 eyes had an IOP between 6 and 21 mm Hg. Mean IOP was significantly reduced during follow-up compared with preoperative values (P≤0.005). The cumulative probability of surgical success was 90.9% at 18 months based on Kaplan-Meier survival analysis. The mean number of antiglaucoma medications required to achieve the desired IOP decreased from 3.4±0.5 preoperatively to 1.0±1.1 postoperatively (P≤0.05). Visual acuity loss of >2 lines occurred in 4 eyes (28.5%) due to optic atrophy associated with retinal vasculitis. Temporary hypotony developed in 4 eyes (28.5%) during the first postoperative week. For the management of uveitic glaucoma associated with BD, AGV implantation is a successful method of glaucoma control, although the high rate of early hypotony may require additional surgical intervention.

  6. Clinical Outcomes of FP-7/8 Ahmed Glaucoma Valves in the Management of Refractory Glaucoma in the Mainland Chinese Population.

    PubMed

    Zhu, Yingting; Wei, Yantao; Yang, Xuejiao; Deng, Shuifeng; Li, Zuohong; Li, Fei; Zhuo, Yehong

    2015-01-01

    To evaluate the efficacy and safety of the Ahmed glaucoma valve (AGV) and the risk factors associated with AGV implantation failure in a population of Chinese patients with refractory glaucoma. In total, 79 eyes with refractory glaucoma from 79 patients treated in our institution from November 2007 to November 2010 were enrolled in this retrospective study. The demographic data, preoperative and postoperative intraocular pressures (IOPs), best corrected visual acuity (BCVA), number of anti-glaucoma medications used, complete and qualified surgical success rates, and postoperative complications were recorded to evaluate the outcomes of AGV implantation. Factors associated with implant failure were determined using Cox proportional hazards regression model analysis and multiple linear regression analysis. The average follow-up time was 12.7±5.8 months (mean±SD). We observed a significant reduction in the mean IOP from 39.9±12.6 mm Hg before surgery to 19.3±9.6 mm Hg at the final follow-up. The complete success rate was 59.5%, and the qualified success rate was 83.5%. The number of previous surgeries was negatively correlated with the qualified success rate (P<0.05, OR=0.736, 95% CI 0.547-0.99). Patients with previous trabeculectomy were more likely to need multiple anti-glaucoma drugs to control IOP (P<0.01). The primary complication was a flat anterior chamber (AC). AGV implantation was safe and effective for the management of refractory glaucoma. Patients with a greater number of previous surgeries were more likely to experience surgical failure, and patients with previous trabeculectomy were more likely to need multiple anti-glaucoma drugs to control postoperative IOP.

  7. Photoelectric scanning-based method for positioning omnidirectional automatic guided vehicle

    NASA Astrophysics Data System (ADS)

    Huang, Zhe; Yang, Linghui; Zhang, Yunzhi; Guo, Yin; Ren, Yongjie; Lin, Jiarui; Zhu, Jigui

    2016-03-01

    Automatic guided vehicles (AGVs), a kind of mobile robot, have been widely used in many applications. To better adapt to complex working environments, more and more AGVs are designed to be omnidirectional by being equipped with Mecanum wheels, which increase their flexibility and maneuverability. However, because this kind of wheel slips frequently, an AGV equipped with them suffers from position errors, so measuring its position accurately in real time is an extremely important issue. Among the ways of achieving this, the photoelectric scanning methodology based on angle measurement is efficient. Hence, we propose a feasible method to ameliorate the positioning process, which mainly integrates four photoelectric receivers and one laser transmitter. To verify its practicality and accuracy, actual experiments and computer simulations were conducted. In the simulation, the theoretical positioning error is less than 0.28 mm in a 10 m×10 m space. In the actual experiment, the stability, accuracy, and dynamic capability of the method were examined. The results demonstrate that the system works well and that the position-measurement performance is high enough for mainstream tasks.
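The angle-measurement principle behind such positioning can be illustrated with a small 2-D least-squares fix: each measured bearing toward a station at a known position constrains the vehicle to a line, and two or more bearings locate it. The station layout below is hypothetical, and this sketch ignores the receiver/transmitter geometry of the actual four-receiver system:

```python
import math

def position_from_bearings(beacons, bearings):
    """Least-squares 2-D position from absolute bearings (radians) measured
    from the vehicle toward stations at known coordinates. Each bearing t to
    a station (bx, by) gives the line constraint
        sin(t)*x - cos(t)*y = sin(t)*bx - cos(t)*by,
    and stacking these and solving the 2x2 normal equations yields (x, y)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (bx, by), t in zip(beacons, bearings):
        s, c = math.sin(t), math.cos(t)
        r = s * bx - c * by
        a11 += s * s
        a12 += -s * c
        a22 += c * c
        b1 += s * r
        b2 += -c * r
    det = a11 * a22 - a12 * a12  # nonzero unless all bearings are (anti)parallel
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Vehicle actually at (2, 3); bearings computed toward three hypothetical stations.
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
angles = [math.atan2(sy - 3.0, sx - 2.0) for sx, sy in stations]
print(position_from_bearings(stations, angles))
```

With noisy angle measurements, the same least-squares formulation averages the error across stations instead of intersecting two lines exactly.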

  8. Superior versus inferior Ahmed glaucoma valve implantation.

    PubMed

    Pakravan, Mohammad; Yazdani, Shahin; Shahabi, Camelia; Yaseri, Mehdi

    2009-02-01

    To compare the efficacy and safety of Ahmed glaucoma valve (AGV) (New World Medical Inc., Rancho Cucamonga, CA) implantation in the superior versus inferior quadrants. Prospective parallel cohort study. A total of 106 eyes of 106 patients with refractory glaucoma. Consecutive patients with refractory glaucoma underwent AGV implantation in the superior or inferior quadrants. Main outcome measures included intraocular pressure (IOP) and rate of complications. Other outcome measures included best corrected visual acuity (BCVA), number of glaucoma medications, and success rate (defined as at least 30% IOP reduction and 5 < IOP < 22 mmHg). After 1 year, statistically significant but comparable IOP reduction from baseline (P<0.001) was achieved in both groups (47.0%±27.2% and 43.0%±24.5% reduction for superior and inferior groups, respectively; P = 0.725). The mean number of glaucoma medications was comparable after 1 year (1.3±1.2 vs. 1.9±0.8 for superior and inferior implants, respectively; P = 0.256). Success rates were also similar at 1 year: 27 eyes (81.8%) versus 20 eyes (95.2%) for superior and inferior implants, respectively (P = 0.227). However, the overall rate of complications, such as implant exposure necessitating removal, cosmetically unappealing appearance, and endophthalmitis, was higher in the inferior group: 12 eyes (25%) for inferior versus 3 eyes (5.2%) for superior implants (P = 0.004). Superior and inferior AGV implants have similar intermediate-term efficacy in terms of IOP reduction, decrease in number of glaucoma medications, and preservation of vision. However, inferior implantation entails significantly more complications. It may be prudent to avoid AGV implantation in the inferior quadrants if the superior quadrants have no contraindication to surgery. Proprietary or commercial disclosure may be found after the references.

  9. Comparison of 1-year outcomes after Ahmed glaucoma valve implantation with and without Ologen adjuvant.

    PubMed

    Kim, Tai Jun; Kang, Sohyun; Jeoung, Jin Wook; Kim, Young Kook; Park, Ki Ho

    2018-02-14

    Many studies have investigated the clinical benefits of Ologen for trabeculectomy. However, its benefits for Ahmed glaucoma valve (AGV) implantation have not been investigated as extensively. The aim of this study was to compare the 1-year outcomes of AGV implantation with and without Ologen adjuvant for the treatment of refractory glaucoma. This retrospective study included a total of 20 eyes of 20 glaucoma patients, who were followed for at least 1 year after undergoing AGV implantation. In 12 eyes of 12 patients, conventional AGV (CAGV) surgery was performed, while in 8 eyes of 8 patients, Ologen-augmented AGV (OAGV) implantation was performed. The outcomes were evaluated according to intraocular pressure (IOP) and the number of IOP-lowering medications. Complete success was defined as IOP ≤ 21 mmHg without medications throughout the 1-year follow-up period, and qualified success as IOP ≤ 21 mmHg with or without medications over the same period. The rate of complete success was significantly higher in the OAGV group (50.0%) than in the CAGV group (8.3%) (p = 0.035). There were no significant differences between the two groups in terms of qualified success or incidence of the early hypertensive phase. IOP changes were similar between the groups within 1 year postoperatively, though the number of IOP-lowering medications was significantly lower in the OAGV group during the early hypertensive phase (p = 0.031, 0.031, and 0.025 at postoperative months 1, 2, and 3, respectively). When subjects were divided into groups according to the occurrence of the early hypertensive phase, the group with an early hypertensive phase was more likely to use IOP-lowering medications at postoperative 6 months and 1 year (p = 0.002 and 0.005, respectively). OAGV surgery shows encouraging results for patients with refractory glaucoma, specifically with respect to the achievement of complete success and the reduction of the number of IOP-lowering medications during the early hypertensive phase. Furthermore, our results suggest that occurrence of the early hypertensive phase is predictive of which patients will require IOP-lowering medications at postoperative 6 months and 1 year.

  10. Efficacy of long scleral tunnel technique in preventing Ahmed glaucoma valve tube exposure through conjunctiva.

    PubMed

    Kugu, Suleyman; Erdogan, Gurkan; Sevim, M Sahin; Ozerturk, Yusuf

    2015-01-01

    To evaluate the efficacy of the long scleral tunnel technique used in Ahmed glaucoma valve (AGV) implantation in preventing tube exposure through the conjunctiva. Adult patients who were unresponsive to maximum medical treatment and underwent AGV implantation were divided into two groups and investigated retrospectively. Group 1 consisted of 40 eyes of 38 patients that underwent surgery by the long scleral tunnel technique, and Group 2 consisted of 38 eyes of 35 patients that underwent implantation by the processed pericardium patch graft method. The mean age was 54.8 ± 14.6 years (range 26-68 years) and the mean follow-up duration was 46.7 ± 19.4 months (range 18-76 months) for the patients in Group 1, whereas the mean age was 58.6 ± 16.7 years (range 32-74 years) and the mean follow-up period was 43.6 ± 15.7 months (range 20-72 months) for the patients in Group 2 (p > 0.05). During follow-up, tube exposure was detected in one (2.5%) eye in Group 1 and in three (7.9%) eyes in Group 2 (p = 0.042). The long scleral tunnel technique is beneficial in preventing conjunctival tube exposure in AGV implantation surgery.

  11. Initial Experience With the New Ahmed Glaucoma Valve Model M4: Short-term Results.

    PubMed

    Cvintal, Victor; Moster, Marlene R; Shyu, Andrew P; McDermott, Katie; Ekici, Feyzahan; Pro, Michael J; Waisbourd, Michael

    2016-05-01

    To evaluate the clinical outcomes of the new Ahmed glaucoma valve (AGV) model M4. The device consists of a porous polyethylene shell designed for improved tissue integration and reduced encapsulation of the plate, for better intraocular pressure (IOP) control. Medical records of patients with an AGV M4 implantation between December 1, 2012 and December 31, 2013 were reviewed. The main outcome measure was surgical failure, defined as (1) IOP <5 mm Hg or >21 mm Hg and/or <20% reduction of IOP at the last follow-up visit, (2) a reoperation for glaucoma, and/or (3) loss of light perception. Seventy-five eyes of 73 patients were included. Postoperative IOP at all follow-up visits was significantly decreased from the baseline IOP of 31.2 mm Hg (P<0.01). However, IOP increased significantly at 3 months (20.4 mm Hg), 6 months (19.3 mm Hg), and 12 months (20.3 mm Hg) compared with 1 month (13.8 mm Hg) postoperatively (P<0.05). At 6 months and 1 year, the cumulative probability of failure was 32% and 72%, respectively. The AGV M4 effectively reduced IOP in the first postoperative month, but IOP steadily increased thereafter. Consequently, failure rates were high after 1 year of follow-up.
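The failure definition above maps directly onto a classification rule. A minimal sketch (the function and flag names are ours; criteria (2) and (3) are clinical events, so they are passed as flags rather than derived from pressures):

```python
def surgical_failure(iop_baseline, iop_last, reoperation=False,
                     lost_light_perception=False):
    """Failure per the stated criteria: (1) IOP < 5 or > 21 mm Hg and/or
    < 20% IOP reduction from baseline at the last visit, (2) reoperation
    for glaucoma, or (3) loss of light perception."""
    pressure_fail = (iop_last < 5 or iop_last > 21
                     or (iop_baseline - iop_last) / iop_baseline < 0.20)
    return pressure_fail or reoperation or lost_light_perception
```

For instance, an eye going from the 31.2 mm Hg baseline mean to 20.3 mm Hg has a roughly 35% reduction and does not meet criterion (1) on pressure alone.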

  12. Use of Autologous Scleral Graft in Ahmed Glaucoma Valve Surgery.

    PubMed

    Wolf, Alvit; Hod, Yair; Buckman, Gila; Stein, Nili; Geyer, Orna

    2016-04-01

    To compare the efficacy of an autoscleral free-flap graft versus an autoscleral rotational flap graft in Ahmed glaucoma valve (AGV) surgery. Medical records (2005 to 2012) of 51 consecutive patients (51 eyes) who underwent AGV surgery with the use of either an autoscleral free-flap graft or an autoscleral rotational flap graft to cover the external tube at the limbus were retrieved for review. The main outcome measure was the incidence of tube exposure associated with each surgical approach. Twenty-seven consecutive patients (27 eyes) received a free-flap graft and 24 consecutive patients (24 eyes) received a rotational flap graft. The mean follow-up time was 55.6 ± 18.3 months for the former and 24.2 ± 5.0 months for the latter (P<0.0001). Two patients in the free-flap group (8.9%) developed tube exposure at 24 and 55 months postoperatively compared with none of the patients in the rotational flap group. Graft thinning without evidence of conjunctival erosion was observed in 15 patients (55%) in the free-flap group and in 7 patients (29.1%) in the rotational flap group. The use of an autoscleral rotational flap graft is an efficacious technique for primary tube patch grafting in routine AGV surgery, and yielded better results than an autoscleral free-flap graft. Its main advantages over donor graft material are availability and lower cost.

  13. Excisional Bleb Revision for Management of Failed Ahmed Glaucoma Valve.

    PubMed

    Eslami, Yadollah; Fakhraie, Ghasem; Moghimi, Sasan; Zarei, Reza; Mohammadi, Masoud; Nabavi, Amin; Yaseri, Mehdi; Izadi, Ali

    2017-12-01

    To evaluate the outcome of excisional bleb revision in patients with a failed Ahmed glaucoma valve (AGV). In total, 29 patients with uncontrolled intraocular pressure (IOP) despite maximal tolerated medical therapy at least 6 months after AGV implantation were enrolled in this prospective interventional case series. Excision of fibrotic tissue around the reservoir with application of mitomycin C 0.02% was performed. IOP and number of glaucoma medications were evaluated at baseline and at 1 week and 1, 3, 6, and 12 months postoperatively. Complete and qualified success were defined as IOP≤21 mm Hg without or with glaucoma medications, respectively. Intraoperative and postoperative complications were also recorded. Mean IOP was reduced from 30±4.2 mm Hg at baseline to 19.2±3.1 mm Hg at the 12-month follow-up visit (P<0.001). The average number of glaucoma medications decreased from 3.2±0.5 at baseline to 1.9±0.7 at 12-month follow-up (P<0.001). Qualified and complete success rates at 12-month follow-up were 65.5% and 6.9%, respectively. Younger age and a higher number of previous glaucoma surgeries were significantly associated with failure of excisional bleb revision. Excisional bleb revision can be considered a relatively effective alternative for management of inadequate IOP control after AGV implantation.

  14. A new algorithm for reliable and general NMR resonance assignment.

    PubMed

    Schmidt, Elena; Güntert, Peter

    2012-08-01

    The new FLYA automated resonance assignment algorithm determines NMR chemical shift assignments on the basis of peak lists from any combination of multidimensional through-bond or through-space NMR experiments for proteins. Both backbone and side-chain assignments can be determined. All experimental data are used simultaneously, thereby optimally exploiting the redundancy present in the input peak lists and circumventing potential pitfalls of assignment strategies in which results obtained in a given step remain fixed input data for subsequent steps. Instead of prescribing a specific assignment strategy, the FLYA resonance assignment algorithm requires only experimental peak lists and the primary structure of the protein, from which the peaks expected in a given spectrum can be generated by applying a set of rules, defined in a straightforward way by specifying through-bond or through-space magnetization transfer pathways. The algorithm determines the resonance assignment by finding an optimal mapping between the set of expected peaks, which are assigned by definition but have unknown positions, and the set of measured peaks in the input peak lists, which are initially unassigned but have known positions in the spectrum. Using peak lists obtained by purely automated peak picking from the experimental spectra of three proteins, FLYA correctly assigned 96-99% of the backbone resonances and 90-91% of all resonances that could be assigned manually. Systematic studies quantified the impact of various factors on the assignment accuracy, namely the extent of missing real peaks and the amount of additional artifact peaks in the input peak lists, as well as the accuracy of the peak positions. Comparing the resonance assignments from FLYA with those obtained from two other existing algorithms showed that, on identical experimental input data, these other algorithms yielded significantly (40-142%) more erroneous assignments than FLYA. The FLYA resonance assignment algorithm thus has the reliability and flexibility to replace most manual and semi-automatic assignment procedures for NMR studies of proteins.
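At its core, the mapping described above is a combinatorial assignment problem between expected peaks (predicted positions) and measured peaks (known positions). As a drastically simplified illustration of that problem only (FLYA itself optimizes over full multidimensional peak lists with far more sophisticated machinery; the one-dimensional shift values here are invented):

```python
from itertools import permutations

def best_assignment(expected, measured):
    """Brute-force minimum-cost mapping of expected peaks onto measured peaks.
    Returns (mapping, total_cost), where mapping[i] is the index of the
    measured peak assigned to expected peak i. Enumeration is only viable
    for tiny lists; real assignment algorithms use combinatorial
    optimization instead."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(measured)), len(expected)):
        cost = sum(abs(expected[i] - measured[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

# Two expected shifts (ppm) against three picked peaks, one of them an artifact.
print(best_assignment([8.1, 120.3], [120.5, 8.0, 55.2]))
```

Unmatched measured peaks model artifacts; missing real peaks would appear as expected peaks left with poor matches, which is exactly the degradation the systematic studies above quantify.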

  15. Mechatronic description of a laser autoguided vehicle for greenhouse operations.

    PubMed

    Sánchez-Hermosilla, Julián; González, Ramón; Rodríguez, Francisco; Donaire, Julián G

    2013-01-08

    This paper presents a novel approach for guiding mobile robots inside greenhouses, demonstrated by promising preliminary physical experiments. It is a comprehensive attempt to apply the successful principles of AGVs (auto-guided vehicles) inside greenhouses while avoiding the need to modify the crop layout or to bury metallic pipes in the greenhouse floor. The designed vehicle can operate different tools, e.g., a spray system for applying plant-protection products, a lifting platform to reach the top part of the plants for pruning and harvesting tasks, and a trailer to transport fruits, plants, and crop waste. For autonomous navigation it follows the idea of AGVs, but laser emitters are used to mark the desired route. The vehicle development is analyzed from a mechatronic standpoint (mechanics, electronics, and autonomous control).

  16. Assessment of Filtration Bleb and Endplate Positioning Using Magnetic Resonance Imaging in Eyes Implanted with Long-Tube Glaucoma Drainage Devices.

    PubMed

    Sano, Ichiya; Tanito, Masaki; Uchida, Koji; Katsube, Takashi; Kitagaki, Hajime; Ohira, Akihiro

    2015-01-01

    To evaluate ocular fluid filtration and endplate positioning in glaucomatous eyes with long-tube glaucoma drainage devices (GDDs) using magnetic resonance imaging (MRI) and the effects of various factors on postoperative intraocular pressure (IOP). This observational case series included 27 consecutive glaucomatous eyes (18 men, 7 women; mean age ± standard error, 63.0±2.0 years) that underwent GDD implantation (n = 8 Ahmed Glaucoma Valves [AGV] and n = 19 Baerveldt Glaucoma Implants [BGI]). Tubes were inserted into the pars plana in 23 eyes and the anterior chamber in 4 eyes. Six months postoperatively, high-resolution orbital images were obtained using 3-Tesla MRI with head-array coils, and the filtering bleb volume, bleb height, and distances between the anterior endplate edge and the corneal center or limbus, or between the endplate and the orbital wall, were measured. In MR images obtained by three-dimensional fast imaging employing steady-state acquisition (3D-FIESTA) sequences, the shunt endplate was identified as a low-intensity signal, and the filtering bleb was identified as high-intensity signals above and below the endplate in all eyes. The 6-month-postoperative IOP level was correlated negatively with bleb volume (r = -0.4510, P = 0.0182) and bleb height (r = -0.3954, P = 0.0412). The postoperative IOP was significantly (P = 0.0026) lower in BGI-implanted eyes (12.2±0.7 mmHg) than AGV-implanted eyes (16.7±1.2 mmHg); bleb volume was significantly (P = 0.0093) larger in BGI-implanted eyes (478.8±84.2 mm3) than AGV-implanted eyes (161.1±52.3 mm3). Other parameters did not differ. The presence of intraorbital/periocular accumulation of ocular fluid affects postoperative IOP levels in eyes implanted with long-tube GDDs. Larger filtering blebs after BGI than AGV implantations explain the lower postoperative IOP levels achieved with BGI than with AGV. The findings will contribute to a better understanding of the IOP-reducing mechanism of long-tube GDDs.

  17. Telematic Problems of Unmanned Vehicles Positioning at Container Terminals and Warehouses

    NASA Astrophysics Data System (ADS)

    Kwasniowski, Stanisław; Zajac, Mateusz; Zajac, Paweł

    This paper describes the issues of transshipment container terminal operations in the light of the development of this kind of transport. An increase in handling requires an expansion of stacking yards and automation of handling and transport processes. Development in this area first and foremost depends on modern handling technologies and automatic identification systems. AGV trucks play a key role in those systems. The role of universities is to promote innovative technologies. Paper [2] contains the status of intermodal terminal development in Poland, which was awarded the prize of the Minister of Infrastructure of Poland in the field of "organization and management." The paper contains a detailed description of the principles of positioning, control and propulsion of AGV vehicles. The content was developed to make it understandable to logisticians responsible for implementation in Poland.

  18. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.

    PubMed

    Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt

    2008-07-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.
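The combined use of a global, population-based search and local optimization heuristics can be sketched on a toy permutation-assignment objective; this is a generic memetic loop under illustrative assumptions (population size, operators, and the fixed-point objective are all made up), not the MATCH implementation:

```python
import random

def memetic_optimize(score, n, pop_size=12, gens=30, seed=1):
    """Toy memetic search over permutations of range(n): the global step
    is mutation + selection; the local step is pairwise-swap hill
    climbing applied to every individual (the 'memetic' part)."""
    rng = random.Random(seed)

    def local_search(ind):
        # Hill climbing: keep swapping pairs while the score improves.
        cur = score(ind)
        improved = True
        while improved:
            improved = False
            for i in range(n):
                for j in range(i + 1, n):
                    ind[i], ind[j] = ind[j], ind[i]
                    s = score(ind)
                    if s > cur:
                        cur, improved = s, True
                    else:
                        ind[i], ind[j] = ind[j], ind[i]  # undo the swap
        return ind

    pop = [local_search(rng.sample(range(n), n)) for _ in range(pop_size)]
    for _ in range(gens):
        child = list(rng.choice(pop))
        i, j = rng.sample(range(n), 2)
        child[i], child[j] = child[j], child[i]          # global mutation
        child = local_search(child)                      # local refinement
        pop.sort(key=score)                              # worst first
        if score(child) > score(pop[0]):
            pop[0] = child                               # replace the worst
    return max(pop, key=score)

# Toy objective: an assignment scores one point per fixed point i -> i.
best = memetic_optimize(lambda p: sum(p[i] == i for i in range(6)), 6)
print(best)  # -> [0, 1, 2, 3, 4, 5]
```

For this objective every non-identity permutation admits an improving swap, so the local search alone reaches the optimum; on realistic assignment scores the global population step is what escapes local optima.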

  19. Protein Side-Chain Resonance Assignment and NOE Assignment Using RDC-Defined Backbones without TOCSY Data

    PubMed Central

    Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall

    2011-01-01

    One bottleneck in NMR structure determination lies in the laborious and time-consuming process of side-chain resonance and NOE assignments. Compared to the well-studied backbone resonance assignment problem, automated side-chain resonance and NOE assignments are relatively less explored. Most NOE assignment algorithms require nearly complete side-chain resonance assignments from a series of through-bond experiments such as HCCH-TOCSY or HCCCONH. Unfortunately, these TOCSY experiments perform poorly on large proteins. To overcome this deficiency, we present a novel algorithm, called NASCA (NOE Assignment and Side-Chain Assignment), to automate both side-chain resonance and NOE assignments and to perform high-resolution protein structure determination in the absence of any explicit through-bond experiment to facilitate side-chain resonance assignment, such as HCCH-TOCSY. After casting the assignment problem into a Markov Random Field (MRF), NASCA extends and applies combinatorial protein design algorithms to compute optimal assignments that best interpret the NMR data. The MRF captures the contact map information of the protein derived from NOESY spectra, exploits the backbone structural information determined by RDCs, and considers all possible side-chain rotamers. The complexity of the combinatorial search is reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is employed to find a set of optimal side-chain resonance assignments that best fit the NMR data. These side-chain resonance assignments are then used to resolve the NOE assignment ambiguity and compute high-resolution protein structures. Tests on five proteins show that NASCA assigns resonances for more than 90% of side-chain protons, and achieves about 80% correct assignments. 
The final structures computed using the NOE distance restraints assigned by NASCA have backbone RMSDs of 0.8–1.5 Å from the reference structures determined by traditional NMR approaches. PMID:21706248
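The dead-end elimination step mentioned above can be sketched with the Goldstein criterion in an energy-minimization convention: candidate r at position i is pruned when some alternative t satisfies E(i_r) - E(i_t) + Σ_j min_s [E(i_r, j_s) - E(i_t, j_s)] > 0, i.e. t beats r no matter what the other positions choose. The score tables below are hypothetical; NASCA applies DEE to side-chain resonance assignments within its MRF, which this sketch does not model:

```python
def dee_prune(self_E, pair_E, candidates):
    """Goldstein dead-end elimination over a candidate assignment space.
    self_E[i][r]       : singleton energy of candidate r at position i
    pair_E[i][j][r][s] : pairwise energy between (i, r) and (j, s)
    Returns, per position, the set of provably suboptimal candidates."""
    pruned = {i: set() for i in candidates}
    for i in candidates:
        for r in candidates[i]:
            for t in candidates[i]:
                if t == r:
                    continue
                gap = self_E[i][r] - self_E[i][t]
                for j in candidates:
                    if j == i:
                        continue
                    gap += min(pair_E[i][j][r][s] - pair_E[i][j][t][s]
                               for s in candidates[j])
                if gap > 0:        # t dominates r in every context
                    pruned[i].add(r)
                    break
    return pruned

# Two positions, two candidates each; candidate 0 at position 0 is bad.
zero = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0, 1: 0.0}}
self_E = {0: {0: 5.0, 1: 0.0}, 1: {0: 0.0, 1: 0.0}}
pair_E = {0: {1: zero}, 1: {0: zero}}
cands = {0: [0, 1], 1: [0, 1]}
print(dee_prune(self_E, pair_E, cands))  # -> {0: {0}, 1: set()}
```

In practice DEE is iterated until no further candidate can be removed, and the surviving combinatorial space is handed to an exact search such as A*.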

  20. New optimization model for routing and spectrum assignment with nodes insecurity

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-04-01

    By adopting the orthogonal frequency division multiplexing technology, elastic optical networks can provide flexible and variable bandwidth allocation to each connection request and achieve higher spectrum utilization. The routing and spectrum assignment problem in elastic optical networks is a well-known NP-hard problem. In addition, information security has received worldwide attention. We combine these two problems to investigate the routing and spectrum assignment problem with guaranteed security in elastic optical networks, and establish a new optimization model that minimizes the maximum index of the used frequency slots, which is then used to determine an optimal routing and spectrum assignment scheme. To solve the model effectively, a hybrid genetic algorithm framework integrating a heuristic algorithm into a genetic algorithm is proposed. The heuristic algorithm first sorts the connection requests, and the genetic algorithm then searches for an optimal routing and spectrum assignment scheme. In the genetic algorithm, tailor-made crossover, mutation and local search operators are designed. Moreover, simulation experiments conducted with three heuristic strategies indicate the effectiveness of the proposed model and algorithm framework.
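The spectrum contiguity and continuity constraints underlying such a model can be illustrated with a simple first-fit heuristic over a link-by-slot grid: each request receives the lowest block of contiguous slots that is free on every link of its route. The request ordering (here by descending demand) stands in for the ordering produced by the paper's heuristic, and the network sizes are invented:

```python
def first_fit_rsa(requests, n_links, n_slots):
    """Assign each request (route_links, demand_in_slots) the lowest
    contiguous slot block free on *every* link of its route (spectrum
    continuity + contiguity). Returns start slots; None = blocked."""
    used = [[False] * n_slots for _ in range(n_links)]
    starts = []
    for route, demand in requests:
        placed = None
        for start in range(n_slots - demand + 1):
            if all(not used[l][start + k] for l in route for k in range(demand)):
                placed = start
                for l in route:              # reserve the block on the route
                    for k in range(demand):
                        used[l][start + k] = True
                break
        starts.append(placed)
    return starts

# Three links, eight slots; requests pre-sorted by descending demand.
reqs = [([0, 1], 3), ([1, 2], 2), ([0], 2)]
print(first_fit_rsa(reqs, n_links=3, n_slots=8))  # -> [0, 3, 3]
```

The objective in the paper, minimizing the maximum used slot index, is then just the largest occupied index over all links; a genetic algorithm would search over request orderings (and routes) fed to a decoder like this one.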

  1. The Results of the Use of Ahmed Valve in Refractory Glaucoma Surgery

    PubMed Central

    Bikbov, Mukharram Mukhtaramovich

    2015-01-01

    ABSTRACT The treatment of refractory glaucoma (RG) is challenging. The commonly adopted strategy in RG treatment is glaucoma drainage device (GDD) implantation, which despite its radical nature may not always provide the desired intraocular pressure (IOP) levels over the long term. This review is based on the scientific literature on Ahmed glaucoma valve (AGV) implantation for refractory glaucoma. The technique of AGV implantation is described and data on the performance of both types, FP7 and FP8, are presented. The outcomes with adjunct antimetabolite and anti-VEGF drugs are also highlighted. An insight is given into experimental and histological examinations of filtering bleb encapsulation. The article also describes various complications and measures to prevent them. How to cite this article: Bikbov MM, Khusnitdinov II. The Results of the Use of Ahmed Valve in Refractory Glaucoma Surgery. J Curr Glaucoma Pract 2015;9(3):86-91. PMID:26997843

  2. PhosSA: Fast and accurate phosphorylation site assignment algorithm for mass spectrometry data.

    PubMed

    Saeed, Fahad; Pisitkun, Trairak; Hoffert, Jason D; Rashidian, Sara; Wang, Guanghui; Gucek, Marjan; Knepper, Mark A

    2013-11-07

    Phosphorylation site assignment of high-throughput tandem mass spectrometry (LC-MS/MS) data is one of the most common and critical aspects of phosphoproteomics. Correctly assigning phosphorylated residues helps us understand their biological significance. The design of common search algorithms (such as Sequest, Mascot, etc.) does not incorporate site assignment; therefore additional algorithms are essential to assign phosphorylation sites for mass spectrometry data. The main contribution of this study is the design and implementation of a linear-time and linear-space dynamic programming strategy for phosphorylation site assignment, referred to as PhosSA. The proposed algorithm uses the summation of peak intensities associated with theoretical spectra as an objective function. Quality control of the assigned sites is achieved using a post-processing redundancy criterion that reflects the signal-to-noise properties of the fragmented spectra. The quality of the algorithm was assessed using data sets generated experimentally from synthetic peptides for which the phosphorylation sites were known. We report that PhosSA achieved a high degree of accuracy and sensitivity on all of the experimentally generated mass spectrometry data sets. The implemented algorithm is shown to be extremely fast and scalable with an increasing number of spectra (we report up to 0.5 million spectra/hour on a moderate workstation). The algorithm is designed to accept results from both the Sequest and Mascot search engines. An executable is freely available at http://helixweb.nih.gov/ESBL/PhosSA/ for academic research purposes.
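PhosSA's objective, a summation of matched peak intensities over theoretical spectra, can be illustrated with a brute-force scorer over candidate site placements; the paper's actual method is a linear-time dynamic program over the peptide sequence, and the masses and intensities below are invented:

```python
def site_score(theoretical_peaks, spectrum, tol=0.5):
    """Sum the intensities of spectrum peaks that match the theoretical
    fragment masses of one candidate phosphosite placement (a simplified
    stand-in for PhosSA's peak-intensity-summation objective)."""
    total = 0.0
    for mass in theoretical_peaks:
        total += sum(inten for mz, inten in spectrum if abs(mz - mass) <= tol)
    return total

def assign_site(candidates, spectrum):
    """candidates: {site_name: [theoretical fragment masses]}.
    Pick the placement whose theoretical spectrum captures the most
    observed intensity."""
    return max(candidates, key=lambda s: site_score(candidates[s], spectrum))

spectrum = [(300.1, 40.0), (467.2, 90.0), (554.3, 15.0)]   # (m/z, intensity)
candidates = {"S3": [300.1, 467.2],   # hypothetical fragment masses if
              "T5": [300.1, 554.3]}   # the phosphate sits on S3 vs. T5
print(assign_site(candidates, spectrum))  # -> S3 (matched 130.0 vs 55.0)
```

The intuition is the same as in the paper: the correct site placement explains more of the observed fragment-ion intensity than the alternatives.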

  3. Outcomes of using a sutureless bovine pericardial patch graft for Ahmed glaucoma valve implantation.

    PubMed

    Quaranta, Luciano; Riva, Ivano; Floriani, Irene C

    2013-01-01

    To evaluate the long-term outcomes of a surgical technique using a sutureless bovine pericardial patch graft for the implantation of an Ahmed glaucoma valve (AGV). 
 This was a pilot study on patients with primary open-angle glaucoma refractory to repeated surgical filtering procedures. All patients underwent AGV implant technique using a sutureless bovine pericardial patch graft. The pericardial membrane was cut using an ordinary corneal trephine with a diameter of 9.0 or 10.0 mm. The anterior part of the tube was covered with the graft and kept in place with fibrin glue. Subsequently, the cap was stitched all around the tube and the dissected conjunctiva was laid over it. Intraocular pressure (IOP) and complications were evaluated 1 week and 1, 3, 6, 12, and 24 months after surgery.
 The procedure was used to treat 20 eyes of 20 consecutive patients (12 men and 8 women: mean age [SD] 64.8 [7.8] years). Mean IOP was 28.1 mm Hg (SD 4.9) at baseline and decreased to 14.9 mm Hg (SD 1.5) 24 months after surgery (p<0.001). The overall mean number of topical medications was 3.1 (SD 0.5) at baseline and decreased to 1.4 (SD 0.8) after 24 months (p<0.001). During follow-up, there was no conjunctival erosion, thinning of pericardial patch graft over the tube, or tube exposure; no signs of endophthalmitis were recorded.
 The results suggest that the sutureless technique using a bovine pericardial graft patch is a safe and rapid procedure for AGV implantation.

  4. Sutureless human sclera donor patch graft for Ahmed glaucoma valve.

    PubMed

    Zeppa, Lucio; Romano, Mario R; Capasso, Luigi; Tortori, Achille; Majorana, Mara A; Costagliola, Ciro

    2010-01-01

    To report the safety and effectiveness of a sutureless human sclera donor patch graft covering the subconjunctival portion of a glaucoma drainage implant tube to prevent its erosion through the overlying conjunctiva. This was a prospective pilot study. Fifteen eyes of 15 consecutive patients not responsive to medical treatment or to non-implant glaucoma surgery underwent Ahmed glaucoma valve (AGV) implant surgery with a sutureless human sclera donor patch graft. The surgical procedure included an AGV implant placed 8 mm behind the corneal limbus and fixed to the sclera with two 9-0 black nylon sutures. The tube was passed through the scleral tunnel, parallel to the corneal limbus, and shortened to the desired length. The anterior part of the tube was covered with the human donor scleral graft and kept in place with fibrin glue (Tissucol) under the conjunctiva. Examinations were scheduled at baseline and then at 1 week and 1, 3, 6, and 12 months after surgery. At the 12-month follow-up, the best-corrected visual acuity did not significantly improve from the baseline of 0.78+/-1.2 logMAR, whereas mean intraocular pressure significantly decreased from the preoperative value of 29.8 (SD 8.4) mmHg. In all cases, the scleral patch was found in place at each check during the follow-up period. No conjunctival erosion over the AGV tube nor any sign of endophthalmitis was recorded at any time during the follow-up period. AGV implant surgery with a sutureless human sclera donor patch graft represents an effective and relatively safe surgical procedure for complicated glaucomas, avoiding conjunctival erosion over the AGV tube.

  5. Evaluation of Ahmed glaucoma valve implantation through a needle-generated scleral tunnel in Mexican children with glaucoma.

    PubMed

    Albis-Donado, Oscar; Gil-Carrasco, Félix; Romero-Quijada, Rafael; Thomas, Ravi

    2010-01-01

    To evaluate the results and extrusion rates of Ahmed glaucoma valve (AGV) implantation through a needle-generated scleral tunnel, without a tube-covering patch, in children. A retrospective review was done of the charts of 106 Mexican children (128 AGVs) operated on between 1994 and 2002 with the needle-track technique at our institution, with at least six months of follow-up. The main outcome measures were intraocular pressure (IOP) control, tube extrusion or exposure, and other complications. Kaplan-Meier analysis demonstrated a 96.9% survival rate at six months, 82.4% at one year, 78.7% at two years, 70% at three years and 41.6% at four years. Total success at the last follow-up (IOP between 6 and 21 mm Hg without medications) was achieved in 30 eyes (23.5%), 58 eyes (45.3%) had qualified success (only topical hypotensive drugs) and 40 eyes (31.3%) were failures. The mean pre- and post-operative IOP at the last follow-up was 28.4 mmHg (SD 9.3) and 14.5 mmHg (SD 6.3), respectively. No tube extrusions or exposures were observed. Tube-related complications included five retractions, a lens touch and a transitory endothelial touch. The risk of failure increased if the eye had any complication or previous glaucoma surgeries. Medium-term IOP control in Mexican children with glaucoma can be achieved with AGV implantation using a needle-generated tunnel, without constructing a scleral flap or using a patch to cover the tube. There were no tube extrusions or exposures with this technique.

  6. Serial intracameral visualization of the Ahmed glaucoma valve tube by anterior segment optical coherence tomography.

    PubMed

    Lopilly Park, H-Y; Jung, K I; Park, C K

    2012-09-01

    To investigate serial changes of the Ahmed glaucoma valve (AGV) implant tube in the anterior chamber by anterior segment optical coherence tomography (AS-OCT). Patients who had received AGV implantation without complications (n=48) were included in this study. Each patient received follow-up examinations including AS-OCT at days 1 and 2, week 1, and months 1, 3, 6, and 12. Tube parameters were defined to measure tube length and position. The intracameral length of the tube was measured from the tip of the bevel-edged tube to the sclerolimbal junction. The distance between the extremity of the tube and the anterior iris surface (T-I distance), and the angle between the tube and the posterior endothelial surface of the cornea (T-C angle), were defined. Factors related to the tube parameters were analysed by multiple regression analysis. The mean change in tube length was -0.20 ± 0.17 mm, indicating that the tube length shortened from the initially inserted length. The mean T-I distance change was 0.11 ± 0.07 mm and the mean T-C angle change was -6.7 ± 5.6°. Uveitic glaucoma and glaucoma following penetrating keratoplasty showed the greatest changes in tube parameters. By multiple regression analysis, diagnoses of uveitic glaucoma (P=0.049) and glaucoma following penetrating keratoplasty (P=0.008) were related to the change in intracameral tube length. These results suggest that the length and position of the AGV tube change after surgery. The changes were most prominent in uveitic glaucoma and glaucoma following penetrating keratoplasty.

  7. An Algorithm for Protein Helix Assignment Using Helix Geometry

    PubMed Central

    Cao, Chen; Xu, Shutan; Wang, Lincong

    2015-01-01

    Helices are one of the most common and were among the earliest recognized secondary structure elements in proteins. The assignment of helices in a protein underlies the analysis of its structure and function. Though the mathematical expression for a helical curve is simple, no previous assignment programs have used a genuine helical curve as a model for helix assignment. In this paper we present a two-step assignment algorithm. The first step searches for a series of bona fide helical curves, each of which best fits the coordinates of four successive backbone Cα atoms. The second step uses the best-fit helical curves as input to make the helix assignment. Application to the protein structures in the PDB (Protein Data Bank) shows that the algorithm is able to assign accurately not only regular α-helices but also 3₁₀ and π helices, as well as their left-handed versions. One salient feature of the algorithm is that the assigned helices are structurally more uniform than those produced by previous programs. The structural uniformity should be useful for protein structure classification and prediction, while the accurate assignment of a helix to a particular type underlies the structure-function relationship in proteins. PMID:26132394
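The geometric idea of fitting a helical curve to four successive Cα atoms can be sketched by estimating the local axis, rise, and twist directly from the coordinates; this is a toy estimate verified on an ideal α-helix, not the paper's least-squares curve fit:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def norm(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def helix_params(p):
    """Rise (A/residue) and twist (deg/residue) from four successive CA
    positions. For an ideal alpha-helix: rise ~1.5 A, twist ~100 deg."""
    v = [sub(p[i + 1], p[i]) for i in range(3)]      # successive chord vectors
    u1, u2 = sub(v[0], v[1]), sub(v[1], v[2])        # 'turn' vectors toward axis
    axis = norm(cross(u1, u2))                       # local helix axis direction
    rise = abs(dot(v[1], axis))                      # translation along the axis
    w = [sub(vi, tuple(dot(vi, axis) * a for a in axis)) for vi in v[:2]]
    c = dot(w[0], w[1]) / math.sqrt(dot(w[0], w[0]) * dot(w[1], w[1]))
    twist = math.degrees(math.acos(max(-1.0, min(1.0, c))))
    return rise, twist

# Four CA atoms of an ideal alpha-helix (radius 2.3 A, 100 deg and 1.5 A per residue).
pts = [(2.3 * math.cos(math.radians(100 * i)),
        2.3 * math.sin(math.radians(100 * i)), 1.5 * i) for i in range(4)]
rise, twist = helix_params(pts)
print(round(rise, 2), round(twist, 1))  # -> 1.5 100.0
```

A classifier in the spirit of the paper would then label the quadruple α (rise ≈ 1.5 Å, twist ≈ 100°), 3₁₀ (≈ 2.0 Å, ≈ 120°), or π (≈ 1.1 Å, ≈ 87°), with the handedness taken from the sign of the axis.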

  8. Method for concurrent execution of primitive operations by dynamically assigning operations based upon computational marked graph and availability of data

    NASA Technical Reports Server (NTRS)

    Mielke, Roland V. (Inventor); Stoughton, John W. (Inventor)

    1990-01-01

    Computationally complex primitive operations of an algorithm are executed concurrently in a plurality of functional units under the control of an assignment manager. The algorithm is preferably defined as a computational marked graph containing data status edges (paths) corresponding to each of the data flow edges. The assignment manager assigns primitive operations to the functional units and monitors completion of the primitive operations to determine data availability using the computational marked graph of the algorithm. All data accessing of the primitive operations is performed by the functional units independently of the assignment manager.
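The assignment manager's token-based dispatch can be sketched as a sequential simulation of a marked graph: an operation is assigned to a free functional unit as soon as every input edge holds a token, and firing it moves tokens to its output edges. This is a schematic under invented names, not the patented implementation:

```python
def run_marked_graph(ops, edges, initial_tokens, n_units):
    """Toy assignment manager: each op consumes one token from every
    input edge and produces one on every output edge; up to n_units
    ready ops are dispatched per step. Returns the dispatch schedule."""
    tokens = dict(initial_tokens)                 # edge name -> token count
    done, order = set(), []
    while len(done) < len(ops):
        ready = [o for o in ops if o not in done
                 and all(tokens.get(e, 0) > 0 for e in edges[o]["in"])]
        if not ready:
            raise RuntimeError("deadlock: no operation is ready")
        batch = ready[:n_units]                   # units run these concurrently
        for o in batch:                           # consume input tokens
            for e in edges[o]["in"]:
                tokens[e] -= 1
        for o in batch:                           # produce output tokens
            for e in edges[o]["out"]:
                tokens[e] = tokens.get(e, 0) + 1
            done.add(o)
        order.append(sorted(batch))
    return order

# Diamond-shaped computation: A feeds B and C, which both feed D.
edges = {"A": {"in": ["src"], "out": ["ab", "ac"]},
         "B": {"in": ["ab"], "out": ["bd"]},
         "C": {"in": ["ac"], "out": ["cd"]},
         "D": {"in": ["bd", "cd"], "out": []}}
print(run_marked_graph(["A", "B", "C", "D"], edges, {"src": 1}, n_units=2))
# -> [['A'], ['B', 'C'], ['D']]
```

The schedule shows the key property of the scheme: B and C run concurrently because their input tokens become available together, while D waits for both.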

  9. Ontological Problem-Solving Framework for Assigning Sensor Systems and Algorithms to High-Level Missions

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081
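The automated inference of assigning systems to mission subtasks can be reduced, for illustration, to matching each subtask's required capabilities against the capabilities each sensor/algorithm bundle provides; simple set containment stands in for the paper's ontological reasoning, and all system and task names below are hypothetical:

```python
def assign_systems(subtasks, systems):
    """For each subtask (name -> required capability set), list the
    systems (name -> provided capability set) that can satisfy it."""
    return {task: sorted(name for name, caps in systems.items()
                         if needs <= caps)       # set containment = 'can do'
            for task, needs in subtasks.items()}

systems = {                       # hypothetical sensor/algorithm bundles
    "ir_cam+tracker": {"imaging", "night", "tracking"},
    "radar_unit": {"ranging", "night"},
}
subtasks = {"night_surveillance": {"imaging", "night"},
            "range_to_target": {"ranging"}}
print(assign_systems(subtasks, systems))
# -> {'night_surveillance': ['ir_cam+tracker'], 'range_to_target': ['radar_unit']}
```

An ontological framework goes further than this sketch by inferring capability subsumption (e.g. that a specific sensor class implies a general capability) rather than requiring exact capability labels.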

  10. A New Secondary Structure Assignment Algorithm Using Cα Backbone Fragments

    PubMed Central

    Cao, Chen; Wang, Guishen; Liu, An; Xu, Shutan; Wang, Lincong; Zou, Shuxue

    2016-01-01

    The assignment of secondary structure elements in proteins is a key step in the analysis of their structures and functions. We have developed an algorithm, SACF (secondary structure assignment based on Cα fragments), for secondary structure element (SSE) assignment based on the alignment of Cα backbone fragments with central poses derived by clustering known SSE fragments. The assignment algorithm consists of three steps: First, the outlier fragments on known SSEs are detected. Next, the remaining fragments are clustered to obtain the central fragments for each cluster. Finally, the central fragments are used as a template to make assignments. Following a large-scale comparison of 11 secondary structure assignment methods, SACF, KAKSI and PROSS are found to have similar agreement with DSSP, while PCASSO agrees with DSSP best. SACF and PCASSO show a preference for reducing residues in N and C cap regions, whereas KAKSI, P-SEA and SEGNO tend to add residues to the terminals when the DSSP assignment is taken as the standard. Moreover, our algorithm is able to assign subtle helices (3₁₀-helix, π-helix and left-handed helix) and make uniform assignments, as well as to detect rare SSEs in β-sheets or long helices as outlier fragments from other programs. The structural uniformity should be useful for protein structure classification and prediction, while outlier fragments underlie the structure-function relationship. PMID:26978354
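The final assignment step, comparing each Cα fragment against the cluster central fragments, can be sketched as nearest-centroid classification by RMSD. The real method superimposes fragments before comparison; superposition is omitted here for brevity, and the 1-D "fragments" below are purely illustrative stand-ins for 3-D Cα coordinates:

```python
import math

def rmsd(a, b):
    """RMSD between two equal-length, pre-aligned coordinate lists
    (optimal superposition, which SACF-style methods perform, is
    deliberately skipped in this sketch)."""
    return math.sqrt(sum((x - y) ** 2 for p, q in zip(a, b)
                         for x, y in zip(p, q)) / len(a))

def assign_sse(fragment, central_fragments, cutoff=1.0):
    """Label a CA fragment with the SSE type of the nearest central
    fragment, or 'coil' if none lies within the RMSD cutoff."""
    label, best = "coil", cutoff
    for sse, center in central_fragments.items():
        d = rmsd(fragment, center)
        if d < best:
            label, best = sse, d
    return label

centers = {  # toy 1-D 'central fragments'; real ones are 3-D CA coordinates
    "helix": [(0.0,), (1.5,), (3.0,), (4.5,)],     # ~1.5 per-residue step
    "strand": [(0.0,), (3.4,), (6.8,), (10.2,)],   # ~3.4 per-residue step
}
print(assign_sse([(0.1,), (1.4,), (3.1,), (4.4,)], centers))  # -> helix
```

Fragments whose distance to every center exceeds the cutoff fall through to "coil", which mirrors how outlier fragments are separated from the regular SSE classes.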

  11. Immunological Effect of aGV Rabies Vaccine Administered Using the Essen and Zagreb Regimens: A Double-Blind, Randomized Clinical Trial.

    PubMed

    Miao, Li; Shi, Liwei; Yang, Yi; Yan, Kunming; Sun, Hongliang; Mo, Zhaojun; Li, Li

    2018-04-01

    This study evaluated the immunological effect of an aGV rabies virus strain using the Essen and Zagreb immunization programs. A total of 1,944 subjects were enrolled and divided into three groups: the Essen test group, Essen control group, and Zagreb test group. Neutralizing antibody levels and antibody seroconversion rates were determined at 7 and 14 days after the initial inoculations and then 14 days after the final inoculation in all of the subjects. The seroconversion rates for the Essen test group, Essen control group, and Zagreb test group, which were assessed 7 days after the first dosing in a susceptible population, were 35.74%, 26.92%, and 45.49%, respectively, and at 14 days, the seroconversion rates in this population were 100%, 100%, and 99.63%, respectively. At 14 days after the final dosing, the seroconversion rates were 100% in all three of the groups. The neutralizing serum antibody levels of the Essen test group, Essen control group, and Zagreb test group at 7 days after the first dosing in the susceptible population were 0.37, 0.26, and 0.56 IU/mL, respectively, and at 14 days after the initial dosing, these levels were 16.71, 13.85, and 16.80 IU/mL. At 14 days after the final dosing, the neutralizing antibody levels were 22.9, 16.3, and 18.62 IU/mL, respectively. The results of this study suggested that the aGV rabies vaccine using the Essen program resulted in a good serum immune response, and the seroconversion rates and the neutralizing antibody levels generated with the Zagreb regimen were higher than those with the Essen regimen when measured 7 days after the first dose.

  12. Evaluation of Ahmed glaucoma valve implantation through a needle-generated scleral tunnel in Mexican children with glaucoma

    PubMed Central

    Albis-Donado, Oscar; Gil-Carrasco, Félix; Romero-Quijada, Rafael; Thomas, Ravi

    2010-01-01

    Purpose: To evaluate the results and extrusion rates of Ahmed glaucoma valve (AGV) implantation through a needle-generated scleral tunnel, without a tube-covering patch, in children. Materials and Methods: A retrospective review was done of the charts of 106 Mexican children (128 AGVs) operated on between 1994 and 2002 with the needle-track technique at our institution, with at least six months of follow-up. The main outcome measures were intraocular pressure (IOP) control, tube extrusion or exposure, and other complications. Results: Kaplan-Meier analysis demonstrated a 96.9% survival rate at six months, 82.4% at one year, 78.7% at two years, 70% at three years and 41.6% at four years. Total success at the last follow-up (IOP between 6 and 21 mm Hg without medications) was achieved in 30 eyes (23.5%), 58 eyes (45.3%) had qualified success (only topical hypotensive drugs) and 40 eyes (31.3%) were failures. The mean pre- and post-operative IOP at the last follow-up was 28.4 mmHg (SD 9.3) and 14.5 mmHg (SD 6.3), respectively. No tube extrusions or exposures were observed. Tube-related complications included five retractions, a lens touch and a transitory endothelial touch. The risk of failure increased if the eye had any complication or previous glaucoma surgeries. Conclusion: Medium-term IOP control in Mexican children with glaucoma can be achieved with AGV implantation using a needle-generated tunnel, without constructing a scleral flap or using a patch to cover the tube. There were no tube extrusions or exposures with this technique. PMID:20689189

  13. Assessment of conditions affecting surgical success of Ahmed glaucoma valve implants in glaucoma secondary to different uveitis etiologies in adults.

    PubMed

    Sungur, G; Yakin, M; Eksioglu, U; Satana, B; Ornek, F

    2017-10-01

    Purpose: Little is known about the long-term efficacy and safety of the Ahmed glaucoma valve (AGV) implant and about the conditions affecting surgical success in uveitic glaucoma (UG). Patients and methods: The charts of adult patients with UG who underwent AGV implantation from 2006 to 2015 were reviewed retrospectively. Results: Data of 46 eyes of 39 patients were evaluated. Mean follow-up was 51.93±23.08 months. Mean preoperative IOP was 37.05±9.62 mm Hg and the mean number of preoperative topical anti-glaucomatous medications was 2.98±0.27. One eye (2%) was defined as a failure because of implant extraction surgery. In the rest of the eyes, intraocular pressure (IOP) was under control with or without anti-glaucomatous medications during follow-up. The cumulative probability of complete success (IOP control without medications) was 78% at 6 months, 76% at 1 year, 71% at 2 years, 66% at 3 years, and 63% at 4 years (95% confidence interval, 61.24-87.81). The cumulative probability of eyes without complication was 64% at 6 months, 48% at 12 months, 44% at 24 months, 41% at 36 months, and 38% at 48 months (95% confidence interval, 34.64-62.85). Complete success was lower in eyes with previous ocular surgery than in eyes without (P=0.061), and it was lower in eyes with active inflammation at the time of surgery than in eyes without (P=0.011). Conclusion: AGV implantation is an effective and safe alternative method in the management of UG, especially when it is performed as a primary surgical option and when no inflammation is present preoperatively.

  14. Ahmed glaucoma valve implantation with tube insertion through the ciliary sulcus in pseudophakic/aphakic eyes.

    PubMed

    Eslami, Yadolla; Mohammadi, Massood; Fakhraie, Ghasem; Zarei, Reza; Moghimi, Sasan

    2014-02-01

    To report the efficacy and safety of Ahmed glaucoma valve (AGV) insertion into the ciliary sulcus in pseudophakic/aphakic patients. A chart review was done on patients with uncontrolled glaucoma, who underwent AGV implantation with tube inserted into the ciliary sulcus. Baseline intraocular pressure (IOP) and number of medications were compared with that of postoperative follow-up visits. Surgical success was defined as last IOP <21 mm Hg and 20% reduction in IOP, without further surgery for complications or glaucoma control, and without loss of light perception. Postoperative complications were recorded. Twenty-three eyes of 23 patients were recruited with the mean follow-up of 9 months (range, 3 to 24 mo). The mean (SD) age of patients was 49.9 (16.9) years (range, 22 to 80 years). The mean (SD) IOP (mm Hg) was reduced from 37.9 (12.4) before surgery to 16.2 (3.6) at the last follow-up visit (P<0.001). The mean (SD) number of medications was reduced from 3.3 (0.9) preoperatively to 1 (1.1) at the last follow-up (P<0.001). Success rate was 18/23 (78.6%). Complications included endophthalmitis in 1 eye, tube exposure in 1 diabetic patient, and vitreous tube occlusion in 1 eye. No case of corneal decompensation or graft failure was seen during follow-up. Ciliary sulcus placement of the tube of AGV effectively reduces IOP and medication use in short term. It has the potential to lower corneal complications of anterior chamber tube insertion and avoids the need for pars plana vitrectomy and tube insertion in patients at higher risk of corneal decompensation.

  15. Clinical outcomes of Ahmed glaucoma valve in anterior chamber versus ciliary sulcus.

    PubMed

    Bayer, A; Önol, M

    2017-04-01

    Purpose: To evaluate the outcomes of Ahmed glaucoma valve (AGV) tube insertion through the anterior chamber angle (ACA) or through the ciliary sulcus (CS). Patients and methods: In this case-control study, we retrospectively reviewed the charts of consecutive glaucoma patients who had undergone AGV implantation either through the ACA or the CS between March 2009 and December 2014. The main outcome measures were intraocular pressure (IOP), number of glaucoma medications prescribed, best corrected visual acuity (BCVA), glaucoma type, success rate, complications, and survival ratios. Statistical analysis was carried out using SPSS. Results: There were 68 eyes in the ACA group and 35 eyes in the CS group. There were no significant differences between the groups for age, sex, laterality, IOP, preoperative glaucoma medication number, BCVA or glaucoma type (P>0.05). The postoperative follow-up period was 27.2±16.5 months and 30.2±17.7 months for the ACA and the CS groups (P=0.28); IOP values were significantly reduced at the last visit to 16.4±7.2 mm Hg and 14.4±6.8 mm Hg. The difference in last-visit IOP between the groups was not significant (P=0.06), but the IOP reduction ratio was higher in the CS group (P=0.03). There was no significant difference in the number of postoperative medications (P=0.18). Postoperative complications were similar, but the incidence of flat anterior chamber was higher in the ACA group (P=0.05). Conclusions: The use of an AGV can control IOP in the majority of cases whether placed in the ACA or the CS. The IOP reduction ratio seemed to be higher in the CS group.

  16. Long-term Outcomes of Ahmed Glaucoma Valve Implantation in Refractory Glaucoma at Farabi Eye Hospital, Tehran, Iran

    PubMed Central

    Zarei, Reza; Amini, Heidar; Daneshvar, Ramin; Nabi, Fahimeh Naderi; Moghimi, Sasan; Fakhraee, Ghasem; Eslami, Yadollah; Mohammadi, Masoud; Amini, Nima

    2016-01-01

    Purpose: To describe long-term outcomes and complications of Ahmed glaucoma valve (AGV) implantation in subjects with refractory glaucoma at Farabi Eye Hospital, Tehran, Iran. Materials and Methods: This retrospective cohort study evaluated patient records of all subjects with refractory glaucoma who had undergone AGV implantation up to January 2013. The main outcome measure was the surgical success rate. Complete success was defined as intraocular pressure (IOP) <22 mmHg, without anti-glaucoma medications or additional surgery. Qualified success was IOP <22 mmHg regardless of number of anti-glaucoma medications. In all cases, loss of vision (no light perception) was considered an independent indicator of failure. Data were also collected on intraoperative and postoperative complications. Results: Twenty-eight eyes were included in the study. With a mean follow-up of 48.2 ± 31.7 months (median: 40.50 months; range: 3–124 months), the IOP decreased from a mean preoperative value of 30.8 ± 5.6 mmHg to 20.0 ± 6.4 mmHg at last visit. The number of medications decreased from 3.7 ± 0.4 preoperatively to 2.5 ± 1.1 postoperatively. Cumulative qualified success was achieved in 69% of eyes. Mean time to failure according to qualified success criteria was 92.3 ± 9.4 months. Postoperative complications were recorded in 16 (57.1%) eyes. The most common complication was focal endothelial corneal decompensation at the site of tube-cornea touch. Conclusion: AGV implantation with adjunctive topical anti-glaucoma drops controlled IOP in approximately 70% of eyes with refractory glaucoma with a median of 40.5 months of follow-up. However, complication rates were higher. PMID:26957848

  17. Clinical outcomes of Ahmed glaucoma valve in anterior chamber versus ciliary sulcus

    PubMed Central

    Bayer, A; Önol, M

    2017-01-01

    Purpose To evaluate the outcomes of Ahmed glaucoma valve (AGV) tube insertion through the anterior chamber angle (ACA) or through the ciliary sulcus (CS). Patients and methods In this case-control study, we retrospectively reviewed the charts of consecutive glaucoma patients who had undergone AGV implantation either through the ACA or the CS between March 2009 and December 2014. The main outcome measures were intraocular pressure (IOP), number of glaucoma medications prescribed, best corrected visual acuity (BCVA), glaucoma type, success rate, complications, and survival ratios. Statistical analysis was carried out using SPSS. Results There were 68 eyes in the ACA group and 35 eyes in the CS group. There were no significant differences between the groups for age, sex, laterality, IOP, preoperative glaucoma medication number, BCVA or glaucoma type (P>0.05). The postoperative follow-up period was 27.2±16.5 months and 30.2±17.7 months for the ACA and the CS groups (P=0.28); IOP values were significantly reduced at the last visit to 16.4±7.2 mm Hg and 14.4±6.8 mm Hg. The difference in the last-visit IOP between the groups was not significant (P=0.06), but the IOP reduction ratio was higher in the CS group (P=0.03). There was no significant difference in the number of postoperative medications (P=0.18). Postoperative complications were similar, but the incidence of flat anterior chamber was higher in the ACA group (P=0.05). Conclusions The use of an AGV can control IOP in the majority of cases whether placed in the ACA or the CS. The IOP reduction ratio seemed to be higher in the CS group. PMID:27983734

  18. Short term outcome of Ahmed glaucoma valve implantation in management of refractory glaucoma in a tertiary hospital in Oman

    PubMed Central

    Shah, Manali R.; Khandekar, Rajiv B.; Zutshi, Rajiv; Mahrooqi, Rahima

    2013-01-01

    Background: We present outcomes of Ahmed Glaucoma Valve (AGV) implantation in treating refractory glaucoma in a tertiary hospital in Oman. Refractory glaucoma was defined as previously failed conventional glaucoma surgery and an uncontrolled intraocular pressure (IOP) of more than 21 mm Hg despite treatment with three topical and/or oral therapies. Materials and Methods: This historical cohort study was conducted in 2010. Details of medical and surgical treatment were recorded. Ophthalmologists examined eyes and performed glaucoma surgeries using AGV. The best corrected distant vision, IOP, and glaucoma medications were prospectively reviewed on 1st day, 1st, 6th, 12th week postoperatively, and at the last follow up. Results: Glaucoma specialists examined and treated 40 eyes with refractory glaucoma of 39 patients (20 males + 19 females). Neo-vascular glaucoma was present in 23 eyes. Vision before surgery was <3/60 in 21 eyes. At 12 weeks, one eye had vision better than 6/12, seven eyes had vision 6/18 to 6/60, and eight eyes had vision 6/60 to 3/60. Mean IOP was reduced from 42.9 (SD 16) to 14.2 (SD 8) and 19.1 (SD 7.8) mmHg at one and 12 weeks after surgery, respectively. At 12 weeks, five (12.5%) eyes had IOP controlled without medication. In 33 (77.5%) eyes, pressure was controlled by using one or two eye drops. The mean number of preoperative anti-glaucoma medications (2.38; SD 1.1) was reduced compared to the mean number of postoperative medications (1.92; SD 0.9) at 12 weeks. Conclusion: We succeeded in reducing visual disabilities and the number of anti-glaucoma medications used to treat refractory glaucoma by AGV surgery. PMID:23772122

  19. Long-term Outcomes of Ahmed Glaucoma Valve Implantation in Refractory Glaucoma at Farabi Eye Hospital, Tehran, Iran.

    PubMed

    Zarei, Reza; Amini, Heidar; Daneshvar, Ramin; Nabi, Fahimeh Naderi; Moghimi, Sasan; Fakhraee, Ghasem; Eslami, Yadollah; Mohammadi, Masoud; Amini, Nima

    2016-01-01

    To describe long-term outcomes and complications of Ahmed glaucoma valve (AGV) implantation in subjects with refractory glaucoma at Farabi Eye Hospital, Tehran, Iran. This retrospective cohort study evaluated patient records of all subjects with refractory glaucoma who had undergone AGV implantation up to January 2013. The main outcome measure was the surgical success rate. Complete success was defined as intraocular pressure (IOP) <22 mmHg, without anti-glaucoma medications or additional surgery. Qualified success was IOP <22 mmHg regardless of number of anti-glaucoma medications. In all cases, loss of vision (no light perception) was considered an independent indicator of failure. Data were also collected on intraoperative and postoperative complications. Twenty-eight eyes were included in the study. With a mean follow-up of 48.2 ± 31.7 months (median: 40.50 months; range: 3-124 months), the IOP decreased from a mean preoperative value of 30.8 ± 5.6 mmHg to 20.0 ± 6.4 mmHg at last visit. The number of medications decreased from 3.7 ± 0.4 preoperatively to 2.5 ± 1.1 postoperatively. Cumulative qualified success was achieved in 69% of eyes. Mean time to failure according to qualified success criteria was 92.3 ± 9.4 months. Postoperative complications were recorded in 16 (57.1%) eyes. The most common complication was focal endothelial corneal decompensation at the site of tube-cornea touch. AGV implantation with adjunctive topical anti-glaucoma drops controlled IOP in approximately 70% of eyes with refractory glaucoma with a median of 40.5 months of follow-up. However, complication rates were higher.

  20. Clinical Outcomes of Ahmed Glaucoma Valve Implantation Using Tube Ligation and Removable External Stents

    PubMed Central

    Lee, Jong Joo; Kim, Dong Myung; Kim, Tae Woo

    2009-01-01

    Purpose To investigate the immediate and long-term outcomes of Ahmed glaucoma valve (AGV) implantation with silicone tube ligation and removable external stents. Methods This retrospective non-comparative study investigated the outcomes of AGV implantation with silicone tube ligation and removable external stents in 95 eyes (90 patients) with at least 12 months of postoperative follow-up. Qualified success was defined as an intraocular pressure (IOP) of ≤21 mmHg and ≥6 mmHg regardless of anti-glaucoma medication. Those who required additional glaucoma surgery, implant removal or who had phthisis bulbi were considered failures. Hypotony was defined as an IOP of <6 mmHg. Results Mean IOP reduced from 37.1±9.7 mmHg preoperatively to 15.2±5.6 mmHg at 12 months postoperatively (p<0.001). Qualified success was achieved in 84.2% at 1 year. Hypotony with an IOP of <6 mmHg was seen in 8.4% and an IOP of <5 mmHg in 3.2% on the first postoperative day. No case of hypotony required surgical intervention. Suprachoroidal hemorrhage did not occur in this study. When stents were removed on the first postoperative day because of an insufficient IOP decrease, the mean IOP decreased significantly from 42.0 mmHg to 14.1 mmHg (p<0.001) after 1 hour. The most common complication was hyphema, which occurred in 17.9%. Conclusions Hypotony-related early complications requiring surgical intervention were reduced by ligation and external stents in the tube. In addition, early postoperative high IOPs were managed by removing external stents. The described method can prevent postoperative hypotony after AGV implantation and showed long-term success rates comparable to those reported previously. PMID:19568356

  1. Clinical outcomes of Ahmed glaucoma valve implantation using tube ligation and removable external stents.

    PubMed

    Lee, Jong Joo; Park, Ki Ho; Kim, Dong Myung; Kim, Tae Woo

    2009-06-01

    To investigate the immediate and long-term outcomes of Ahmed glaucoma valve (AGV) implantation with silicone tube ligation and removable external stents. This retrospective non-comparative study investigated the outcomes of AGV implantation with silicone tube ligation and removable external stents in 95 eyes (90 patients) with at least 12 months of postoperative follow-up. Qualified success was defined as an intraocular pressure (IOP) of ≤21 mmHg and ≥6 mmHg regardless of anti-glaucoma medication. Those who required additional glaucoma surgery, implant removal or who had phthisis bulbi were considered failures. Hypotony was defined as an IOP of <6 mmHg. Mean IOP reduced from 37.1±9.7 mmHg preoperatively to 15.2±5.6 mmHg at 12 months postoperatively (p<0.001). Qualified success was achieved in 84.2% at 1 year. Hypotony with an IOP of <6 mmHg was seen in 8.4% and an IOP of <5 mmHg in 3.2% on the first postoperative day. No case of hypotony required surgical intervention. Suprachoroidal hemorrhage did not occur in this study. When stents were removed on the first postoperative day because of an insufficient IOP decrease, the mean IOP decreased significantly from 42.0 mmHg to 14.1 mmHg (p<0.001) after 1 hour. The most common complication was hyphema, which occurred in 17.9%. Hypotony-related early complications requiring surgical intervention were reduced by ligation and external stents in the tube. In addition, early postoperative high IOPs were managed by removing external stents. The described method can prevent postoperative hypotony after AGV implantation and showed long-term success rates comparable to those reported previously.

  2. A Markov Random Field Framework for Protein Side-Chain Resonance Assignment

    NASA Astrophysics Data System (ADS)

    Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall

    Nuclear magnetic resonance (NMR) spectroscopy plays a critical role in structural genomics, and serves as a primary tool for determining protein structures, dynamics and interactions in physiologically-relevant solution conditions. The current speed of protein structure determination via NMR is limited by the lengthy time required in resonance assignment, which maps spectral peaks to specific atoms and residues in the primary sequence. Although numerous algorithms have been developed to address the backbone resonance assignment problem [68,2,10,37,14,64,1,31,60], little work has been done to automate side-chain resonance assignment [43, 48, 5]. Most previous attempts in assigning side-chain resonances depend on a set of NMR experiments that record through-bond interactions with side-chain protons for each residue. Unfortunately, these NMR experiments have low sensitivity and limited performance on large proteins, which makes it difficult to obtain enough side-chain resonance assignments. On the other hand, it is essential to obtain almost all of the side-chain resonance assignments as a prerequisite for high-resolution structure determination. To overcome this deficiency, we present a novel side-chain resonance assignment algorithm based on alternative NMR experiments measuring through-space interactions between protons in the protein, which also provide crucial distance restraints and are normally required in high-resolution structure determination. We cast the side-chain resonance assignment problem into a Markov Random Field (MRF) framework, and extend and apply combinatorial protein design algorithms to compute the optimal solution that best interprets the NMR data. Our MRF framework captures the contact map information of the protein derived from NMR spectra, and exploits the structural information available from the backbone conformations determined by orientational restraints and a set of discretized side-chain conformations (i.e., rotamers). 
A Hausdorff-based computation is employed in the scoring function to evaluate the probability of side-chain resonance assignments to generate the observed NMR spectra. The complexity of the assignment problem is first reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is used to find a set of optimal side-chain resonance assignments that best fit the NMR data. We have tested our algorithm on NMR data for five proteins, including the FF Domain 2 of human transcription elongation factor CA150 (FF2), the B1 domain of Protein G (GB1), human ubiquitin, the ubiquitin-binding zinc finger domain of the human Y-family DNA polymerase Eta (pol η UBZ), and the human Set2-Rpb1 interacting domain (hSRI). Our algorithm assigns resonances for more than 90% of the protons in the proteins, and achieves about 80% correct side-chain resonance assignments. The final structures computed using distance restraints resulting from the set of assigned side-chain resonances have backbone RMSD 0.5 - 1.4 Å and all-heavy-atom RMSD 1.0 - 2.2 Å from the reference structures that were determined by X-ray crystallography or traditional NMR approaches. These results demonstrate that our algorithm can be successfully applied to automate side-chain resonance assignment and high-quality protein structure determination. Since our algorithm does not require any specific NMR experiments for measuring the through-bond interactions with side-chain protons, it can save a significant amount of both experimental cost and spectrometer time, and hence accelerate the NMR structure determination process.
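
    The dead-end elimination step described above can be sketched in a few lines. The following is a generic single-candidate DEE criterion with illustrative dictionary-based score tables, not the authors' actual Hausdorff-based scoring function: candidate r at a position is discarded when a rival t at the same position beats it even under r's most favourable choice of candidates at every other position.

```python
def dee_prune(self_energy, pair_energy, candidates):
    """Simple (single-candidate) dead-end elimination, repeated to a fixed point.

    self_energy[p][r]      : self score of candidate r at position p
    pair_energy[p][r][q][s]: interaction score between candidate r at p
                             and candidate s at q
    candidates[p]          : list of remaining candidate indices at p

    Candidate r is pruned when some rival t at the same position is better
    even in r's most favourable environment, so r is provably not part of
    the optimal solution.
    """
    pruned = True
    while pruned:
        pruned = False
        for p in candidates:
            for r in list(candidates[p]):
                for t in candidates[p]:
                    if t == r:
                        continue
                    # self-score gap between r and its rival t
                    gap = self_energy[p][r] - self_energy[p][t]
                    # add r's best-case (minimum) margin over every other position
                    for q in candidates:
                        if q == p:
                            continue
                        gap += min(pair_energy[p][r][q][s] - pair_energy[p][t][q][s]
                                   for s in candidates[q])
                    if gap > 0:          # r can never beat t: dead end
                        candidates[p].remove(r)
                        pruned = True
                        break
    return candidates
```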

  3. Resonance assignment of the NMR spectra of disordered proteins using a multi-objective non-dominated sorting genetic algorithm.

    PubMed

    Yang, Yu; Fritzsching, Keith J; Hong, Mei

    2013-11-01

    A multi-objective genetic algorithm is introduced to predict the assignment of protein solid-state NMR (SSNMR) spectra with partial resonance overlap and missing peaks due to broad linewidths, molecular motion, and low sensitivity. This non-dominated sorting genetic algorithm II (NSGA-II) aims to identify all possible assignments that are consistent with the spectra and to compare the relative merit of these assignments. Our approach is modeled after the recently introduced Monte-Carlo simulated-annealing (MC/SA) protocol, with the key difference that NSGA-II simultaneously optimizes multiple assignment objectives instead of searching for possible assignments based on a single composite score. The multiple objectives include maximizing the number of consistently assigned peaks between multiple spectra ("good connections"), maximizing the number of used peaks, minimizing the number of inconsistently assigned peaks between spectra ("bad connections"), and minimizing the number of assigned peaks that have no matching peaks in the other spectra ("edges"). Using six SSNMR protein chemical shift datasets with varying levels of imperfection that was introduced by peak deletion, random chemical shift changes, and manual peak picking of spectra with moderately broad linewidths, we show that the NSGA-II algorithm produces a large number of valid and good assignments rapidly. For high-quality chemical shift peak lists, NSGA-II and MC/SA perform similarly well. However, when the peak lists contain many missing peaks that are uncorrelated between different spectra and have chemical shift deviations between spectra, the modified NSGA-II produces a larger number of valid solutions than MC/SA, and is more effective at distinguishing good from mediocre assignments by avoiding the hazard of suboptimal weighting factors for the various objectives. 
These two advantages, namely diversity and better evaluation, lead to a higher probability of predicting the correct assignment for a larger number of residues. On the other hand, when there are multiple equally good assignments that are significantly different from each other, the modified NSGA-II is less efficient than MC/SA in finding all the solutions. This problem is solved by a combined NSGA-II/MC algorithm, which appears to have the advantages of both NSGA-II and MC/SA. This combination algorithm is robust for the three most difficult chemical shift datasets examined here and is expected to give the highest-quality de novo assignment of challenging protein NMR spectra.
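
    The core of NSGA-II is ranking candidate assignments into successive Pareto fronts rather than collapsing the four objectives into one weighted score. A minimal sketch of that sorting step, with generic objective vectors (all minimized; maximized counts such as "good connections" would be negated first):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into Pareto fronts (front 0 = best).
    Returns lists of indices into `points`."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # a point belongs to the current front if nothing left dominates it
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

    NSGA-II would then fill the next population front by front, using a crowding distance to break ties within a front; that refinement is omitted here.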

  4. Rabies in southeast Brazil: a change in the epidemiological pattern.

    PubMed

    Queiroz, Luzia Helena; Favoretto, Silvana Regina; Cunha, Elenice Maria S; Campos, Angélica Cristine A; Lopes, Marissol Cardoso; de Carvalho, Cristiano; Iamamoto, Keila; Araújo, Danielle Bastos; Venditti, Leandro Lima R; Ribeiro, Erica S; Pedro, Wagner André; Durigon, Edison Luiz

    2012-01-01

    This epidemiological study was conducted using antigenic and genetic characterisation of rabies virus isolates obtained from different animal species in the southeast of Brazil from 1993 to 2007. An alteration in the epidemiological profile was observed. One hundred two samples were tested using a panel of eight monoclonal antibodies, and 94 were genetically characterised by sequencing the nucleoprotein gene. From 1993 to 1997, antigenic variant 2 (AgV-2), related to a rabies virus maintained in dog populations, was responsible for rabies cases in dogs, cats, cattle and horses. Antigenic variant 3 (AgV-3), associated with Desmodus rotundus, was detected in a few cattle samples from rural areas. From 1998 to 2007, rabies virus was detected in bats and urban pets, and four distinct variants were identified. A nucleotide similarity analysis resulted in two primary groups comprising the dog and bat antigenic variants and showing the distinct endemic cycles maintained in the different animal species in this region.

  5. Case study of rotating sonar sensor application in unmanned automated guided vehicle

    NASA Astrophysics Data System (ADS)

    Chandak, Pravin; Cao, Ming; Hall, Ernest L.

    2001-10-01

    A single rotating sonar element is used with a restricted angle of sweep to obtain readings to develop a range map for the unobstructed path of an autonomous guided vehicle (AGV). A Polaroid ultrasound transducer element is mounted on a micromotor with an encoder feedback. The motion of this motor is controlled using a Galil DMC 1000 motion control board. The encoder is interfaced with the DMC 1000 board using an intermediate IMC 1100 break-out board. By adjusting the parameters of the Polaroid element, it is possible to obtain range readings at known angles with respect to the center of the robot. The readings are mapped to obtain a range map of the unobstructed path in front of the robot. The idea can be extended to a 360 degree mapping by changing the assembly level programming on the Galil Motion control board. Such a system would be compact and reliable over a range of environments and AGV applications.
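
    As a rough illustration of the mapping step, each sweep reading (angle, range) can be converted into an obstacle point in the robot frame; the mounting offset below is a hypothetical value, not taken from the paper:

```python
import math

def range_map(readings, sensor_offset=0.2):
    """Convert sweep readings into obstacle points in the robot frame.

    readings      : list of (angle_deg, range_m) pairs from the sweep,
                    angle measured from the robot's forward axis
    sensor_offset : forward distance (m) from robot centre to the
                    transducer pivot (hypothetical mounting value)
    """
    points = []
    for angle_deg, rng in readings:
        a = math.radians(angle_deg)
        x = sensor_offset + rng * math.cos(a)   # forward of robot centre
        y = rng * math.sin(a)                   # left of robot centre
        points.append((x, y))
    return points
```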

  6. Evaluation of Dynamic Channel and Power Assignment for Cognitive Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syed A. Ahmad; Umesh Shukla; Ryan E. Irwin

    2011-03-01

    In this paper, we develop a unifying optimization formulation to describe the Dynamic Channel and Power Assignment (DCPA) problem and an evaluation method for comparing DCPA algorithms. DCPA refers to the allocation of transmit power and frequency channels to links in a cognitive network so as to maximize the total number of feasible links while minimizing the aggregate transmit power. We apply our evaluation method to five algorithms representative of DCPA used in literature. This comparison illustrates the tradeoffs between control modes (centralized versus distributed) and channel/power assignment techniques. We estimate the complexity of each algorithm. Through simulations, we evaluate the effectiveness of the algorithms in achieving feasible link allocations in the network, as well as their power efficiency. Our results indicate that, when few channels are available, the effectiveness of all algorithms is comparable and thus the one with smallest complexity should be selected. The Least Interfering Channel and Iterative Power Assignment (LICIPA) algorithm does not require cross-link gain information, has the overall lowest run time, and highest feasibility ratio of all the distributed algorithms; however, this comes at a cost of higher average power per link.
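
    In the spirit of LICIPA as summarized above, a greedy sketch (with invented names; the paper's actual interference measurement and SINR model are not reproduced here) assigns each link the least loaded channel, then ramps power until a caller-supplied feasibility test passes or the power cap is reached:

```python
def licipa_sketch(links, channels, max_power, power_step, sinr_ok):
    """Greedy channel-then-power assignment sketch.

    sinr_ok(link, channel, power, assignment) is a caller-supplied
    feasibility test standing in for the SINR check; links that never
    pass within the power cap are left unassigned (infeasible).
    """
    assignment = {}                 # link -> (channel, power)
    load = {c: 0 for c in channels}
    for link in links:
        ch = min(channels, key=lambda c: load[c])   # least-loaded channel proxy
        power = power_step
        while power <= max_power and not sinr_ok(link, ch, power, assignment):
            power += power_step                     # iterative power increase
        if power <= max_power:
            assignment[link] = (ch, power)
            load[ch] += 1
    return assignment
```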

  7. Dynamic traffic assignment : genetic algorithms approach

    DOT National Transportation Integrated Search

    1997-01-01

    Real-time route guidance is a promising approach to alleviating congestion on the nation's highways. A dynamic traffic assignment model is central to the development of guidance strategies. The artificial intelligence technique of genetic algorithm...

  8. Network reliability maximization for stochastic-flow network subject to correlated failures using genetic algorithm and tabu search

    NASA Astrophysics Data System (ADS)

    Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun

    2018-07-01

    Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors respect the network reliability maximization by finding the optimal multi-state resource assignment, which is one resource to each arc. However, a disaster may cause correlated failures for the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments are adopted to demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
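
    The tabu-search half of such a hybrid can be sketched generically: move to the best non-tabu neighbour at each iteration and keep a short memory of recent solutions to escape local optima. The reliability evaluation (minimal paths, recursive sum of disjoint products, correlated binomial distribution) is abstracted here into a `score` callback; all names are illustrative, not the HGTA implementation:

```python
import random

def tabu_search(start, neighbours, score, iters=100, tenure=5, seed=0):
    """Minimal tabu search maximizing `score`.

    neighbours(state, rng) : returns candidate moves from `state`
    tenure                 : how many recent states stay forbidden
    """
    rng = random.Random(seed)
    current = best = start
    tabu = [start]                      # short-term memory of visited states
    for _ in range(iters):
        cands = [n for n in neighbours(current, rng) if n not in tabu]
        if not cands:
            break
        current = max(cands, key=score)  # best admissible move, even if worse
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if score(current) > score(best):
            best = current
    return best
```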

  9. Evaluation of Automatically Assigned Job-Specific Interview Modules

    PubMed Central

    Friesen, Melissa C.; Lan, Qing; Ge, Calvin; Locke, Sarah J.; Hosgood, Dean; Fritschi, Lin; Sadkowsky, Troy; Chen, Yu-Cheng; Wei, Hu; Xu, Jun; Lam, Tai Hing; Kwong, Yok Lam; Chen, Kexin; Xu, Caigang; Su, Yu-Chieh; Chiu, Brian C. H.; Ip, Kai Ming Dennis; Purdue, Mark P.; Bassig, Bryan A.; Rothman, Nat; Vermeulen, Roel

    2016-01-01

    Objective: In community-based epidemiological studies, job- and industry-specific ‘modules’ are often used to systematically obtain details about the subject’s work tasks. The module assignment is often made by the interviewer, who may have insufficient occupational hygiene knowledge to assign the correct module. We evaluated, in the context of a case–control study of lymphoid neoplasms in Asia (‘AsiaLymph’), the performance of an algorithm that provided automatic, real-time module assignment during a computer-assisted personal interview. Methods: AsiaLymph’s occupational component began with a lifetime occupational history questionnaire with free-text responses and three solvent exposure screening questions. To assign each job to one of 23 study-specific modules, an algorithm automatically searched the free-text responses to the questions ‘job title’ and ‘product made or services provided by employer’ using a list of module-specific keywords, comprising over 5800 keywords in English, Traditional and Simplified Chinese. Hierarchical decision rules were used when the keyword match triggered multiple modules. If no keyword match was identified, a generic solvent module was assigned if the subject responded ‘yes’ to any of the three solvent screening questions. If these question responses were all ‘no’, a work location module was assigned, which redirected the subject to the farming, teaching, health professional, solvent, or industry solvent modules or ended the questions for that job, depending on the location response. We conducted a reliability assessment that compared the algorithm-assigned modules to consensus module assignments made by two industrial hygienists for a subset of 1251 (of 11409) jobs selected using a stratified random selection procedure using module-specific strata. 
Discordant assignments between the algorithm and consensus assignments (483 jobs) were qualitatively reviewed by the hygienists to evaluate the potential information lost from missed questions with using the algorithm-assigned module (none, low, medium, high). Results: The most frequently assigned modules were the work location (33%), solvent (20%), farming and food industry (19%), and dry cleaning and textile industry (6.4%) modules. In the reliability subset, the algorithm assignment had an exact match to the expert consensus-assigned module for 722 (57.7%) of the 1251 jobs. Overall, adjusted for the proportion of jobs in each stratum, we estimated that 86% of the algorithm-assigned modules would result in no information loss, 2% would have low information loss, and 12% would have medium to high information loss. Medium to high information loss occurred for <10% of the jobs assigned the generic solvent module and for 21, 32, and 31% of the jobs assigned the work location module with location responses of ‘someplace else’, ‘factory’, and ‘don’t know’, respectively. Other work location responses had ≤8% with medium to high information loss because of redirections to other modules. Medium to high information loss occurred more frequently when a job description matched with multiple keywords pointing to different modules (29–69%, depending on the triggered assignment rule). Conclusions: These evaluations demonstrated that automatically assigned modules can reliably reproduce an expert’s module assignment without the direct involvement of an industrial hygienist or interviewer. The feasibility of adapting this framework to other studies will be language- and exposure-specific. PMID:27250109

  10. Evaluation of Automatically Assigned Job-Specific Interview Modules.

    PubMed

    Friesen, Melissa C; Lan, Qing; Ge, Calvin; Locke, Sarah J; Hosgood, Dean; Fritschi, Lin; Sadkowsky, Troy; Chen, Yu-Cheng; Wei, Hu; Xu, Jun; Lam, Tai Hing; Kwong, Yok Lam; Chen, Kexin; Xu, Caigang; Su, Yu-Chieh; Chiu, Brian C H; Ip, Kai Ming Dennis; Purdue, Mark P; Bassig, Bryan A; Rothman, Nat; Vermeulen, Roel

    2016-08-01

    In community-based epidemiological studies, job- and industry-specific 'modules' are often used to systematically obtain details about the subject's work tasks. The module assignment is often made by the interviewer, who may have insufficient occupational hygiene knowledge to assign the correct module. We evaluated, in the context of a case-control study of lymphoid neoplasms in Asia ('AsiaLymph'), the performance of an algorithm that provided automatic, real-time module assignment during a computer-assisted personal interview. AsiaLymph's occupational component began with a lifetime occupational history questionnaire with free-text responses and three solvent exposure screening questions. To assign each job to one of 23 study-specific modules, an algorithm automatically searched the free-text responses to the questions 'job title' and 'product made or services provided by employer' using a list of module-specific keywords, comprising over 5800 keywords in English, Traditional and Simplified Chinese. Hierarchical decision rules were used when the keyword match triggered multiple modules. If no keyword match was identified, a generic solvent module was assigned if the subject responded 'yes' to any of the three solvent screening questions. If these question responses were all 'no', a work location module was assigned, which redirected the subject to the farming, teaching, health professional, solvent, or industry solvent modules or ended the questions for that job, depending on the location response. We conducted a reliability assessment that compared the algorithm-assigned modules to consensus module assignments made by two industrial hygienists for a subset of 1251 (of 11409) jobs selected using a stratified random selection procedure using module-specific strata. 
Discordant assignments between the algorithm and consensus assignments (483 jobs) were qualitatively reviewed by the hygienists to evaluate the potential information lost from missed questions with using the algorithm-assigned module (none, low, medium, high). The most frequently assigned modules were the work location (33%), solvent (20%), farming and food industry (19%), and dry cleaning and textile industry (6.4%) modules. In the reliability subset, the algorithm assignment had an exact match to the expert consensus-assigned module for 722 (57.7%) of the 1251 jobs. Overall, adjusted for the proportion of jobs in each stratum, we estimated that 86% of the algorithm-assigned modules would result in no information loss, 2% would have low information loss, and 12% would have medium to high information loss. Medium to high information loss occurred for <10% of the jobs assigned the generic solvent module and for 21, 32, and 31% of the jobs assigned the work location module with location responses of 'someplace else', 'factory', and 'don't know', respectively. Other work location responses had ≤8% with medium to high information loss because of redirections to other modules. Medium to high information loss occurred more frequently when a job description matched with multiple keywords pointing to different modules (29-69%, depending on the triggered assignment rule). These evaluations demonstrated that automatically assigned modules can reliably reproduce an expert's module assignment without the direct involvement of an industrial hygienist or interviewer. The feasibility of adapting this framework to other studies will be language- and exposure-specific. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2016.
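
    The routing logic described above (keyword match, tie-breaking among multiple matched modules, solvent-screening fallback, work-location fallback) can be sketched as follows; the module names, keywords, and priority rule here are illustrative placeholders, not the study's actual 5800-term lists or hierarchical decision rules:

```python
def assign_module(job_title, products, solvent_flags, keywords, priority):
    """Sketch of the automatic module router.

    keywords      : {module: [keyword, ...]} searched in the free text
    priority      : ordered module list used as a stand-in for the
                    study's hierarchical decision rules on multiple matches
    solvent_flags : answers to the three solvent screening questions
    """
    text = (job_title + ' ' + products).lower()
    matches = {m for m, words in keywords.items()
               if any(w in text for w in words)}
    if matches:
        return min(matches, key=priority.index)   # highest-priority match wins
    if any(solvent_flags):                        # any screening question 'yes'
        return 'generic_solvent'
    return 'work_location'                        # redirects or ends the job's questions
```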

  11. Application of a fast sorting algorithm to the assignment of mass spectrometric cross-linking data.

    PubMed

    Petrotchenko, Evgeniy V; Borchers, Christoph H

    2014-09-01

    Cross-linking combined with MS involves enzymatic digestion of cross-linked proteins and identification of the cross-linked peptides. Assignment of cross-linked peptide masses requires a search of all possible binary combinations of peptides from the cross-linked proteins' sequences, which becomes impractical with increasing complexity of the protein system and/or if digestion enzyme specificity is relaxed. Here, we describe the application of a fast sorting algorithm to search large sequence databases for cross-linked peptide assignments based on mass. This same algorithm has been used previously for assigning disulfide-bridged peptides (Choi et al., ), but has not previously been applied to cross-linking studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
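    The abstract does not spell out the algorithm's details, but a common sort-based way to avoid enumerating all binary peptide combinations is to sort the peptide masses once and then scan with two pointers for pairs whose summed mass (plus the cross-linker mass) matches the observed mass. The masses, linker mass, and tolerance below are illustrative, not values from the cited work:

```python
# Hedged sketch: find peptide pairs whose summed mass plus a cross-linker
# mass matches an observed precursor mass, via sorting + two-pointer scan
# (O(n log n) instead of checking all O(n^2) pairs explicitly).
def find_crosslink_pairs(peptide_masses, observed, linker_mass, tol=0.01):
    masses = sorted(peptide_masses)
    target = observed - linker_mass
    pairs = []
    lo, hi = 0, len(masses) - 1
    while lo <= hi:                      # lo == hi allows homodimeric pairs
        s = masses[lo] + masses[hi]
        if abs(s - target) <= tol:
            pairs.append((masses[lo], masses[hi]))
            lo += 1
            hi -= 1
        elif s < target:
            lo += 1
        else:
            hi -= 1
    return pairs

print(find_crosslink_pairs([500.2, 700.4, 1000.6, 1200.8], 1701.2, 500.6))
# → [(500.2, 700.4)]
```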

  12. AUTOBA: automation of backbone assignment from HN(C)N suite of experiments.

    PubMed

    Borkar, Aditi; Kumar, Dinesh; Hosur, Ramakrishna V

    2011-07-01

    Development of efficient strategies and automation represent important milestones of progress in rapid structure determination efforts in proteomics research. In this context, we present here an efficient algorithm named AUTOBA (Automatic Backbone Assignment), designed to automate the assignment protocol based on the HN(C)N suite of experiments. Depending upon the spectral dispersion, the user can record 2D or 3D versions of the experiments for assignment. The algorithm uses as inputs: (i) the protein primary sequence and (ii) peak lists from the user-defined HN(C)N suite of experiments. In the end, one gets H(N), (15)N, C(α) and C' assignments (in common BMRB format) for the individual residues along the polypeptide chain. The success of the algorithm has been demonstrated not only with experimental spectra recorded on two small globular proteins, ubiquitin (76 aa) and M-crystallin (85 aa), but also with simulated spectra of 27 other proteins using assignment data from the BMRB.

  13. Analysis of labor employment assessment on production machine to minimize time production

    NASA Astrophysics Data System (ADS)

    Hernawati, Tri; Suliawati; Sari Gumay, Vita

    2018-03-01

    Every company, whether in services or manufacturing, strives to improve the efficiency of its resource use. One resource with an important role is labor, and workers have different efficiency levels for different jobs. Problems concerning the optimal allocation of workers with differing efficiencies to different jobs are called assignment problems, a special case of linear programming. In this research, an analysis of labor employment assessment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain an optimal assignment of labor to production machines that minimizes production time. The results showed that the existing assignment is not optimal, because its completion time is longer than that of the assignment produced by the Hungarian algorithm; applying the Hungarian algorithm yielded a time savings of 16%.
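    A tiny worked instance of the assignment problem this abstract describes: `times[i][j]` is the (hypothetical) completion time of worker i on machine j. For a 3×3 instance the optimum can be checked by brute force over all permutations; the Hungarian algorithm finds the same optimum in polynomial time for large instances:

```python
# Brute-force check of the optimal worker-to-machine assignment that the
# Hungarian algorithm would compute. Times are illustrative.
from itertools import permutations

times = [[9, 2, 7],
         [6, 4, 3],
         [5, 8, 1]]

# best[i] = machine assigned to worker i, minimizing total completion time
best = min(permutations(range(3)),
           key=lambda p: sum(times[i][p[i]] for i in range(3)))
print(best, sum(times[i][best[i]] for i in range(3)))  # → (1, 0, 2) 9
```

Brute force is O(n!) and only viable for toy sizes; the Hungarian algorithm solves the same problem in O(n³).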

  14. Simulated annealing algorithm for solving chambering student-case assignment problem

    NASA Astrophysics Data System (ADS)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem, and the challenge of solving it rises with the complexity of preferences, the presence of real-world constraints, and problem size. This study focuses on solving a chambering student-case assignment problem, classified as a project assignment problem, using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because a good solution can be returned in reasonable time. Assigning chambering students to cases has not previously been addressed in the literature. Law graduates must complete chambering before they are qualified to become legal counsel, so assigning chambering students to cases is critically needed, especially when many preferences are involved. This study therefore presents a preliminary treatment of the proposed problem, with the objective of minimizing the total completion time for all students over the given cases. A minimum-cost greedy heuristic is employed to construct a feasible initial solution, and the search then proceeds with a simulated annealing algorithm to further improve solution quality. Analysis of the results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic, demonstrating the advantages of solving project assignment problems with metaheuristic techniques.
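    The improve-an-initial-solution-by-annealing idea can be sketched as below. The cost matrix, swap neighborhood, and cooling schedule are illustrative choices, not the study's actual model:

```python
# Minimal simulated-annealing sketch for an assignment-type problem: start
# from an initial solution and repeatedly swap two assignments, accepting
# worse moves with a temperature-dependent probability.
import math, random

random.seed(0)
cost = [[4, 2, 8], [6, 3, 5], [7, 9, 1]]   # cost[i][j]: student i on case j

def total(assign):
    return sum(cost[i][j] for i, j in enumerate(assign))

assign = [0, 1, 2]                  # initial (e.g. greedy) solution
T = 10.0
while T > 0.01:
    i, j = random.sample(range(3), 2)
    cand = assign[:]
    cand[i], cand[j] = cand[j], cand[i]        # swap two case assignments
    delta = total(cand) - total(assign)
    if delta < 0 or random.random() < math.exp(-delta / T):
        assign = cand                          # accept better, or worse w.p. e^(-delta/T)
    T *= 0.95                                  # geometric cooling
print(assign, total(assign))
```

Accepting occasional uphill moves at high temperature is what lets annealing escape the local optima a pure greedy improvement would get stuck in.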

  15. More reliable protein NMR peak assignment via improved 2-interval scheduling.

    PubMed

    Chen, Zhi-Zhong; Lin, Guohui; Rizzi, Romeo; Wen, Jianjun; Xu, Dong; Xu, Ying; Jiang, Tao

    2005-03-01

    Protein NMR peak assignment refers to the process of assigning a group of "spin systems" obtained experimentally to a protein sequence of amino acids. The automation of this process is still an unsolved and challenging problem in NMR protein structure determination. Recently, protein NMR peak assignment has been formulated as an interval scheduling problem (ISP), where a protein sequence P of amino acids is viewed as a discrete time interval I (the amino acids on P one-to-one correspond to the time units of I), each subset S of spin systems that are known to originate from consecutive amino acids of P is viewed as a "job" j(S), the preference of assigning S to a subsequence P′ of consecutive amino acids on P is viewed as the profit of executing job j(S) in the subinterval of I corresponding to P′, and the goal is to maximize the total profit of executing the jobs (on a single machine) during I. The interval scheduling problem is MAX SNP-hard in general; but in the real practice of protein NMR peak assignment, each job j(S) usually requires at most 10 consecutive time units, and typically the jobs that require one or two consecutive time units are the most difficult to assign/schedule. In order to solve these most difficult assignments, we present an efficient 13/7-approximation algorithm for the special case of the interval scheduling problem in which each job takes one or two consecutive time units. Combining this algorithm with a greedy filtering strategy for handling long jobs (i.e., jobs that need more than two consecutive time units), we obtain a new efficient heuristic for protein NMR peak assignment. Our experimental study shows that the new heuristic produces the best peak assignment in most cases, compared with the NMR peak assignment algorithms in the recent literature. The above algorithm is also the first approximation algorithm for a nontrivial case of the well-known interval scheduling problem that breaks the ratio-2 barrier.
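    To make the ISP formulation concrete, here is a simple profit-greedy baseline (not the paper's 13/7-approximation): each candidate placement is a (job, start, length, profit) tuple, and placements are taken in decreasing profit, skipping jobs already placed or time units already occupied. All data values are invented:

```python
# Greedy baseline for the interval-scheduling view of peak assignment.
# candidates: hypothetical (spin-system-set, start, length, profit) tuples.
candidates = [
    ("A", 0, 2, 9.0),
    ("B", 1, 1, 7.0),
    ("A", 3, 2, 6.5),
    ("B", 4, 1, 5.0),
    ("C", 2, 1, 4.0),
]

def greedy_schedule(cands):
    placed, used, profit = {}, set(), 0.0
    for job, start, length, p in sorted(cands, key=lambda c: -c[3]):
        units = set(range(start, start + length))
        if job not in placed and not units & used:   # job unplaced, slots free
            placed[job] = start
            used |= units
            profit += p
    return placed, profit

print(greedy_schedule(candidates))  # → ({'A': 0, 'B': 4, 'C': 2}, 18.0)
```

A greedy like this can be a factor of 2 off the optimum in the worst case, which is exactly the barrier the paper's 13/7-approximation breaks for jobs of length one or two.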

  16. Rabies Virus in Bats, State of Pará, Brazil, 2005-2011.

    PubMed

    Pereira, Armando de Souza; Casseb, Livia Medeiros Neves; Barbosa, Taciana Fernandes Souza; Begot, Alberto Lopes; Brito, Roberto Messias Oliveira; Vasconcelos, Pedro Fernando da Costa; Travassos da Rosa, Elizabeth Salbé

    2017-08-01

    Rabies is an acute, progressive zoonotic viral infection that in general produces a fatal outcome. This disease is responsible for deaths in humans and animals worldwide and, because it can affect all mammals, is considered one of the most important viral infections for public health. This study aimed to determine the prevalence of rabies in bats of different species found in municipalities of the state of Pará from 2005 to 2011. The rabies virus was detected in 12 (0.39%) bats in a total of 3100 analyzed, including hematophagous, frugivorous, and insectivorous bats. Of these, eleven were characterized as AgV3, which is characteristic of the hematophagous bat Desmodus rotundus (E. Geoffroy 1810); one insectivorous animal showed a different profile compatible with the Eptesicus pattern and may therefore be a new antigenic variant. This study identified the need for greater intensification of epidemiological surveillance in municipalities lacking rabies surveillance (silent areas); studies of rabies virus in bats with different alimentary habits, studies investigating the prevalence of AgV3, and prophylactic measures in areas where humans may be infected are also needed.

  17. Adaptive protection algorithm and system

    DOEpatents

    Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
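    One way to read the patented idea: trace power flow from the source through a radial distribution network, rank each breaker by its position in that flow, and derive trip set points from the ranks so downstream breakers trip first. The topology, BFS ranking, and halve-per-level set-point rule below are invented for illustration and are not the patent's actual method:

```python
# Hedged sketch: rank breakers by depth in a radial feed tree (BFS from the
# source), then assign trip set points that shrink with depth. All values
# and the set-point rule are hypothetical.
feeds = {"main": ["feeder1", "feeder2"],   # breaker -> downstream breakers
         "feeder1": ["branch1"],
         "feeder2": [],
         "branch1": []}

def rank_breakers(root):
    rank, queue = {root: 0}, [root]
    while queue:                       # breadth-first trace of the power flow
        node = queue.pop(0)
        for child in feeds[node]:
            rank[child] = rank[node] + 1
            queue.append(child)
    return rank

ranks = rank_breakers("main")
set_points = {b: 100.0 / (2 ** r) for b, r in ranks.items()}  # halve per level
print(ranks)        # → {'main': 0, 'feeder1': 1, 'feeder2': 1, 'branch1': 2}
print(set_points)
```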

  18. A Genetic-Based Scheduling Algorithm to Minimize the Makespan of the Grid Applications

    NASA Astrophysics Data System (ADS)

    Entezari-Maleki, Reza; Movaghar, Ali

    Task scheduling algorithms in grid environments strive to maximize the overall throughput of the grid. In order to maximize throughput, the makespan of the grid tasks should be minimized. In this paper, a new task scheduling algorithm is proposed to assign tasks to grid resources with the goal of minimizing the total makespan of the tasks. The algorithm uses a genetic approach to find a suitable assignment within the grid resources. The experimental results obtained from applying the proposed algorithm to schedule independent tasks within grid environments demonstrate its ability to achieve schedules with comparatively lower makespan than other well-known scheduling algorithms such as Min-min, Max-min, RASA, and Sufferage.
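    A minimal genetic-algorithm sketch for the task-to-resource assignment with a makespan objective described above. Task workloads, resource speeds, and all GA parameters are illustrative, not the paper's:

```python
# Toy GA: chromosome chrom[i] = resource assigned to task i; fitness is the
# makespan (max per-resource load). Truncation selection, one-point
# crossover, and point mutation are deliberately simple choices.
import random

random.seed(1)
task_len = [4, 3, 7, 2, 5]        # task workloads (illustrative)
speed = [1.0, 2.0]                # resource speeds

def makespan(chrom):
    load = [0.0] * len(speed)
    for t, r in zip(task_len, chrom):
        load[r] += t / speed[r]
    return max(load)

pop = [[random.randrange(len(speed)) for _ in task_len] for _ in range(20)]
for _ in range(50):
    pop.sort(key=makespan)
    parents = pop[:10]                           # truncation selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(task_len)) # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                # point mutation
            child[random.randrange(len(task_len))] = random.randrange(len(speed))
        children.append(child)
    pop = parents + children
best = min(pop, key=makespan)
print(best, makespan(best))
```

For this instance the optimal makespan is 7.0 (14 workload units on the fast resource, 7 on the slow one); the GA usually finds it, but like any metaheuristic it carries no guarantee.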

  19. Outcomes of Ahmed glaucoma valve implantation in advanced primary congenital glaucoma with previous surgical failure.

    PubMed

    Huang, Jingjing; Lin, Jialiu; Wu, Ziqiang; Xu, Hongzhi; Zuo, Chengguo; Ge, Jian

    2015-01-01

    The purpose of this study was to evaluate the intermediate surgical results of Ahmed glaucoma valve (AGV) implantation in patients less than 7 years of age with advanced primary congenital glaucoma who have failed previous surgeries. Consecutive patients with advanced primary congenital glaucoma that failed previous operations and had undergone subsequent AGV implantation were evaluated retrospectively. Surgical success was defined as 1) intraocular pressure (IOP) ≥6 and ≤21 mmHg; 2) IOP reduction of at least 30% relative to preoperative values; and 3) no need for additional surgical intervention for IOP control, loss of light perception, or serious complications. Fourteen eyes of eleven patients were studied. Preoperatively, the average axial length was 27.71±1.52 (25.56-30.80) mm, corneal diameter was 14.71±1.07 (13.0-16.0) mm, cup-to-disc ratio was 0.95±0.04 (0.9-1.0), and IOP was 39.5±5.7 (30-55) mmHg. The mean follow-up time was 18.29±10.96 (5-44, median 18) months. There were significant reductions in IOP and the number of glaucoma medications (P<0.001) postoperatively. The IOPs after operation were 11.3±3.4, 13.6±5.1, 16.3±2.7, and 16.1±2.6 mmHg at 1, 6, 12, and 18 months, respectively. Kaplan-Meier estimates of the cumulative probability of valve success were 85.7%, 71.4%, and 71.4% at 6, 12, and 18 months, respectively. Severe surgical complications, including erosion of tube, endophthalmitis, retinal detachment, choroidal detachment, and delayed suprachoroidal hemorrhage, occurred in 28.6% of cases. AGV implantation remains a viable option for patients with advanced primary congenital glaucoma unresponsive to previous surgical intervention, despite a relatively high incidence of severe surgical complications.

  20. Long-term outcomes of uveitic glaucoma treated with Ahmed valve implant in a series of Chinese patients.

    PubMed

    Bao, Ning; Jiang, Zheng-Xuan; Coh, Paul; Tao, Li-Ming

    2018-01-01

    To report long-term outcomes of secondary glaucoma due to uveitis treated with Ahmed glaucoma valve (AGV) implantation in a series of Chinese patients. The retrospective study included 67 eyes from 56 patients with uveitic glaucoma who underwent AGV implantation. Success of the treatment was defined as patients achieving intraocular pressure (IOP) levels between 6 and 21 mm Hg with or without additional anti-glaucoma medications and/or a minimum of 20% reduction from baseline IOP. The main outcome measurements included IOP, the number of glaucoma medications at 1, 3, 6, 12, 24, 36, 48 and 60 mo after surgery, surgical complications, final best-corrected visual acuity (BCVA), visual field (VF) and retinal nerve fiber layer (RNFL). The mean follow-up was 53.3±8.5 (range 48 to 60) mo. The cumulative probability of success was 98.5%, 95.5%, 89.6%, 83.6%, 76.1%, 70.1%, 65.7% and 61.2% at 1, 3, 6, 12, 24, 36, 48 and 60 mo, respectively. IOP was reduced from a baseline of 30.8±6.8 to 9.9±4.1, 10.1±4.2, 10.9±3.7, 12.9±4.6, 13.8±3.9, 13.2±4.6, 12.3±3.5 and 13.1±3.7 mm Hg at 1, 3, 6, 12, 24, 36, 48 and 60 mo, respectively (P<0.01). The number of postoperative glaucoma medications was significantly decreased compared with baseline at all time points during the study period (P<0.05). There was no significant difference between preoperative and postoperative BCVA. Remarkable surgical complications were not found after surgery. The VF and RNFL of the patients were stable after the surgery. AGV implantation is safe and effective in terms of reducing IOP, decreasing the number of glaucoma medications, and preserving vision for patients with uveitic glaucoma.

  1. Use of Mitomycin C to reduce the incidence of encapsulated cysts following ahmed glaucoma valve implantation in refractory glaucoma patients: a new technique.

    PubMed

    Zhou, Minwen; Wang, Wei; Huang, Wenbin; Zhang, Xiulan

    2014-09-06

    To evaluate the surgical outcome of Ahmed glaucoma valve (AGV) implantation with a new technique of mitomycin C (MMC) application. This is a retrospective study. All patients with refractory glaucoma underwent FP-7 AGV implantation. Two methods of MMC application were used. In the traditional technique, 6 × 4 mm cotton soaked with MMC (0.25-0.33 mg/ml) was placed in the implantation area for 2-5 min; in the new technique, the valve plate was first encompassed with a thin layer of cotton soaked with MMC, then inserted into the same area. A 200 ml balanced salt solution was applied for irrigation of MMC. The surgical success rate, intraocular pressure (IOP), number of anti-glaucoma medications used, and postoperative complications were analyzed between the groups, and the surgical outcomes of the two MMC application techniques were compared. The new technique group had only one case (2.6%) of encapsulated cyst formation out of 38 eyes, while there were eight cases (19.5%) out of 41 eyes in the traditional group; the difference was statistically significant (P = 0.030). At the follow-up end point, the success rate was 89.5% in the new technique group and 70.7% in the traditional group, a significant difference between the two groups (P = 0.035). Mean IOPs in the new technique group were significantly lower than those of the traditional group at 3 and 6 months (P < 0.05). By using a thin layer of cotton soaked with MMC to encompass the valve plate, the new MMC application technique could greatly decrease the incidence of encapsulated cysts and increase the success rate following AGV implantation.

  2. Outcomes of Ahmed glaucoma valve implantation in advanced primary congenital glaucoma with previous surgical failure

    PubMed Central

    Huang, Jingjing; Lin, Jialiu; Wu, Ziqiang; Xu, Hongzhi; Zuo, Chengguo; Ge, Jian

    2015-01-01

    Purpose The purpose of this study was to evaluate the intermediate surgical results of Ahmed glaucoma valve (AGV) implantation in patients less than 7 years of age, with advanced primary congenital glaucoma who have failed previous surgeries. Patients and methods Consecutive patients with advanced primary congenital glaucoma that failed previous operations and had undergone subsequent AGV implantation were evaluated retrospectively. Surgical success was defined as 1) intraocular pressure (IOP) ≥6 and ≤21 mmHg; 2) IOP reduction of at least 30% relative to preoperative values; and 3) no need for additional surgical intervention for IOP control, loss of light perception, or serious complications. Results Fourteen eyes of eleven patients were studied. Preoperatively, the average axial length was 27.71±1.52 (25.56–30.80) mm, corneal diameter was 14.71±1.07 (13.0–16.0) mm, cup-to-disc ratio was 0.95±0.04 (0.9–1.0), and IOP was 39.5±5.7 (30–55) mmHg. The mean follow-up time was 18.29±10.96 (5–44, median 18) months. There were significant reductions in IOP and the number of glaucoma medications (P<0.001) postoperatively. The IOPs after operation were 11.3±3.4, 13.6±5.1, 16.3±2.7, and 16.1±2.6 mmHg at 1, 6, 12, and 18 months, respectively. Kaplan–Meier estimates of the cumulative probability of valve success were 85.7%, 71.4%, and 71.4% at 6, 12, and 18 months, respectively. Severe surgical complications, including erosion of tube, endophthalmitis, retinal detachment, choroidal detachment, and delayed suprachoroidal hemorrhage, occurred in 28.6% of cases. Conclusion AGV implantation remains a viable option for patients with advanced primary congenital glaucoma unresponsive to previous surgical intervention, despite a relatively high incidence of severe surgical complications. PMID:26082610

  3. Clinical efficacy analysis of Ahmed glaucoma valve implantation in neovascular glaucoma and influencing factors

    PubMed Central

    He, Ye; Tian, Ying; Song, Weitao; Su, Ting; Jiang, Haibo; Xia, Xiaobo

    2017-01-01

    Abstract This study aimed to evaluate the efficacy of Ahmed glaucoma valve (AGV) implantation in treating neovascular glaucoma (NVG) and to analyze the factors influencing the surgical success rate. This is a retrospective review of 40 eyes of 40 NVG patients who underwent AGV implantation at Xiangya Hospital of Central South University, China, between January 2014 and December 2016. Pre- and postoperative intraocular pressure (IOP), visual acuity, surgical success rate, medications, and complications were observed. Surgical success was defined as IOP ≤21 and >6 mm Hg with or without additional medications. Kaplan–Meier survival curves and multivariate Cox regression analysis were used to examine success rates and risk factors for surgical outcomes. The mean follow-up period was 8.88 ± 3.12 months (range: 3–17). IOP declined significantly at each postoperative visit (P < .001). An average of 3.55 ± 0.86 drugs was applied preoperatively, while an average of 0.64 ± 0.90 drugs was used postoperatively, a statistically significant difference (P < .05). The complete surgical success rate at 3, 6, and 12 months after the operation was 85%, 75%, and 65%, respectively, while the qualified success rate was 85%, 80%, and 77.5%, respectively. The multivariate Cox regression analysis showed that age (hazard ratio: 3.717, 7.246; 95% confidence interval: 1.149–12.048, 1.349–38.461; P = .028, .021) was an influencing factor for both the complete and the qualified success rate among all NVG patients. Gender, previous operation history, primary disease, and preoperative IOP were not significant. AGV implantation is an effective and safe surgical method to treat NVG. Age is an important factor influencing the surgical success rate. PMID:29049253

  4. Outcome of Descemet stripping automated endothelial keratoplasty in eyes with an Ahmed glaucoma valve.

    PubMed

    Chiam, Patrick J; Cheeseman, Robert; Ho, Vivian W; Romano, Vito; Choudhary, Anshoo; Batterbury, Mark; Kaye, Stephen B; Willoughby, Colin E

    2017-05-01

    The purpose was to investigate the survival of Descemet stripping automated endothelial keratoplasty (DSAEK) in eyes with an Ahmed glaucoma valve (AGV). This was a retrospective case series of patients with an AGV in the anterior chamber undergoing DSAEK. Included in the analysis were graft size, number of previous operations, post-operative glaucoma medications, post-operative intraocular pressure (IOP) control, and donor factors (age, endothelial cell density, and post-mortem time). A generalised linear model with binary logistic regression was used to test for an effect on graft survival at 1 year and 1.5 years. Fourteen eyes from 13 patients were included. The survival rate of the first DSAEK at 6, 12, 18, 24 and 30 months was 85%, 71%, 50%, 36% and 30%, respectively. The mean duration to graft failure was 12.9 ± 6.2 months. Five of the seven failed first grafts went on to have a repeat DSAEK. The mean follow-up in this subgroup was 30.7 ± 18.4 months. The survival rate of the second DSAEK at 6, 12, 18 and 24 months was 100% (5/5), 100% (5/5), 75% (3/4) and 67% (2/3). Only one second DSAEK failed during the study and went on to receive a third DSAEK, which failed at 18 months. The mean IOP within the first year was significantly lower for grafts that survived at 1 and 1.5 years (17.4 mmHg, 16.9 mmHg) than for grafts that failed (19.4 mmHg, 19.4 mmHg) (p = 0.04, p = 0.009). DSAEK is a viable alternative to PK to restore visual function in eyes with an AGV sited in the anterior chamber. IOP is an important risk factor for graft failure.

  5. Integrated consensus-based frameworks for unmanned vehicle routing and targeting assignment

    NASA Astrophysics Data System (ADS)

    Barnawi, Waleed T.

    Unmanned aerial vehicles (UAVs) are increasingly deployed in complex and dynamic environments to perform multiple tasks cooperatively with other UAVs that contribute to overarching mission effectiveness. Studies by the Department of Defense (DoD) indicate future operations may include anti-access/area-denial (A2AD) environments which limit human teleoperator decision-making and control. This research addresses the problem of decentralized vehicle re-routing and task reassignments through consensus-based UAV decision-making. An Integrated Consensus-Based Framework (ICF) is formulated as a solution to the combined single task assignment problem and vehicle routing problem. The multiple assignment and vehicle routing problem is solved with the Integrated Consensus-Based Bundle Framework (ICBF). The frameworks are hierarchically decomposed into two levels. The bottom layer utilizes the renowned Dijkstra's Algorithm. The top layer addresses task assignment with two methods. The single assignment approach is called the Caravan Auction Algorithm (CarA) Algorithm. This technique extends the Consensus-Based Auction Algorithm (CBAA) to provide awareness for task completion by agents and adopt abandoned tasks. The multiple assignment approach called the Caravan Auction Bundle Algorithm (CarAB) extends the Consensus-Based Bundle Algorithm (CBBA) by providing awareness for lost resources, prioritizing remaining tasks, and adopting abandoned tasks. Research questions are investigated regarding the novelty and performance of the proposed frameworks. Conclusions regarding the research questions will be provided through hypothesis testing. Monte Carlo simulations will provide evidence to support conclusions regarding the research hypotheses for the proposed frameworks. The approach provided in this research addresses current and future military operations for unmanned aerial vehicles. 
However, the general framework implied by the proposed research is adaptable to any unmanned vehicle. Civil applications involving missions with limited human observability, such as exploration and fire surveillance, could also benefit from independent UAV task assignment.
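    The auction mechanism at the heart of CBAA-style assignment can be illustrated with a simplified, centralized rendition: each UAV bids its score for each open task, the highest bidder wins, and losers re-bid on the remaining tasks. The real CBAA/CBBA algorithms run this distributedly with a consensus phase on winning bids; the scores below are invented:

```python
# Simplified centralized auction for single-task assignment (a stand-in for
# the consensus-based auction idea; not the distributed CBAA itself).
score = {                         # score[(uav, task)]: higher is better
    ("u1", "t1"): 8, ("u1", "t2"): 4,
    ("u2", "t1"): 6, ("u2", "t2"): 7,
}

def auction(uavs, tasks):
    assignment = {}
    free = list(uavs)
    open_tasks = set(tasks)
    while free and open_tasks:
        # every free UAV bids on every open task; highest bid wins the round
        bids = [(score[(u, t)], u, t) for u in free for t in open_tasks]
        _, winner, task = max(bids)
        assignment[winner] = task
        free.remove(winner)
        open_tasks.remove(task)
    return assignment

print(auction(["u1", "u2"], ["t1", "t2"]))   # → {'u1': 't1', 'u2': 't2'}
```

Note how u2 loses t1 despite wanting it (bid 6 vs 8) and settles for t2; the CarA/CarAB extensions described above additionally let agents adopt tasks abandoned when a teammate is lost.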

  6. Deducing chemical structure from crystallographically determined atomic coordinates

    PubMed Central

    Bruno, Ian J.; Shields, Gregory P.; Taylor, Robin

    2011-01-01

    An improved algorithm has been developed for assigning chemical structures to incoming entries to the Cambridge Structural Database, using only the information available in the deposited CIF. Steps in the algorithm include detection of bonds, selection of polymer unit, resolution of disorder, and assignment of bond types and formal charges. The chief difficulty is posed by the large number of metallo-organic crystal structures that must be processed, given our aspiration that assigned chemical structures should accurately reflect properties such as the oxidation states of metals and redox-active ligands, metal coordination numbers and hapticities, and the aromaticity or otherwise of metal ligands. Other complications arise from disorder, especially when it is symmetry imposed or modelled with the SQUEEZE algorithm. Each assigned structure is accompanied by an estimate of reliability and, where necessary, diagnostic information indicating probable points of error. Although the algorithm was written to aid building of the Cambridge Structural Database, it has the potential to develop into a general-purpose tool for adding chemical information to newly determined crystal structures. PMID:21775812

  7. Autonomous Guidance Strategy for Spacecraft Formations and Reconfiguration Maneuvers

    NASA Astrophysics Data System (ADS)

    Wahl, Theodore P.

    A guidance strategy for autonomous spacecraft formation reconfiguration maneuvers is presented. The guidance strategy is presented as an algorithm that solves the linked assignment and delivery problems. The assignment problem is the task of assigning the member spacecraft of the formation to their new positions in the desired formation geometry. The guidance algorithm uses an auction process (also called an "auction algorithm"), presented in the dissertation, to solve the assignment problem. The auction uses the estimated maneuver and time of flight costs between the spacecraft and targets to create assignments which minimize a specific "expense" function for the formation. The delivery problem is the task of delivering the spacecraft to their assigned positions, and it is addressed through one of two guidance schemes described in this work. The first is a delivery scheme based on artificial potential function (APF) guidance. APF guidance uses the relative distances between the spacecraft, targets, and any obstacles to design maneuvers based on gradients of potential fields. The second delivery scheme is based on model predictive control (MPC); this method uses a model of the system dynamics to plan a series of maneuvers designed to minimize a unique cost function. The guidance algorithm uses an analytic linearized approximation of the relative orbital dynamics, the Yamanaka-Ankersen state transition matrix, in the auction process and in both delivery methods. The proposed guidance strategy is successful, in simulations, in autonomously assigning the members of the formation to new positions and in delivering the spacecraft to these new positions safely using both delivery methods. This guidance algorithm can serve as the basis for future autonomous guidance strategies for spacecraft formation missions.

  8. Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment

    PubMed Central

    Karimzadehgan, Maryam; Zhai, ChengXiang

    2011-01-01

    Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching. PMID:22711970
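    A toy version of the committee review assignment problem makes the constrained optimization concrete: choose a reviewer pair per paper to maximize covered topic aspects, subject to a per-reviewer quota. A real system would solve this as an integer linear program as the paper does; here the tiny instance is brute-forced, and all names and aspect sets are invented:

```python
# Toy multi-aspect committee review assignment: maximize covered aspects
# under a per-reviewer quota. Brute force stands in for the ILP solver.
from itertools import combinations, product

expertise = {"r1": {"IR", "ML"}, "r2": {"ML"}, "r3": {"IR", "NLP"}}
papers = {"p1": {"IR", "ML"}, "p2": {"ML", "NLP"}}
QUOTA = 2                          # max papers per reviewer

def coverage(choice):              # choice: one reviewer-pair per paper
    load = {r: 0 for r in expertise}
    covered = 0
    for aspects, pair in zip(papers.values(), choice):
        got = set().union(*(expertise[r] for r in pair))
        covered += len(aspects & got)      # aspects this pair collectively covers
        for r in pair:
            load[r] += 1
    if any(n > QUOTA for n in load.values()):
        return -1                  # quota violated: infeasible
    return covered

pairs = list(combinations(expertise, 2))
best = max(product(pairs, repeat=len(papers)), key=coverage)
print(best, coverage(best))
```

The ILP formulation scales to realistic committee sizes where this enumeration (pairs^papers combinations) would explode.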

  9. The airport gate assignment problem: a survey.

    PubMed

    Bouras, Abdelghani; Ghaleb, Mageed A; Suryahatmaja, Umar S; Salem, Ahmed M

    2014-01-01

    The airport gate assignment problem (AGAP) is one of the most important problems operations managers face daily. Much research has been done to solve this problem and tackle its complexity. The objective of the task is to assign each flight (aircraft) to an available gate while maximizing both convenience to passengers and the operational efficiency of the airport. This objective requires a solution that provides the ability to change and update the gate assignment data on a real-time basis. In this paper, we survey the state of the art of these problems and the various methods to obtain the solution. Our survey covers both theoretical and real AGAP, with descriptions of mathematical formulations and resolution methods such as exact algorithms, heuristic algorithms, and metaheuristic algorithms. We also provide research trends that can inspire researchers about new problems in this area.

  10. The Airport Gate Assignment Problem: A Survey

    PubMed Central

    Ghaleb, Mageed A.; Salem, Ahmed M.

    2014-01-01

    The airport gate assignment problem (AGAP) is one of the most important problems operations managers face daily. Much research has been done to solve this problem and tackle its complexity. The objective of the task is to assign each flight (aircraft) to an available gate while maximizing both convenience to passengers and the operational efficiency of the airport. This objective requires a solution that provides the ability to change and update the gate assignment data on a real-time basis. In this paper, we survey the state of the art of these problems and the various methods to obtain the solution. Our survey covers both theoretical and real AGAP, with descriptions of mathematical formulations and resolution methods such as exact algorithms, heuristic algorithms, and metaheuristic algorithms. We also provide research trends that can inspire researchers about new problems in this area. PMID:25506074

  11. Distributed resource allocation under communication constraints

    NASA Astrophysics Data System (ADS)

    Dodin, Pierre; Nimier, Vincent

    2001-03-01

    This paper deals with the multi-sensor management problem for multi-target tracking. Collaboration between several sensors observing the same target means that they can fuse their data during the information process, so this possibility must be taken into account when computing the optimal sensor-target association at each time step. To solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. One possibility for unifying them is a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is that of Grime, which relaxes the usual fully connected hypothesis. By fully connected, one means that information in a fully connected system is totally distributed everywhere at the same moment, which is unrealistic for a real large-scale system. We model the distributed assignment decision with a greedy algorithm: each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information asymmetry in the system; the assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and the coherence of the local assignment decisions of our system against maneuvering targets, we show that decentralized assignment control remains manageable even though the system is not fully connected.
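    The greedy sensor-target assignment driving the local decisions can be sketched as follows: repeatedly commit the best remaining sensor-target pair according to each sensor's (possibly locally estimated) gain. The gain values are invented for illustration:

```python
# Greedy sensor-to-target assignment sketch: take sensor-target pairs in
# decreasing gain, each sensor and each target used at most once.
gain = {("s1", "tA"): 0.9, ("s1", "tB"): 0.4,
        ("s2", "tA"): 0.7, ("s2", "tB"): 0.6}

def greedy_assign(gain):
    used_s, used_t, out = set(), set(), {}
    for (s, t), g in sorted(gain.items(), key=lambda kv: -kv[1]):
        if s not in used_s and t not in used_t:
            out[s] = t
            used_s.add(s)
            used_t.add(t)
    return out

print(greedy_assign(gain))  # → {'s1': 'tA', 's2': 'tB'}
```

In the decentralized setting described above, each sensor would run this locally on its own estimate of the gains, which may differ across sensors because information propagates with delay.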

  12. Assignment Of Finite Elements To Parallel Processors

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.

    1990-01-01

    Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
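    The simulated-annealing mapping idea can be sketched as below. The cost model (maximum subdomain load plus cut edges between subdomains, standing in for compute and communication time) and the cooling schedule are simplifying assumptions, not the paper's timing model.

    ```python
    import math, random

    # Illustrative simulated-annealing mapping of finite elements to
    # processors. Cost = max per-processor load + edges cut between
    # subdomains (a simplified stand-in for the hypercube timing model).

    def anneal(n_elems, n_procs, edges, steps=5000, t0=1.0, seed=0):
        rng = random.Random(seed)
        assign = [rng.randrange(n_procs) for _ in range(n_elems)]

        def cost(a):
            loads = [a.count(p) for p in range(n_procs)]
            comm = sum(1 for u, v in edges if a[u] != a[v])
            return max(loads) + comm

        c = cost(assign)
        for k in range(steps):
            t = t0 * (1 - k / steps) + 1e-9   # linear cooling
            e, p = rng.randrange(n_elems), rng.randrange(n_procs)
            old = assign[e]
            assign[e] = p
            c2 = cost(assign)
            # accept improvements always, worse moves with Boltzmann probability
            if c2 <= c or rng.random() < math.exp((c - c2) / t):
                c = c2
            else:
                assign[e] = old
        return assign, c

    # A 4-element chain on 2 processors; the optimum splits it 2/2 (cost 3).
    a, c = anneal(4, 2, [(0, 1), (1, 2), (2, 3)])
    print(a, c)
    ```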

  13. Y-chromosomal haplogroup distribution in the Tuzla Canton of Bosnia and Herzegovina: A concordance study using four different in silico assignment algorithms based on Y-STR data.

    PubMed

    Dogan, S; Babic, N; Gurkan, C; Goksu, A; Marjanovic, D; Hadziavdic, V

    2016-12-01

    Y-chromosomal haplogroups are sets of ancestrally related paternal lineages, traditionally assigned by the use of Y-chromosomal single nucleotide polymorphism (Y-SNP) markers. An increasingly popular and a less labor-intensive alternative approach has been Y-chromosomal haplogroup assignment based on already available Y-STR data using a variety of different algorithms. In the present study, such in silico haplogroup assignments were made based on 23-loci Y-STR data for 100 unrelated male individuals from the Tuzla Canton, Bosnia and Herzegovina (B&H) using the following four different algorithms: Whit Athey's Haplogroup Predictor, Jim Cullen's World Haplogroup & Haplogroup-I Subclade Predictor, Vadim Urasin's YPredictor and the NevGen Y-DNA Haplogroup Predictor. Prior in-house assessment of these four different algorithms using a previously published dataset (n=132) from B&H with both Y-STR (12-loci) and Y-SNP data suggested haplogroup misassignment rates between 0.76% and 3.02%. Subsequent analyses with the Tuzla Canton population sample revealed only a few differences in the individual haplogroup assignments when using different algorithms. Nevertheless, the resultant Y-chromosomal haplogroup distribution by each method was very similar, where the most prevalent haplogroups observed were I, R and E with their sublineages I2a, R1a and E1b1b, respectively, which is also in accordance with the previously published Y-SNP data for the B&H population. In conclusion, results presented herein not only constitute a concordance study on the four most popular haplogroup assignment algorithms, but they also give a deeper insight into the inter-population differentiation in B&H on the basis of Y haplogroups for the first time. Copyright © 2016 Elsevier GmbH. All rights reserved.

  14. Systematic assignment of thermodynamic constraints in metabolic network models

    PubMed Central

    Kümmel, Anne; Panke, Sven; Heinemann, Matthias

    2006-01-01

    Background The availability of genome sequences for many organisms enabled the reconstruction of several genome-scale metabolic network models. Currently, significant efforts are put into the automated reconstruction of such models. For this, several computational tools have been developed that particularly assist in identifying and compiling the organism-specific lists of metabolic reactions. In contrast, the last step of the model reconstruction process, which is the definition of the thermodynamic constraints in terms of reaction directionalities, still needs to be done manually. No computational method exists that allows for an automated and systematic assignment of reaction directions in genome-scale models. Results We present an algorithm that – based on thermodynamics, network topology and heuristic rules – automatically assigns reaction directions in metabolic models such that the reaction network is thermodynamically feasible with respect to the production of energy equivalents. It first exploits all available experimentally derived Gibbs energies of formation to identify irreversible reactions. As these thermodynamic data are not available for all metabolites, in a next step, further reaction directions are assigned on the basis of network topology considerations and thermodynamics-based heuristic rules. Briefly, the algorithm identifies reaction subsets from the metabolic network that are able to convert low-energy co-substrates into their high-energy counterparts and thus net produce energy. Our algorithm aims at disabling such thermodynamically infeasible cyclic operation of reaction subnetworks by assigning reaction directions based on a set of thermodynamics-derived heuristic rules. We demonstrate our algorithm on a genome-scale metabolic model of E. coli. 
The introduced systematic direction assignment yielded 130 irreversible reactions (out of 920 total reactions), which corresponds to about 70% of all irreversible reactions that are required to disable thermodynamically infeasible energy production. Conclusion Although not being fully comprehensive, our algorithm for systematic reaction direction assignment could define a significant number of irreversible reactions automatically with low computational effort. We envision that the presented algorithm is a valuable part of a computational framework that assists the automated reconstruction of genome-scale metabolic models. PMID:17123434
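    The first step of the algorithm (flagging irreversible reactions from Gibbs energies of formation) can be illustrated as below. The formation energies and the ±30 kJ/mol threshold are hypothetical placeholder values, not the paper's data.

    ```python
    # Minimal sketch of the first step described above: using Gibbs
    # energies of formation to flag irreversible reactions. Metabolite
    # values and the -30 kJ/mol threshold are illustrative assumptions.

    DG_F = {"glc": -400.0, "atp": -2100.0, "g6p": -1760.0, "adp": -800.0}  # kJ/mol, hypothetical

    def reaction_dg(stoich, dg_f):
        """dG of reaction = sum of stoichiometric coefficient x dG_f(metabolite)."""
        return sum(coef * dg_f[m] for m, coef in stoich.items())

    def direction(stoich, dg_f, threshold=-30.0):
        dg = reaction_dg(stoich, dg_f)
        if dg < threshold:
            return "irreversible_forward"
        if dg > -threshold:
            return "irreversible_backward"
        return "reversible"  # too close to zero to call from thermodynamics alone

    # Hexokinase-like reaction: glc + atp -> g6p + adp
    hexokinase = {"glc": -1, "atp": -1, "g6p": 1, "adp": 1}
    print(direction(hexokinase, DG_F))  # irreversible_forward
    ```

    Reactions left "reversible" by this step are the ones the paper then constrains via network topology and heuristic rules.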

  15. A vision-based automated guided vehicle system with marker recognition for indoor use.

    PubMed

    Lee, Jeisung; Hyun, Chang-Ho; Park, Mignon

    2013-08-07

    We propose an intelligent vision-based Automated Guided Vehicle (AGV) system using fiduciary markers. In this paper, we explore a low-cost, efficient vehicle guidance method using a consumer-grade web camera and fiduciary markers. In the proposed method, the system uses fiduciary markers containing a capital letter or a triangle indicating direction. The markers are very easy to produce, manipulate, and maintain, and the marker information is used to guide the vehicle. We use hue and saturation values in the image to extract marker candidates. When a fiduciary marker of known size is detected using a bird's-eye view and the Hough transform, the positional relation between the marker and the vehicle can be calculated. To recognize the character in the marker, a distance transform is used: the probability of each feature match is calculated from the distance transform, and the feature with the highest probability is selected as the captured marker. Four directional signals and 10 alphabet features are defined and used as markers. A 98.87% recognition rate was achieved in the testing phase. The experimental results with the fiduciary markers show that the proposed method is a practical solution for an indoor AGV system.
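    The distance-transform matching step can be sketched as follows. The tiny 2-D "edge maps" and the use of mean edge distance as an (inverse) match score are assumptions for illustration; the paper's actual feature set and probability model are not reproduced.

    ```python
    # Illustrative character matching via a distance transform, as in the
    # marker-recognition step above. Edge maps and scoring are toy assumptions.

    def distance_transform(edges, h, w):
        """Brute-force distance transform: for each cell, the Euclidean
        distance to the nearest edge pixel."""
        return {(y, x): min(((y - ey) ** 2 + (x - ex) ** 2) ** 0.5
                            for ey, ex in edges)
                for y in range(h) for x in range(w)}

    def match_score(template_edges, dt):
        """Lower mean distance of template edge points = better match."""
        return sum(dt[p] for p in template_edges) / len(template_edges)

    observed = {(0, 0), (1, 1), (2, 2)}          # a diagonal stroke
    dt = distance_transform(observed, 3, 3)

    features = {
        "diagonal": {(0, 0), (1, 1), (2, 2)},
        "vertical": {(0, 0), (1, 0), (2, 0)},
    }
    best = min(features, key=lambda f: match_score(features[f], dt))
    print(best)  # diagonal
    ```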

  16. Ecology and geography of transmission of two bat-borne rabies lineages in Chile.

    PubMed

    Escobar, Luis E; Peterson, A Townsend; Favi, Myriam; Yung, Verónica; Pons, Daniel J; Medina-Vogel, Gonzalo

    2013-01-01

    Rabies was known to humans as a disease thousands of years ago. In America, insectivorous bats are natural reservoirs of rabies virus. The bat species Tadarida brasiliensis and Lasiurus cinereus, with their respective, host-specific rabies virus variants AgV4 and AgV6, are the principal rabies reservoirs in Chile. However, little is known about the roles of bat species in the ecology and geographic distribution of the virus. This contribution aims to address a series of questions regarding the ecology of rabies transmission in Chile. Analyzing records from 1985-2011 at the Instituto de Salud Pública de Chile (ISP) and using ecological niche modeling, we address these questions to help in understanding rabies-bat ecological dynamics in South America. We found ecological niche identity between both hosts and both viral variants, indicating that niches of all actors in the system are undifferentiated, although the viruses do not necessarily occupy the full geographic distributions of their hosts. Bat species and rabies viruses share similar niches, and our models had significant predictive power even across unsampled regions; results thus suggest that outbreaks may occur under consistent, stable, and predictable circumstances.

  17. Ecology and Geography of Transmission of Two Bat-Borne Rabies Lineages in Chile

    PubMed Central

    Escobar, Luis E.; Peterson, A. Townsend; Favi, Myriam; Yung, Verónica; Pons, Daniel J.; Medina-Vogel, Gonzalo

    2013-01-01

    Rabies was known to humans as a disease thousands of years ago. In America, insectivorous bats are natural reservoirs of rabies virus. The bat species Tadarida brasiliensis and Lasiurus cinereus, with their respective, host-specific rabies virus variants AgV4 and AgV6, are the principal rabies reservoirs in Chile. However, little is known about the roles of bat species in the ecology and geographic distribution of the virus. This contribution aims to address a series of questions regarding the ecology of rabies transmission in Chile. Analyzing records from 1985–2011 at the Instituto de Salud Pública de Chile (ISP) and using ecological niche modeling, we address these questions to help in understanding rabies-bat ecological dynamics in South America. We found ecological niche identity between both hosts and both viral variants, indicating that niches of all actors in the system are undifferentiated, although the viruses do not necessarily occupy the full geographic distributions of their hosts. Bat species and rabies viruses share similar niches, and our models had significant predictive power even across unsampled regions; results thus suggest that outbreaks may occur under consistent, stable, and predictable circumstances. PMID:24349592

  18. One of My Favorite Assignments: Automated Teller Machine Simulation.

    ERIC Educational Resources Information Center

    Oberman, Paul S.

    2001-01-01

    Describes an assignment for an introductory computer science class that requires the student to write a software program that simulates an automated teller machine. Highlights include an algorithm for the assignment; sample file contents; language features used; assignment variations; and discussion points. (LRW)

  19. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; here it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput is found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
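    For the simplest case of a linear (chain) task graph, the dynamic-programming idea behind such an O(np²) algorithm can be sketched as below: minimize total response time for n tasks sharing p processors, given each task's measured response time as a function of processor count. The response-time table is hypothetical.

    ```python
    # DP sketch for processor assignment on a chain of tasks. resp[i][k]
    # is the (measured) response time of task i on k processors; the
    # table dp[i][used] is the best total response time for the first i
    # tasks using exactly `used` processors. Roughly O(n p^2) work.

    def assign_chain(resp, p):
        """Returns (minimal total response time, processors per task)."""
        n = len(resp)
        INF = float("inf")
        dp = [[INF] * (p + 1) for _ in range(n + 1)]
        choice = [[0] * (p + 1) for _ in range(n + 1)]
        dp[0][0] = 0.0
        for i in range(1, n + 1):
            for used in range(i, p + 1):
                for k in range(1, used - i + 2):  # leave >=1 proc per remaining task
                    cand = dp[i - 1][used - k] + resp[i - 1][k]
                    if cand < dp[i][used]:
                        dp[i][used], choice[i][used] = cand, k
        best_used = min(range(n, p + 1), key=lambda u: dp[n][u])
        alloc, u = [], best_used
        for i in range(n, 0, -1):    # backtrack the per-task allocation
            alloc.append(choice[i][u])
            u -= choice[i][u]
        return dp[n][best_used], alloc[::-1]

    # Two tasks, 3 processors; task 0 scales well, task 1 does not.
    resp = [{1: 6.0, 2: 3.0, 3: 2.0},
            {1: 4.0, 2: 3.9, 3: 3.8}]
    print(assign_chain(resp, 3))  # (7.0, [2, 1])
    ```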

  20. An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems

    PubMed Central

    Dawson, Kevin J.; Belkhir, Khalid

    2009-01-01

    Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals: the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree, or more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
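    The agglomeration driven by co-assignment probabilities can be sketched as below. This is a simplified illustration: clusters are merged on the largest minimum pairwise co-assignment probability, which may differ in detail from the exact linkage criterion implemented in Partition View.

    ```python
    # Sketch of agglomerative clustering on posterior co-assignment
    # probabilities. coassign[i][j] = posterior probability that
    # individuals i and j are in the same cluster. Merge criterion
    # (largest minimum pairwise probability) is an assumption here.

    def agglomerate(coassign, n):
        """Returns the merge history as [(cluster_a, cluster_b, height)]."""
        clusters = [frozenset([i]) for i in range(n)]
        history = []
        while len(clusters) > 1:
            best = None
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    h = min(coassign[i][j]
                            for i in clusters[a] for j in clusters[b])
                    if best is None or h > best[0]:
                        best = (h, a, b)
            h, a, b = best
            history.append((clusters[a], clusters[b], h))  # node height = h
            merged = clusters[a] | clusters[b]
            clusters = [c for k, c in enumerate(clusters) if k not in (a, b)]
            clusters.append(merged)
        return history

    p = [[1.0, 0.9, 0.1],
         [0.9, 1.0, 0.2],
         [0.1, 0.2, 1.0]]
    for a, b, h in agglomerate(p, 3):
        print(sorted(a), sorted(b), h)
    ```

    Low node heights (like the 0.1 merge above) directly visualise uncertain assignments, as described in the abstract.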

  1. Application of a Dynamic Programming Algorithm for Weapon Target Assignment

    DTIC Science & Technology

    2016-02-01

    [25] A. Turan, “Techniques for the Allocation of Resources Under Uncertainty,” Middle Eastern Technical University, Ankara, Turkey, 2012. [26] K...UNCLASSIFIED Application of a Dynamic Programming Algorithm for Weapon Target Assignment Lloyd Hammond Weapons and...optimisation techniques to support the decision-making process. This report documents the methodology used to identify, develop and assess a

  2. The Limits of Soviet Airpower: The Bear Versus the Mujahideen in Afghanistan, 1979-1989

    DTIC Science & Technology

    1997-06-01

    satellite imagery identified Soviet TMS-65 decontamination vehicles and AGV-3 detox chambers in the vicinity of combat areas. In addition, the...Vladislav Tamarov, Afghanistan: Soviet Vietnam, trans. Naomi Marcus, Marianne Clarke Trangen, and Vladislav Tamarov (San Francisco: Mercury House...Tamarov. San Francisco: Mercury House, 1992. Turbiville, Graham. Ambush! The Road War in Afghanistan. Fort Leavenworth, KS: Soviet Army Studies Office

  3. Intraocular pressure control after the implantation of a second Ahmed glaucoma valve.

    PubMed

    Jiménez-Román, Jesús; Gil-Carrasco, Félix; Costa, Vital Paulino; Schimiti, Rui Barroso; Lerner, Fabián; Santana, Priscila Rezende; Vascocellos, Jose Paulo Cabral; Castillejos-Chévez, Armando; Turati, Mauricio; Fabre-Miranda, Karina

    2016-06-01

    The objective of this study is to evaluate the efficacy and safety of a second Ahmed glaucoma valve (AGV) in eyes with refractory glaucoma that had undergone prior Ahmed device implantation. This multicenter, retrospective study evaluated 58 eyes (58 patients) that underwent a second AGV implantation (model S2, n = 50; model FP7, n = 8) due to uncontrolled IOP under maximal medical therapy. Outcome measures included IOP, visual acuity, number of glaucoma medications, and postoperative complications. Success was defined as IOP <21 mmHg (criterion 1) or a 30 % reduction of IOP (criterion 2), with or without hypotensive medications. Persistent hypotony (IOP <5 mmHg after 3 months of follow-up), loss of light perception, and reintervention for IOP control were defined as failure. Mean preoperative IOP and mean IOPs at 12 and 30 months were 27.55 ± 1.16 mmHg (n = 58), 14.45 ± 0.83 mmHg (n = 42), and 14.81 ± 0.87 mmHg (n = 16), respectively. The mean numbers of glaucoma medications preoperatively and at 12 and 30 months were 3.17 ± 0.16 (n = 58), 1.81 ± 0.2 (n = 42), and 1.83 ± 0.35 (n = 18), respectively. The reductions in mean IOP and number of medications were statistically significant at all time intervals (P < 0.001). According to criterion 1, Kaplan-Meier survival curves disclosed success rates of 62.9 % at 12 months and 56.6 % at 30 months. According to criterion 2, Kaplan-Meier survival curves disclosed success rates of 43.9 % at 12 months and 32.9 % at 30 months. The most frequent early complication was hypertensive phase (10.3 %) and the most frequent late complication was corneal edema (17.2 %). Second AGV implantation may effectively reduce IOP in eyes with uncontrolled glaucoma, and is associated with relatively few complications.

  4. Supra-Tenon Capsule Implantation of the Ahmed Glaucoma Valve in Refractory Pediatric Glaucoma.

    PubMed

    Elhefney, Eman M; Al-Sharkawy, Hossam T; Kishk, Hanem M

    2016-09-01

    To evaluate the efficacy of supra-Tenon capsule implantation of an Ahmed glaucoma valve (AGV) as a measure to decrease the fibrotic potential of the Tenon capsule on bleb formation and its subsequent effect on intraocular pressure (IOP) control in children with refractory glaucoma. Mansoura Ophthalmic Centre, Faculty of Medicine, Mansoura University, Egypt. A prospective interventional study. Twenty-two eyes of 12 children with refractory glaucoma underwent supra-Tenon capsule implantation of AGV. Ophthalmic examinations under general anesthesia including measurement of the corneal diameter and the IOP with Perkin's tonometer were performed preoperatively, on the first postoperative day, the first postoperative week, weekly for the first month, 2-weekly for the following 3 months, and monthly for at least 18 months. Postoperative complications and the number of glaucoma medications used preoperatively and postoperatively were recorded. The paired Student t test was used to compare preoperative and postoperative data. There were 12 eyes (54.6%) with refractory congenital glaucoma, 7 eyes (31.8%) with refractory pseudophakic glaucoma, and 3 eyes (13.6%) with refractory aphakic glaucoma. Patients included 10 male (83.3%) and 2 female (16.7%) children with a mean age of 16.3±9.7 months. The mean follow-up duration was 24.1±4.3 months. There was a statistically significant difference between the mean preoperative IOP (30.7±2.88 mm Hg) and the mean postoperative IOP (16.1±3.60 mm Hg) (t=16.22 and P=0.000, with a mean decrease in the IOP by 47.6%). The difference between the mean number of antiglaucoma medications before surgery (1.86±0.4) and after surgery (1.0±0.9) was also statistically significant (t=4.31 and P=0.000). Total success was achieved in 18 eyes (81.9%). Postoperative complications included tube exposure and slippage (10%), hypotony (10%), and hyphema (5%). 
Supra-Tenon capsule implantation of the AGV was successful in controlling the IOP with few postoperative complications in the management of children with refractory glaucoma.

  5. Medicaid beneficiaries in california reported less positive experiences when assigned to a managed care plan.

    PubMed

    McDonnell, Diana D; Graham, Carrie L

    2015-03-01

    In 2011 California began transitioning approximately 340,000 seniors and people with disabilities from Medicaid fee-for-service (FFS) to Medicaid managed care plans. When beneficiaries did not actively choose a managed care plan, the state assigned them to one using an algorithm based on their previous FFS primary and specialty care use. When no clear link could be established, beneficiaries were assigned by default to a managed care plan based on weighted randomization. In this article we report the results of a telephone survey of 1,521 seniors and people with disabilities enrolled in Medi-Cal (California Medicaid) and who were recently transitioned to a managed care plan. We found that 48 percent chose their own plan, 11 percent were assigned to a plan by algorithm, and 41 percent were assigned to a plan by default. People in the latter two categories reported being similarly less positive about their experiences compared to beneficiaries who actively chose a plan. Many states in addition to California are implementing mandatory transitions of Medicaid-only beneficiaries to managed care plans. Our results highlight the importance of encouraging beneficiaries to actively choose their health plan; when beneficiaries do not choose, states should employ robust intelligent assignment algorithms. Project HOPE—The People-to-People Health Foundation, Inc.

  6. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). To produce a new candidate solution for SA, we first perturb the assignment of IP cores to TAMs, then allocate the width of each TAM using a greedy algorithm and calculate the corresponding testing time. The new core assignment is accepted or rejected according to the simulated annealing criterion, and the optimum solution is finally attained. We ran the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC’02); the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA), and the genetic algorithm (GA). When the TAM width reaches 48, 56, and 64, the testing time of our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32%, and 16.13%, respectively. Moreover, the testing time of our algorithm is very close to that of the improved genetic algorithm (IGA), which is currently state of the art.
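    The inner evaluation step (greedy TAM width allocation plus testing-time calculation) can be sketched as below. The per-core test "volumes" and the 1/width time scaling are simplifying assumptions, not the ITC'02 cost model.

    ```python
    # Sketch of greedy TAM width allocation for a fixed core-to-TAM
    # assignment: give each TAM one wire, then repeatedly widen the TAM
    # that currently bounds the overall testing time.

    def greedy_widths(assignment, total_width):
        """assignment: {tam: [(core, test_volume), ...]}.
        Returns (widths per TAM, resulting testing time)."""
        volumes = {t: sum(v for _, v in cores) for t, cores in assignment.items()}
        widths = {t: 1 for t in assignment}
        for _ in range(total_width - len(assignment)):
            bottleneck = max(widths, key=lambda t: volumes[t] / widths[t])
            widths[bottleneck] += 1
        test_time = max(volumes[t] / widths[t] for t in widths)  # TAMs test in parallel
        return widths, test_time

    # Two TAMs, width budget 6; TAM "A" carries more test data.
    assignment = {"A": [("core1", 80), ("core2", 40)], "B": [("core3", 30)]}
    print(greedy_widths(assignment, 6))
    ```

    An SA outer loop would perturb the core-to-TAM assignment, re-run this evaluation, and accept or reject the move by the annealing criterion.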

  7. Flexible Multi agent Algorithm for Distributed Decision Making

    DTIC Science & Technology

    2015-01-01

    How, J. P. Consensus-Based Auction Approaches for Decentralized Task Assignment. Proceedings of the AIAA Guidance, Navigation, and Control...G.; Kim, Y. Market-based Decentralized Task Assignment for Cooperative UAV Mission Including Rendezvous. Proceedings of the AIAA Guidance...scalable and adaptable to a variety of specific mission tasks. Additionally, the algorithm could easily be adapted for use on land- or sea-based systems

  8. A circular median filter approach for resolving directional ambiguities in wind fields retrieved from spaceborne scatterometer data

    NASA Technical Reports Server (NTRS)

    Schultz, Howard

    1990-01-01

    The retrieval algorithm for spaceborne scatterometry proposed by Schultz (1985) is extended. A circular median filter (CMF) method is presented, which operates on wind directions independently of wind speed, removing any implicit wind speed dependence. A cell weighting scheme is included in the algorithm, permitting greater weights to be assigned to more reliable data. The mathematical properties of the ambiguous solutions to the wind retrieval problem are reviewed. The CMF algorithm is tested on twelve simulated data sets. The effects of spatially correlated likelihood assignment errors on the performance of the CMF algorithm are examined. Also, consideration is given to a wind field smoothing technique that uses a CMF.
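    The circular median at the heart of the CMF can be sketched as below: among the wind directions in a filter window, pick the one that minimizes the summed circular (wrap-around) distance to the others. The cell weighting mentioned above is omitted, and the sample angles are illustrative.

    ```python
    # Sketch of a circular median over wind directions (degrees).
    # Operates on directions only, independent of wind speed, as in the
    # CMF approach described above. Reliability weighting is omitted.

    def circ_dist(a, b):
        """Smallest angular separation of a and b, in degrees."""
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def circular_median(angles):
        """The member of `angles` minimizing total circular distance."""
        return min(angles, key=lambda a: sum(circ_dist(a, b) for b in angles))

    window = [350.0, 10.0, 5.0, 355.0, 180.0]  # one ambiguous/flipped direction
    print(circular_median(window))  # 5.0
    ```

    Note how the 180-degree-flipped outlier is rejected even though the remaining directions straddle 0°, which is exactly where an ordinary (linear) median fails.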

  9. Algorithmic Coordination in Robotic Networks

    DTIC Science & Technology

    2010-11-29

    appropriate performance, robustness and scalability properties for various task allocation, surveillance, and information gathering applications is...networking, we envision designing and analyzing algorithms with appropriate performance, robustness and scalability properties for various task...distributed algorithms for target assignments; based on the classic auction algorithms in static networks, we intend to design efficient algorithms in worst

  10. Adaptive Control Strategies for Flexible Robotic Arm

    NASA Technical Reports Server (NTRS)

    Bialasiewicz, Jan T.

    1996-01-01

    The control problem of a flexible robotic arm has been investigated. The control strategies that have been developed have wide application to the general control problem of flexible space structures. The following control strategies have been developed and evaluated: a neural self-tuning control algorithm, a neural-network-based fuzzy logic control algorithm, and an adaptive pole assignment algorithm. All of the above algorithms have been tested through computer simulation. In addition, a hardware implementation has been developed of a computer control system that controls the tip position of a flexible arm, clamped on a rigid hub mounted directly on the vertical shaft of a DC motor. An adaptive pole assignment algorithm has been applied to suppress vibrations of the described physical model of the flexible robotic arm and has been successfully tested on this testbed.

  11. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, even for large genomes. Significantly better compression results show that "DNABIT Compress" is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base.
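    The baseline idea of packing DNA bases into 2 bits each can be illustrated as below. This is a toy version only: the full DNABIT Compress scheme additionally assigns special bit codes to exact and reverse repeats, which is not reproduced here.

    ```python
    # Toy 2-bit packing of DNA bases (4 bases per byte). The repeat
    # coding that lets DNABIT Compress go below 2 bits/base is omitted.

    ENC = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    DEC = {v: k for k, v in ENC.items()}

    def pack(seq):
        """Pack a DNA string into bytes, 2 bits per base."""
        out = bytearray()
        for i in range(0, len(seq), 4):
            chunk = seq[i:i + 4]
            byte = 0
            for base in chunk:
                byte = (byte << 2) | ENC[base]
            byte <<= 2 * (4 - len(chunk))  # left-pad the final partial byte
            out.append(byte)
        return bytes(out)

    def unpack(data, n):
        """Recover the first n bases from packed bytes."""
        bases = []
        for byte in data:
            for shift in (6, 4, 2, 0):
                bases.append(DEC[(byte >> shift) & 0b11])
        return "".join(bases[:n])

    seq = "ACGTAC"
    packed = pack(seq)
    assert unpack(packed, len(seq)) == seq
    print(len(seq), "bases ->", len(packed), "bytes")
    ```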

  12. Learning graph matching.

    PubMed

    Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J

    2009-06-01

    As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
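    The "simple linear assignment" baseline mentioned above can be sketched as below: match nodes of two small graphs by maximizing summed node compatibilities. Brute force is used here for clarity (the Hungarian algorithm would be used at scale), and the compatibility matrix is illustrative; learning it from labeled matches is the paper's contribution and is not reproduced.

    ```python
    from itertools import permutations

    # Linear assignment by brute force over permutations: find the node
    # correspondence maximizing total node compatibility. Fine for tiny
    # graphs; real solvers use the Hungarian algorithm in O(n^3).

    def linear_assignment(compat):
        """compat[i][j] = compatibility of node i (graph 1) with node j
        (graph 2). Returns perm where node i is matched to perm[i]."""
        n = len(compat)
        best = max(permutations(range(n)),
                   key=lambda perm: sum(compat[i][perm[i]] for i in range(n)))
        return list(best)

    compat = [[0.9, 0.1, 0.0],
              [0.2, 0.8, 0.1],
              [0.0, 0.3, 0.7]]
    print(linear_assignment(compat))  # [0, 1, 2]
    ```

    A quadratic assignment objective would add an edge-compatibility term over pairs of matched nodes, which is what makes the general problem NP-hard.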

  13. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is to optimally assign n jobs to m individuals (m < n), such that the minimum cost (or maximum profit) is obtained. It is a vitally important NP-complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing the different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range in O(mn) time. We thereby extend the application of DNA molecular operations, exploiting their parallelism to reduce the complexity of the computation.
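    For intuition about the problem itself (not the DNA encoding), a conventional brute-force reference solution is sketched below: every job gets one individual, every individual gets at least one job, and the total cost is minimized. The cost matrix is illustrative; this exhaustive search is only feasible for tiny instances.

    ```python
    from itertools import product

    # Brute-force reference for the unbalanced assignment problem:
    # m individuals, n jobs (m < n); individuals may take several jobs
    # but each must take at least one. Exponential, for illustration only.

    def solve_uap(cost):
        """cost[i][j] = cost of individual i doing job j.
        Returns (minimum total cost, tuple mapping job j -> individual)."""
        m, n = len(cost), len(cost[0])
        best = None
        for assign in product(range(m), repeat=n):   # job j -> individual assign[j]
            if len(set(assign)) < m:                 # every individual gets >= 1 job
                continue
            c = sum(cost[assign[j]][j] for j in range(n))
            if best is None or c < best[0]:
                best = (c, assign)
        return best

    cost = [[4, 2, 8],
            [4, 3, 7]]   # 2 individuals, 3 jobs
    print(solve_uap(cost))
    ```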

  14. Routing and spectrum assignment based on ant colony optimization of minimum consecutiveness loss in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Fu; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Tian, Qinghua; Zhang, Qi; Rao, Lan; Tian, Feng; Luo, Biao; Liu, Yingjun; Tang, Bao

    2016-10-01

    Elastic optical networks are considered a promising technology for future high-speed networks. In this paper, we propose a routing and spectrum assignment (RSA) algorithm based on ant colony optimization of minimum consecutiveness loss (ACO-MCL). Based on the effect of the spectrum consecutiveness loss on the pheromone in the ant colony optimization, the path and spectrum with the least impact on the network are selected for each service request. When an ant arrives at the destination node from the source node along a path, we assume that this path is selected for the request. We calculate the consecutiveness loss of candidate-neighbor link pairs along this path after the routing and spectrum assignment. The network then updates the pheromone according to the value of the consecutiveness loss, and we save the path with the smallest value. After multiple iterations of the ant colony optimization, the finally selected path is assigned to the request. The algorithms are simulated in different networks. The results show that the ACO-MCL algorithm outperforms other algorithms in blocking probability and spectrum efficiency. Moreover, the ACO-MCL algorithm can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness. Compared with other algorithms, the ACO-MCL algorithm reduces the blocking rate by at least 5.9% under heavy load.
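    To make the RSA setting concrete, a minimal first-fit spectrum assignment over a path is sketched below. The ACO-MCL pheromone update and consecutiveness-loss metric from the paper are not reproduced; the link bitmaps are toy data, and the spectrum-contiguity constraint shown is the one that makes fragmentation matter.

    ```python
    # First-fit spectrum assignment over a path in an elastic optical
    # network: a request needs `demand` contiguous slots free on EVERY
    # link of its path (contiguity + continuity constraints).

    def first_fit(path_links, demand):
        """path_links: list of per-link slot bitmaps (True = occupied).
        Returns the first start index with `demand` contiguous slots
        free on all links, or None if the request would be blocked."""
        n_slots = len(path_links[0])
        free = [all(not link[s] for link in path_links) for s in range(n_slots)]
        for start in range(n_slots - demand + 1):
            if all(free[start:start + demand]):
                return start
        return None

    link1 = [True, False, False, True, False, False, False, False]
    link2 = [False, False, False, False, True, False, False, False]
    print(first_fit([link1, link2], 3))  # 5
    ```

    Note that a 5-slot request would be blocked here even though 5 slots are free in total; fragments like this are exactly what the consecutiveness-loss criterion tries to avoid creating.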

  15. Procedural Tests for Anti-G Protective Devices. Volume II. G-Sensitivity Tests

    DTIC Science & Technology

    1979-12-01

    of these valves was used in only one type of aircraft--the ALAR AGV in ...pattern. 3) Total included, inexplicitly in the total for this column along with Failures and OTH/MAL's, are Type 6 HOW MALFUNCTION CODES--which...maintenance. Because Type 6 HOW MALFUNCTION CODES were not considered pertinent to this investigation, they were not included in the report. All figures of

  16. Endophthalmitis associated with the Ahmed glaucoma valve implant

    PubMed Central

    Al-Torbak, A A; Al-Shahwan, S; Al-Jadaan, I; Al-Hommadi, A; Edward, D P

    2005-01-01

    Aim: To investigate the rate, risk factors, clinical course, and treatment outcomes of endophthalmitis following glaucoma drainage implant (GDI) surgery. Methods: A computerised relational database search was conducted to identify all patients who were implanted with Ahmed glaucoma valve (AGV) and developed endophthalmitis following surgery at the King Khaled Eye Specialist Hospital in Riyadh, Saudi Arabia, between 1 January 1994 and 30 November 2003. Only medical records of the patients who developed endophthalmitis were retrospectively reviewed. Results: 542 eyes of 505 patients who were on active follow up were included in the study. Endophthalmitis developed in nine (1.7%) eyes; the rate was five times higher in children than in adults. Delayed endophthalmitis (developed 6 weeks after surgery) occurred in eight of nine eyes. Conjunctival erosion overlying the AGV tube was present in six of nine eyes. Common organisms isolated in the vitreous included Haemophilus influenzae and Streptococcus species. Multiple regression analysis revealed that younger age and conjunctival erosion over the tube were significant risk factors associated with endophthalmitis. Conclusion: Endophthalmitis is a rare complication of GDI surgery that appears to be more common in children. Conjunctival dehiscence over the GDI tube seems to represent a major risk factor for endophthalmitis. Prompt surgical revision of an exposed GDI tube is highly recommended. PMID:15774923

  17. Endophthalmitis associated with the Ahmed glaucoma valve implant.

    PubMed

    Al-Torbak, A A; Al-Shahwan, S; Al-Jadaan, I; Al-Hommadi, A; Edward, D P

    2005-04-01

    To investigate the rate, risk factors, clinical course, and treatment outcomes of endophthalmitis following glaucoma drainage implant (GDI) surgery. A computerised relational database search was conducted to identify all patients who were implanted with Ahmed glaucoma valve (AGV) and developed endophthalmitis following surgery at the King Khaled Eye Specialist Hospital in Riyadh, Saudi Arabia, between 1 January 1994 and 30 November 2003. Only medical records of the patients who developed endophthalmitis were retrospectively reviewed. 542 eyes of 505 patients who were on active follow up were included in the study. Endophthalmitis developed in nine (1.7%) eyes; the rate was five times higher in children than in adults. Delayed endophthalmitis (developed 6 weeks after surgery) occurred in eight of nine eyes. Conjunctival erosion overlying the AGV tube was present in six of nine eyes. Common organisms isolated in the vitreous included Haemophilus influenzae and Streptococcus species. Multiple regression analysis revealed that younger age and conjunctival erosion over the tube were significant risk factors associated with endophthalmitis. Endophthalmitis is a rare complication of GDI surgery that appears to be more common in children. Conjunctival dehiscence over the GDI tube seems to represent a major risk factor for endophthalmitis. Prompt surgical revision of an exposed GDI tube is highly recommended.

  18. Interactive visual exploration and refinement of cluster assignments.

    PubMed

    Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R

    2017-09-12

    With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms don't properly account for ambiguity in the source data, as records are often assigned to discrete clusters, even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.
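
    A minimal way to score the kind of assignment ambiguity discussed above, for a fuzzy clustering, is the margin between a record's two largest cluster memberships. The sketch below is illustrative only; it is not the metric used by Caleydo StratomeX, and the membership vectors are invented.

```python
def assignment_ambiguity(memberships):
    """Ambiguity of a fuzzy cluster assignment, in [0, 1].

    0 means an unambiguous assignment (one membership dominates);
    1 means the two best clusters are tied.
    """
    top_two = sorted(memberships, reverse=True)[:2]
    return 1.0 - (top_two[0] - top_two[1])

# A record firmly inside one cluster vs. a record on a cluster boundary.
clear = assignment_ambiguity([0.90, 0.05, 0.05])   # low ambiguity
fuzzy = assignment_ambiguity([0.45, 0.40, 0.15])   # high ambiguity
```

    Records with high scores are exactly the ones an analyst would want highlighted for manual curation.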

  19. Voltage scheduling for low power/energy

    NASA Astrophysics Data System (ADS)

    Manzak, Ali

    2001-07-01

    Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage since voltage is quadratically related to power. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of the variable voltage processor is dynamically changed with the work load. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for the processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to that of the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the size of the task (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput. 
Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing the power consumption). A polynomial time resource and latency constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimum. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
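
    The system-level premise above, that energy falls quadratically with supply voltage, can be illustrated with the standard switched-capacitance model of dynamic energy. The capacitance and cycle-count figures below are hypothetical, chosen only to show the scaling; this is a sketch of the model, not of the dissertation's scheduling algorithms.

```python
def task_energy(c_eff, v_dd, cycles):
    """Dynamic energy of one task under the switched-capacitance
    model E = C_eff * Vdd^2 per cycle (leakage ignored)."""
    return c_eff * v_dd ** 2 * cycles

# Halving the supply voltage cuts dynamic energy to one quarter,
# which is why voltage scheduling trades execution speed for energy.
e_full = task_energy(c_eff=1e-9, v_dd=3.3, cycles=1_000_000)
e_half = task_energy(c_eff=1e-9, v_dd=1.65, cycles=1_000_000)
```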

  20. Comparison of four machine learning algorithms for their applicability in satellite-based optical rainfall retrievals

    NASA Astrophysics Data System (ADS)

    Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas

    2016-03-01

    Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.
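
    The verification statistics quoted above (frequency bias and probability of detection) come from a standard 2x2 contingency table of rain/no-rain pixels. A sketch with hypothetical pixel counts, chosen only so the resulting scores fall in the range the study reports; these are not the paper's data.

```python
def verification_scores(hits, misses, false_alarms):
    """Categorical verification of a retrieved rain-area mask against
    observations: probability of detection (POD) and frequency bias.
    A bias above 1 means the retrieval over-delineates the rain area,
    as reported for all four ML algorithms in the study."""
    pod = hits / (hits + misses)
    bias = (hits + false_alarms) / (hits + misses)
    return pod, bias

# Hypothetical pixel counts (not from the paper).
pod, bias = verification_scores(hits=850, misses=150, false_alarms=950)
```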

  1. PLA realizations for VLSI state machines

    NASA Technical Reports Server (NTRS)

    Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.

    1990-01-01

    A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.

  2. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing

    PubMed Central

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-01-01

    The unbalanced assignment problem (UAP) is the problem of optimally assigning n jobs to m individuals (m < n) such that minimum cost or maximum profit is obtained. It is a vitally important NP-complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing the different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range in O(mn) time. We exploit the simultaneity of DNA molecular operations to reduce the complexity of the computation. PMID:26512650
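
    For intuition about the problem itself (the paper's contribution is the parallel DNA-computing formulation, which is not sketched here), the UAP can be reduced to a balanced assignment by replicating each individual's cost row, since an individual may take up to ceil(n/m) jobs. A conventional brute-force sketch with invented costs:

```python
from itertools import permutations
from math import ceil, inf

def solve_uap(cost):
    """Unbalanced assignment: n jobs, m < n individuals, each job done
    by exactly one individual. Replicating every individual's cost row
    ceil(n/m) times reduces the problem to a rectangular balanced
    assignment, solved here by brute force for illustration only.
    cost[i][j] = cost of individual i doing job j."""
    m, n = len(cost), len(cost[0])
    k = ceil(n / m)                       # max jobs per individual
    rows = [cost[i] for i in range(m) for _ in range(k)]
    best, best_assign = inf, None
    for choice in permutations(range(len(rows)), n):
        total = sum(rows[r][j] for j, r in enumerate(choice))
        if total < best:
            best = total
            # map replicated row indices back to individuals
            best_assign = [choice[j] // k for j in range(n)]
    return best, best_assign

cost = [[4, 1, 3],    # individual 0
        [2, 0, 5]]    # individual 1
total, assign = solve_uap(cost)
```

    On this toy instance the optimum gives jobs 0 and 1 to individual 1 and job 2 to individual 0, at total cost 5.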

  3. Single-machine common/slack due window assignment problems with linear decreasing processing times

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.

  4. The West Midlands breast cancer screening status algorithm - methodology and use as an audit tool.

    PubMed

    Lawrence, Gill; Kearins, Olive; O'Sullivan, Emma; Tappenden, Nancy; Wallis, Matthew; Walton, Jackie

    2005-01-01

    To illustrate the ability of the West Midlands breast screening status algorithm to assign a screening status to women with malignant breast cancer, and its uses as a quality assurance and audit tool. Breast cancers diagnosed between the introduction of the National Health Service [NHS] Breast Screening Programme and 31 March 2001 were obtained from the West Midlands Cancer Intelligence Unit (WMCIU). Screen-detected tumours were identified via breast screening units, and the remaining cancers were assigned to one of eight screening status categories. Multiple primaries and recurrences were excluded. A screening status was assigned to 14,680 women (96% of the cohort examined), 110 cancers were not registered at the WMCIU and the cohort included 120 screen-detected recurrences. The West Midlands breast screening status algorithm is a robust simple tool which can be used to derive data to evaluate the efficacy and impact of the NHS Breast Screening Programme.

  5. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data; efficient storage means removing redundancy from the data stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel scheme of assigning binary bits to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, including large genomes. Significantly better compression results show that the “DNABIT Compress” algorithm outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (Unique BIT CODE) to fragments of the DNA sequence (exact repeats, reverse repeats) is a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
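
    The baseline such repeat-aware compressors improve on is the fixed 2-bit-per-base packing of DNA, which costs exactly 2.0 bits/base. The sketch below shows only this naive baseline, not DNABIT Compress's bit-code assignment for repeat fragments:

```python
# Fixed 2-bit-per-base packing of a DNA sequence: the naive baseline
# (2.0 bits/base) that dedicated compressors such as DNABIT Compress
# beat by additionally exploiting exact and reverse repeats.
ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
DECODE = {v: k for k, v in ENCODE.items()}

def pack(seq):
    """Pack a DNA string into an integer, 2 bits per base."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | ENCODE[base]
    return bits, len(seq)

def unpack(bits, n):
    """Recover the DNA string of length n from its packed form."""
    out = []
    for shift in range(2 * (n - 1), -1, -2):
        out.append(DECODE[(bits >> shift) & 0b11])
    return "".join(out)

seq = "ACGTAGGT"
packed, n = pack(seq)
assert unpack(packed, n) == seq        # lossless round trip
```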

  6. Reinforcement learning in scheduling

    NASA Technical Reports Server (NTRS)

    Dietterich, Tom G.; Ok, Dokyeong; Zhang, Wei; Tadepalli, Prasad

    1994-01-01

    The goal of this research is to apply reinforcement learning methods to real-world problems like scheduling. In this preliminary paper, we show that learning to solve scheduling problems such as the Space Shuttle Payload Processing and the Automatic Guided Vehicle (AGV) scheduling can be usefully studied in the reinforcement learning framework. We discuss some of the special challenges posed by the scheduling domain to these methods and propose some possible solutions we plan to implement.

  7. Dynamic routing and spectrum assignment based on multilayer virtual topology and ant colony optimization in elastic software-defined optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Fu; Liu, Bo; Zhang, Lijia; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun

    2017-07-01

    Elastic software-defined optical networks greatly improve the flexibility of optical switching networks, but they also bring new challenges to routing and spectrum assignment (RSA). A multilayer virtual topology model is proposed to solve RSA problems. Two RSA algorithms based on the virtual topology are proposed: the ant colony optimization (ACO) algorithm of minimum consecutiveness loss and the ACO algorithm of maximum spectrum consecutiveness. Owing to the computing power of the control layer in the software-defined network, the routing algorithm avoids frequent exchange of link-state information between routers. Based on the effect of the spectrum consecutiveness loss on the pheromone in the ACO, the path and spectrum with the least impact on the network are selected for the service request. The proposed algorithms have been compared with other algorithms. The results show that the proposed algorithms can reduce the blocking rate by at least 5% and perform better in spectrum efficiency. Moreover, the proposed algorithms can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness.

  8. Algorithms for selecting informative marker panels for population assignment.

    PubMed

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
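
    The greedy strategy described above can be sketched generically: repeatedly add the marker that most improves a panel-level score, rather than ranking markers by individual informativeness. The score function and marker values below are toy stand-ins for assignment accuracy on simulated individuals, not the paper's data.

```python
def greedy_panel(markers, panel_size, score):
    """Forward ('greedy') selection of an informative marker panel:
    at each step, add the marker that most improves the panel score.
    `score` maps a candidate panel to an accuracy-like value."""
    panel = []
    candidates = set(markers)
    while len(panel) < panel_size and candidates:
        best = max(candidates, key=lambda m: score(panel + [m]))
        panel.append(best)
        candidates.remove(best)
    return panel

# Toy model: each marker has an individual information value, but
# markers in the same linkage group are largely redundant.
info = {"m1": 0.50, "m2": 0.45, "m3": 0.40, "m4": 0.10}
group = {"m1": "A", "m2": "A", "m3": "B", "m4": "B"}

def toy_score(panel):
    seen, total = set(), 0.0
    for m in sorted(panel, key=info.get, reverse=True):
        total += info[m] if group[m] not in seen else info[m] * 0.2
        seen.add(group[m])
    return total

panel = greedy_panel(info, 2, toy_score)
```

    Here the greedy pass picks m1 and then skips the individually strong but redundant m2 in favor of m3, which is exactly why the multivariate procedures can beat the univariate accumulation procedure.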

  9. An efficient randomized algorithm for contact-based NMR backbone resonance assignment.

    PubMed

    Kamisetty, Hetunandan; Bailey-Kellogg, Chris; Pandurangan, Gopal

    2006-01-15

    Backbone resonance assignment is a critical bottleneck in studies of protein structure, dynamics and interactions by nuclear magnetic resonance (NMR) spectroscopy. A minimalist approach to assignment, which we call 'contact-based', seeks to dramatically reduce experimental time and expense by replacing the standard suite of through-bond experiments with the through-space (nuclear Overhauser enhancement spectroscopy, NOESY) experiment. In the contact-based approach, spectral data are represented in a graph with vertices for putative residues (of unknown relation to the primary sequence) and edges for hypothesized NOESY interactions, such that observed spectral peaks could be explained if the residues were 'close enough'. Due to experimental ambiguity, several incorrect edges can be hypothesized for each spectral peak. An assignment is derived by identifying consistent patterns of edges (e.g. for alpha-helices and beta-sheets) within a graph and by mapping the vertices to the primary sequence. The key algorithmic challenge is to be able to uncover these patterns even when they are obscured by significant noise. This paper develops, analyzes and applies a novel algorithm for the identification of polytopes representing consistent patterns of edges in a corrupted NOESY graph. Our randomized algorithm aggregates simplices into polytopes and fixes inconsistencies with simple local modifications, called rotations, that maintain most of the structure already uncovered. In characterizing the effects of experimental noise, we employ an NMR-specific random graph model in proving that our algorithm gives optimal performance in expected polynomial time, even when the input graph is significantly corrupted. We confirm this analysis in simulation studies with graphs corrupted by up to 500% noise. Finally, we demonstrate the practical application of the algorithm on several experimental beta-sheet datasets. 
Our approach is able to eliminate a large majority of noise edges and to uncover large consistent sets of interactions. Our algorithm has been implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.

  10. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for a practical application critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, which is a significant attribute of a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate, in a simulation study, the trade-off between network throughput and computation complexity in the routing table update procedure.
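
    The core RWA step, shortest-path routing followed by first-fit wavelength assignment under the wavelength-continuity constraint, can be sketched as below. This is a generic illustration, not the paper's heuristics, which additionally order the requests (e.g. hottest first) and differ in their routing-table update procedures; the topology and wavelength counts are invented.

```python
from collections import deque

def route_and_assign(graph, n_wavelengths, used, src, dst):
    """Shortest-path routing plus first-fit wavelength assignment:
    take a BFS shortest path, then grab the lowest-index wavelength
    that is free on every hop (wavelength continuity).
    `used[(u, v)]` is the set of wavelengths occupied on a link."""
    # BFS shortest path from src to dst
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    if dst not in prev:
        return None                       # unreachable
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    path.reverse()
    hops = list(zip(path, path[1:]))
    # First fit: lowest wavelength free on all hops
    for w in range(n_wavelengths):
        if all(w not in used[hop] for hop in hops):
            for hop in hops:
                used[hop].add(w)
            return path, w
    return None                           # request blocked

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
used = {("A", "B"): {0}, ("B", "C"): set()}
result = route_and_assign(graph, 4, used, "A", "C")
```

    Because wavelength 0 is already busy on link A-B, the request is routed A-B-C on wavelength 1.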

  11. Normalized Cut Algorithm for Automated Assignment of Protein Domains

    NASA Technical Reports Server (NTRS)

    Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present a novel computational method for automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful in image-partitioning applications. This graph-theory-based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly connected components. A computer implementation of our method, tested on the standard comparison set of proteins from the literature, shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as reliance on few adjustable parameters, linear run-time with respect to the size of the protein, and reduced complexity compared to other graph-theory-based algorithms, would make it an attractive tool for structural biologists.

  12. Algorithmic Case Pedagogy, Learning and Gender

    ERIC Educational Resources Information Center

    Bromley, Robert; Huang, Zhenyu

    2015-01-01

    Great investment has been made in developing algorithmically-based cases within online homework management systems. This has been done because publishers are convinced that textbook adoption decisions are influenced by the incorporation of these systems within their products. These algorithmic assignments are thought to promote learning while…

  13. On marker-based parentage verification via non-linear optimization.

    PubMed

    Boerner, Vinzent

    2017-06-15

    Parentage verification by molecular markers is mainly based on short tandem repeat markers. Single nucleotide polymorphisms (SNPs) as bi-allelic markers have become the markers of choice for genotyping projects. Thus, the subsequent step is to use SNP genotypes for parentage verification as well. Recent algorithmic developments, such as evaluating opposing homozygous SNP genotypes, have drawbacks, for example the inability to reject all animals in a sample of potential parents. This paper describes an algorithm for parentage verification by constrained regression which overcomes this limitation and proves to be very fast and accurate even when the number of SNPs is as low as 50. The algorithm was tested on a sample of 14,816 animals with 50, 100 and 500 SNP genotypes randomly selected from 40k genotypes. The samples of putative parents of these animals contained either five random animals, or four random animals and the true sire. Parentage assignment was performed by ranking of regression coefficients, or by setting a minimum threshold for regression coefficients. The assignment quality was evaluated by the power of assignment (P[Formula: see text]) and the power of exclusion (P[Formula: see text]). If the sample of putative parents contained the true sire and parentage was assigned by coefficient ranking, P[Formula: see text] and P[Formula: see text] were both higher than 0.99 for the 500 and 100 SNP genotypes, and higher than 0.98 for the 50 SNP genotypes. When parentage was assigned by a coefficient threshold, P[Formula: see text] was higher than 0.99 regardless of the number of SNPs, but P[Formula: see text] decreased from 0.99 (500 SNPs) to 0.97 (100 SNPs) and 0.92 (50 SNPs). If the sample of putative parents did not contain the true sire and parentage was rejected using a coefficient threshold, the algorithm achieved a P[Formula: see text] of 1 (500 SNPs), 0.99 (100 SNPs) and 0.97 (50 SNPs). 
The algorithm described here is easy to implement, fast and accurate, and is able to assign parentage using genomic marker data with a size as low as 50 SNPs.
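
    The opposing-homozygote test mentioned above as prior art is easy to sketch: a true parent shares at least one allele with the offspring at every locus, so loci where the two are homozygous for opposite alleles count against a candidate. The genotypes below are invented; the paper's own method is constrained regression, which is not shown here.

```python
def opposing_homozygotes(offspring, candidate):
    """Count loci at which offspring and candidate are homozygous for
    opposite alleles (genotypes coded 0, 1, 2 = copies of one allele).
    A true parent's count is ~0, apart from genotyping errors."""
    return sum(1 for o, c in zip(offspring, candidate)
               if {o, c} == {0, 2})

offspring = [0, 1, 2, 2, 0, 1]
true_sire = [0, 2, 1, 2, 0, 0]   # never opposes the offspring
random_an = [2, 0, 0, 0, 2, 1]   # opposes at four loci
```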

  14. Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network.

    PubMed

    Choi, Sangil; Park, Jong Hyuk

    2016-12-02

    Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM.
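
    The idea behind interference-aware channel assignment can be sketched as a greedy parent-first pass over the multicast tree: each node takes the channel that minimizes the summed interference factors with its already-assigned neighbors. The tree, channel set, and interference factors below are invented, and this is a sketch of the idea, not MICA's exact procedure.

```python
def assign_channels(tree, channels, interference):
    """Greedy top-down channel assignment for a multicast tree.
    `tree` maps each node to its neighbors, listed parent-first;
    `interference[(a, b)]` is an interference factor between nearby
    nodes. Each node takes the channel with the lowest summed
    interference against neighbors assigned so far."""
    assignment = {}
    for node, neighbors in tree.items():     # insertion order = parent-first
        def cost(ch):
            return sum(interference.get((node, nb), 0)
                       for nb in neighbors
                       if assignment.get(nb) == ch)
        assignment[node] = min(channels, key=cost)
    return assignment

tree = {"root": ["r1", "r2"], "r1": ["root"], "r2": ["root"]}
interference = {("r1", "root"): 1.0, ("r2", "root"): 1.0}
chans = assign_channels(tree, [1, 2, 3], interference)
# both relays avoid the root's channel, since sharing it would interfere
```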

  15. Two New Tools for Glycopeptide Analysis Researchers: A Glycopeptide Decoy Generator and a Large Data Set of Assigned CID Spectra of Glycopeptides.

    PubMed

    Lakbub, Jude C; Su, Xiaomeng; Zhu, Zhikai; Patabandige, Milani W; Hua, David; Go, Eden P; Desaire, Heather

    2017-08-04

    The glycopeptide analysis field is tightly constrained by a lack of effective tools that translate mass spectrometry data into meaningful chemical information, and perhaps the most challenging aspect of building effective glycopeptide analysis software is designing an accurate scoring algorithm for MS/MS data. We provide the glycoproteomics community with two tools to address this challenge. The first tool, a curated set of 100 expert-assigned CID spectra of glycopeptides, contains a diverse set of spectra from a variety of glycan types; the second tool, Glycopeptide Decoy Generator, is a new software application that generates glycopeptide decoys de novo. We developed these tools so that emerging methods of assigning glycopeptides' CID spectra could be rigorously tested. Software developers or those interested in developing skills in expert (manual) analysis can use these tools to facilitate their work. We demonstrate the tools' utility in assessing the quality of one particular glycopeptide software package, GlycoPep Grader, which assigns glycopeptides to CID spectra. We first acquired the set of 100 expert assigned CID spectra; then, we used the Decoy Generator (described herein) to generate 20 decoys per target glycopeptide. The assigned spectra and decoys were used to test the accuracy of GlycoPep Grader's scoring algorithm; new strengths and weaknesses were identified in the algorithm using this approach. Both newly developed tools are freely available. The software can be downloaded at http://glycopro.chem.ku.edu/GPJ.jar.

  16. Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network

    PubMed Central

    Choi, Sangil; Park, Jong Hyuk

    2016-01-01

    Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM. PMID:27918438

  17. UAVs Task and Motion Planning in the Presence of Obstacles and Prioritized Targets

    PubMed Central

    Gottlieb, Yoav; Shima, Tal

    2015-01-01

    The intertwined task assignment and motion planning problem of assigning a team of fixed-winged unmanned aerial vehicles to a set of prioritized targets in an environment with obstacles is addressed. It is assumed that the targets’ locations and initial priorities are determined using a network of unattended ground sensors used to detect potential threats at restricted zones. The targets are characterized by a time-varying level of importance, and timing constraints must be fulfilled before a vehicle is allowed to visit a specific target. It is assumed that the vehicles are carrying body-fixed sensors and, thus, are required to approach a designated target while flying straight and level. The fixed-winged aerial vehicles are modeled as Dubins vehicles, i.e., having a constant speed and a minimum turning radius constraint. The investigated integrated problem of task assignment and motion planning is posed in the form of a decision tree, and two search algorithms are proposed: an exhaustive algorithm that improves over run time and provides the minimum cost solution, encoded in the tree, and a greedy algorithm that provides a quick feasible solution. To satisfy the target’s visitation timing constraint, a path elongation motion planning algorithm amidst obstacles is provided. Using simulations, the performance of the algorithms is compared, evaluated and exemplified. PMID:26610522

  18. A novel flexible microfluidic meshwork to reduce fibrosis in glaucoma surgery.

    PubMed

    Amoozgar, Behzad; Wei, Xiaoling; Hui Lee, Jun; Bloomer, Michele; Zhao, Zhengtuo; Coh, Paul; He, Fei; Luan, Lan; Xie, Chong; Han, Ying

    2017-01-01

    Fibrosis and hence capsule formation around glaucoma implants are the main reasons for glaucoma implant failure. To address these issues, we designed a microfluidic meshwork and tested its biocompatibility in a rabbit eye model. The amount of fibrosis elicited by the microfluidic meshwork was compared to the amount elicited by the plate of a conventional glaucoma drainage device. Six eyes from 3 New Zealand albino rabbits were randomized to receive either the novel microfluidic meshwork or a plate of Ahmed glaucoma valve model PF7 (AGV PF7). The flexible microfluidic implant was made from negative photoresist SU-8 by using micro-fabrication techniques. The overall size of the meshwork was 7 mm × 7 mm with a grid period of 100 μm. Both implants were placed in the subtenon space at the supratemporal quadrant in a standard fashion. There was no communication between the implants and the anterior chamber via a tube. All animal eyes were examined for signs of infection and implant erosion on days 1, 3, 7, and 14 and then monthly. Exenterations were performed in which the entire orbital contents were removed at 3 months. Histology slides of the implant and the surrounding tissues were prepared and stained with hematoxylin-eosin. Thicknesses of the fibrous capsules beneath the implants were measured and compared between the two groups with a paired Student's t-test. The gross histological sections showed that nearly no capsule formed around the microfluidic meshwork, in contrast to the thick capsule formed around the plate of AGV PF7. The thicknesses of the fibrotic capsules beneath the AGV PF7 plate from the 3 rabbit eyes were 90 μm, 82 μm, and 95 μm, respectively. The thicknesses at the bottom of the fibrotic capsules around the new microfluidic implant were 1 μm, 2 μm, and 1 μm, respectively. The difference in capsule thickness between the two groups was significant (P = 0.002). No complications were noticed in the 6 eyes, and both implants were tolerated well by all rabbits.
The microfluidic meshwork elicited minimal fibrosis and capsule formation after 3 months of implantation in a rabbit model. This provides promising evidence to aid the future development of a new glaucoma drainage implant that will elicit minimal scar formation and provide better long-term surgical outcomes.

  19. Absolute Points for Multiple Assignment Problems

    ERIC Educational Resources Information Center

    Adlakha, V.; Kowalski, K.

    2006-01-01

    An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…

  20. Faster than classical quantum algorithm for dense formulas of exact satisfiability and occupation problems

    NASA Astrophysics Data System (ADS)

    Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán

    2016-07-01

    We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability instance; the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by O(√(2^(n−M′))) and O(2^(n−M′)), respectively, where n is the number of variables and M′ is the number of linearly independent clauses. Remarkably, the proposed quantum algorithm proves to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve.
The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as Occupation problem. The general version of the algorithm is presented and analyzed.
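For concreteness, the Exact Satisfiability constraint (exactly one true literal per clause) can be stated as a classical brute-force counter. The sketch below simply enumerates all 2^n assignments, which is exactly the exponential cost the quantum search improves on; it is a reference definition, not part of the proposed algorithm.

```python
from itertools import product

def count_exact_sat(n, clauses):
    """clauses: tuples of literals; +i means x_i true, -i means x_i false
    (variables are numbered 1..n). Counts Exact-SAT solutions."""
    count = 0
    for bits in product((False, True), repeat=n):
        value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)  # literal truth value
        # Exact Satisfiability: every clause has exactly one true literal
        if all(sum(value(l) for l in clause) == 1 for clause in clauses):
            count += 1
    return count
```

For example, the single clause (x1, x2) over two variables has exactly two Exact-SAT solutions: x1 true with x2 false, and x1 false with x2 true.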

  1. Comparison of neural network applications for channel assignment in cellular TDMA networks and dynamically sectored PCS networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1997-04-01

    The use of artificial neural networks (NNs) to address the channel assignment problem (CAP) for cellular time-division multiple access and code-division multiple access networks has previously been investigated by this author and many others. The investigations to date have been based on a hexagonal cell structure established by omnidirectional antennas at the base stations. No account was taken of the use of spatial isolation enabled by directional antennas to reduce interference between mobiles. Any reduction in interference translates into increased capacity and consequently alters the performance of the NNs. Previous studies have sought to improve the performance of Hopfield-Tank network algorithms and self-organizing feature map algorithms applied primarily to static channel assignment (SCA) for cellular networks that handle uniformly distributed, stationary traffic in each cell for a single type of service. The resulting algorithms minimize energy functions representing interference constraints and ad hoc conditions that promote convergence to optimal solutions. While the structures of the derived neural network algorithms (NNAs) offer the potential advantages of inherent parallelism and adaptability to changing system conditions, this potential has yet to be fulfilled for the CAP in emerging mobile networks. Next-generation communication infrastructures must accommodate dynamic operating conditions. Macrocell topologies are being refined to microcells and picocells that can be dynamically sectored by adaptively controlled, directional antennas and programmable transceivers. These networks must support the time-varying demands for personal communication services (PCS) that simultaneously carry voice, data and video and, thus, require new dynamic channel assignment (DCA) algorithms.
This paper examines the impact of dynamic cell sectoring and geometric conditioning on NNAs developed for SCA in omnicell networks with stationary traffic to improve the metrics of convergence rate and call blocking. Genetic algorithms (GAs) are also considered in PCS networks as a means to overcome the known weakness of Hopfield NNAs in determining global minima. The resulting GAs for DCA in PCS networks are compared to improved DCA algorithms based on Hopfield NNs for stationary cellular networks. Algorithm performance is compared on the basis of rate of convergence, blocking probability, analytic complexity, and parametric sensitivity to transient traffic demands and channel interference.

  2. Strategic Control Algorithm Development : Volume 1. Summary.

    DOT National Transportation Integrated Search

    1974-08-01

    Strategic control is an air traffic management concept wherein a central control authority determines, and assigns to each participating airplane, a conflict-free, four-dimensional route-time profile. The route-time profile assignments are long term ...

  3. Rule-based support system for multiple UMLS semantic type assignments

    PubMed Central

    Geller, James; He, Zhe; Perl, Yehoshua; Morrey, C. Paul; Xu, Julia

    2012-01-01

    Background When new concepts are inserted into the UMLS, they are assigned one or several semantic types from the UMLS Semantic Network by the UMLS editors. However, not every combination of semantic types is permissible. It was observed that many concepts with rare combinations of semantic types have erroneous semantic type assignments or prohibited combinations of semantic types. The correction of such errors is resource-intensive. Objective We design a computational system to inform UMLS editors as to whether a specific combination of two, three, four, or five semantic types is permissible, prohibited, or questionable. Methods We identify a set of inclusion and exclusion instructions in the UMLS Semantic Network documentation and derive corresponding rule-categories, as well as rule-categories from the UMLS concept content. We then design an algorithm, adviseEditor, based on these rule-categories. The algorithm specifies rules for how an editor should proceed when considering a tuple (pair, triple, quadruple, quintuple) of semantic types to be assigned to a concept. Results Eight rule-categories were identified. A Web-based system was developed to implement the adviseEditor algorithm, which returns, for an input combination of semantic types, whether it is permitted, prohibited or (in a few cases) requires more research. The numbers of semantic type pairs assigned to each rule-category are reported. Interesting examples for each rule-category are illustrated. Cases of semantic type assignments that contradict rules are listed, including recently introduced ones. Conclusion The adviseEditor system implements explicit and implicit knowledge available in the UMLS in a system that informs UMLS editors about the permissibility of a desired combination of semantic types. Using adviseEditor might help accelerate the work of the UMLS editors and prevent erroneous semantic type assignments. PMID:23041716
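In spirit, the adviseEditor lookup reduces to a table keyed by order-insensitive combinations of semantic types. The rule table below is invented purely for illustration; the real system derives its eight rule-categories from the UMLS Semantic Network documentation and UMLS concept content.

```python
# Invented example entries; the real rule-categories come from the UMLS
# Semantic Network documentation and UMLS concept content.
RULES = {
    frozenset({"Disease or Syndrome", "Neoplastic Process"}): "prohibited",
    frozenset({"Enzyme", "Receptor"}): "permitted",
}

def advise(semantic_types, default="requires review"):
    """Return the verdict for a tuple of semantic types; frozenset makes
    the lookup insensitive to the order in which types are listed."""
    return RULES.get(frozenset(semantic_types), default)
```

Because the key is a frozenset, `advise(("Enzyme", "Receptor"))` and `advise(("Receptor", "Enzyme"))` return the same verdict.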

  4. Developing an eco-routing application.

    DOT National Transportation Integrated Search

    2014-01-01

    The study develops eco-routing algorithms and investigates and quantifies the system-wide impacts of implementing an eco-routing system. Two eco-routing algorithms are developed: one based on vehicle sub-populations (ECO-Subpopulation Feedback Assign...

  5. A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion

    NASA Astrophysics Data System (ADS)

    Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen

    2017-09-01

    In recent years, 3D movies have attracted more and more attention because of their immersive stereoscopic experience. However, 3D content is still insufficient, so estimating depth information for 2D-to-3D conversion from a video is increasingly important. In this paper, we present a novel algorithm to estimate depth information from a video via a scene classification algorithm. In order to obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape type, close-up type, and linear perspective type. For a landscape-type image, we employ a specific algorithm to divide the image into many blocks and assign depth values using the relative-height cue within the image. For a close-up-type image, a saliency-based method is adopted to enhance the foreground, and this is combined with the global depth gradient to generate the final depth map. For a linear-perspective-type image, vanishing line detection locates the vanishing point, which is regarded as the farthest point from the viewer and assigned the deepest depth value; according to the distance between each other point and the vanishing point, the entire image is assigned corresponding depth values. Finally, depth-image-based rendering is employed to generate stereoscopic virtual views after bilateral filtering. Experiments show that the proposed algorithm can achieve realistic 3D effects and yield satisfactory results, with perception scores of anaglyph images between 6.8 and 7.8.

  6. Crystal Identification in Dual-Layer-Offset DOI-PET Detectors Using Stratified Peak Tracking Based on SVD and Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Wei, Qingyang; Dai, Tiantian; Ma, Tianyu; Liu, Yaqiang; Gu, Yu

    2016-10-01

    An Anger-logic based pixelated PET detector block requires a crystal position map (CPM) to assign the position of each detected event to a most probable crystal index. Accurate assignments are crucial to PET imaging performance. In this paper, we present a novel automatic approach to generating the CPMs for dual-layer offset (DLO) PET detectors using a stratified peak tracking method, in which the top and bottom layers are distinguished by their intensity difference and the peaks of the two layers are tracked by a singular value decomposition (SVD) and a mean-shift algorithm in succession. The CPM is created by classifying each pixel to its nearest peak and assigning the pixel the crystal index of that peak. A Matlab-based graphical user interface program was developed, including the automatic algorithm and a manual interaction procedure. The algorithm was tested on three DLO PET detector blocks. Results show that the proposed method exhibits good performance as well as robustness for all three blocks. Compared to existing methods, our approach can directly distinguish the layer and crystal indices using the information of intensity and the offset grid pattern.

  7. Solidification Structure Synthesis in Undercooled Liquids

    DTIC Science & Technology

    1993-10-18

    (No abstract available; the record text consists of OCR fragments of the report's table of contents and data tables, including supersaturation in Sn-Sb alloys, microstructural transitions in Fe-Ni alloys, droplet nucleation kinetics, and the thermodynamic stability of oxide particles (Al2O3, TiO, Y2O3) in Sn.)

  8. Wavelength converter placement for different RWA algorithms in wavelength-routed all-optical networks

    NASA Astrophysics Data System (ADS)

    Chu, Xiaowen; Li, Bo; Chlamtac, Imrich

    2002-07-01

    Sparse wavelength conversion and appropriate routing and wavelength assignment (RWA) algorithms are the two key factors in improving the blocking performance in wavelength-routed all-optical networks. It has been shown that the optimal placement of a limited number of wavelength converters in an arbitrary mesh network is an NP-complete problem. Various heuristic algorithms have been proposed in the literature, most of which assume that a static routing and random wavelength assignment RWA algorithm is employed. However, existing work shows that fixed-alternate routing and dynamic routing RWA algorithms can achieve much better blocking performance. Our study in this paper further demonstrates that wavelength converter placement and RWA algorithms are closely related, in the sense that a well-designed wavelength converter placement mechanism for a particular RWA algorithm might not work well with a different RWA algorithm. Therefore, wavelength converter placement and RWA have to be considered jointly. The objective of this paper is to investigate the wavelength converter placement problem under the fixed-alternate routing algorithm and the least-loaded routing algorithm. Under the fixed-alternate routing algorithm, we propose a heuristic algorithm called the Minimum Blocking Probability First (MBPF) algorithm for wavelength converter placement. Under the least-loaded routing algorithm, we propose a heuristic converter placement algorithm called the Weighted Maximum Segment Length (WMSL) algorithm. The objective of the converter placement algorithm is to minimize the overall blocking probability. Extensive simulation studies have been carried out over three typical mesh networks, including the 14-node NSFNET, 19-node EON and 38-node CTNET.
We observe that the proposed algorithms not only outperform existing wavelength converter placement algorithms by a large margin but also achieve almost the same performance as full wavelength conversion under the same RWA algorithm.
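As background for why converter placement matters: without converters, a lightpath must find a single wavelength free on every link of its route (the wavelength-continuity constraint), and converters relax exactly this constraint. A minimal first-fit assignment sketch, with illustrative names, might look like:

```python
def first_fit(route_links, link_state, num_wavelengths):
    """route_links: link ids along the route;
    link_state: {link: set of wavelengths already in use}.
    Returns the assigned wavelength, or None if the request is blocked."""
    for w in range(num_wavelengths):
        # wavelength-continuity: w must be free on every link of the route
        if all(w not in link_state.get(l, set()) for l in route_links):
            for l in route_links:
                link_state.setdefault(l, set()).add(w)
            return w
    return None  # blocked: no wavelength free on the whole route
```

A request is blocked as soon as no single wavelength is free end to end, even if each link individually has free capacity; placing a converter at an intermediate node would let the path switch wavelengths there.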

  9. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    PubMed Central

    Zhang, Xuejun; Lei, Jiaxing

    2015-01-01

    To reduce airspace congestion and flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. First, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Second, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems that are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach can improve solution quality effectively, showing superiority to existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topology. PMID:26180840
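The first front of the nondominated sorting step can be illustrated with a minimal Pareto filter over the two objectives here (congestion and delay, both minimized). This is a generic sketch, not the paper's implementation:

```python
def pareto_front(points):
    """points: list of (congestion, delay) tuples, both to be minimized.
    A point survives unless some other point is at least as good in both
    objectives (i.e., unless it is dominated)."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in points)]
```

NSGA-II repeatedly peels off such fronts to rank a population, then uses crowding distance to break ties within a front.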

  10. Administrative Data Algorithms Can Describe Ambulatory Physician Utilization

    PubMed Central

    Shah, Baiju R; Hux, Janet E; Laupacis, Andreas; Zinman, Bernard; Cauch-Dudek, Karen; Booth, Gillian L

    2007-01-01

    Objective To validate algorithms using administrative data that characterize ambulatory physician care for patients with a chronic disease. Data Sources Seven-hundred and eighty-one people with diabetes were recruited mostly from community pharmacies to complete a written questionnaire about their physician utilization in 2002. These data were linked with administrative databases detailing health service utilization. Study Design An administrative data algorithm was defined that identified whether or not patients received specialist care, and it was tested for agreement with self-report. Other algorithms, which assigned each patient to a primary care and specialist physician, were tested for concordance with self-reported regular providers of care. Principal Findings The algorithm to identify whether participants received specialist care had 80.4 percent agreement with questionnaire responses (κ = 0.59). Compared with self-report, administrative data had a sensitivity of 68.9 percent and specificity 88.3 percent for identifying specialist care. The best administrative data algorithm to assign each participant's regular primary care and specialist providers was concordant with self-report in 82.6 and 78.2 percent of cases, respectively. Conclusions Administrative data algorithms can accurately match self-reported ambulatory physician utilization. PMID:17610448
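The validation metrics reported in this record (agreement, sensitivity, specificity, and κ) all derive from a 2x2 table of algorithm output versus self-report. A worked sketch, with invented counts, is:

```python
def validation_metrics(tp, fp, fn, tn):
    """tp/fp/fn/tn: counts from a 2x2 table of algorithm vs. self-report.
    Returns (raw agreement, sensitivity, specificity, Cohen's kappa)."""
    total = tp + fp + fn + tn
    p_o = (tp + tn) / total                   # observed agreement
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # chance agreement from the marginal proportions
    p_yes = ((tp + fp) / total) * ((tp + fn) / total)
    p_no = ((fn + tn) / total) * ((fp + tn) / total)
    p_e = p_yes + p_no
    kappa = (p_o - p_e) / (1 - p_e)           # Cohen's kappa
    return p_o, sensitivity, specificity, kappa
```

For example, with invented counts tp=40, fp=10, fn=10, tn=40, agreement is 0.8 and κ is 0.6, i.e. substantial agreement beyond chance.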

  11. Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.

    PubMed

    He, Lifeng; Chao, Yuyan; Suzuki, Kenji

    2011-08-01

    Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images. One is voxel based and the other is run based. For the voxel-based one, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based one, instead of assigning a provisional label to each foreground voxel, we assign one to each run. Moreover, we use run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
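The two-scan, label-equivalence idea can be sketched on a 2-D grid for brevity (the paper treats 3-D voxels and a run-based variant): the first scan hands out provisional labels and records equivalences in a union-find structure, and the second scan flattens each label to its representative. This is an illustrative reduction, not the authors' code.

```python
def label_components(grid):
    """grid: list of rows of 0/1; returns 4-connected component labels."""
    parent = [0]                            # union-find over provisional labels

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    labels = [[0] * len(row) for row in grid]
    for i, row in enumerate(grid):          # first scan: provisional labels
        for j, v in enumerate(row):
            if not v:
                continue
            up = labels[i - 1][j] if i else 0
            left = labels[i][j - 1] if j else 0
            if up and left:                 # record the label equivalence
                ru, rl = find(up), find(left)
                labels[i][j] = m = min(ru, rl)
                parent[ru] = parent[rl] = m
            elif up or left:
                labels[i][j] = up or left
            else:                           # new provisional label
                parent.append(len(parent))
                labels[i][j] = len(parent) - 1
    for i, row in enumerate(labels):        # second scan: resolve labels
        for j, l in enumerate(row):
            if l:
                labels[i][j] = find(l)
    return labels
```

Only foreground pixels are touched in the resolution pass; the paper's run-based variant goes further and skips background voxels entirely in the second scan.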

  12. Detecting non-orthology in the COGs database and other approaches grouping orthologs using genome-specific best hits.

    PubMed

    Dessimoz, Christophe; Boeckmann, Brigitte; Roth, Alexander C J; Gonnet, Gaston H

    2006-01-01

    Correct orthology assignment is a critical prerequisite of numerous comparative genomics procedures, such as function prediction, construction of phylogenetic species trees and genome rearrangement analysis. We present an algorithm for the detection of non-orthologs that arise by mistake in current orthology classification methods based on genome-specific best hits, such as the COGs database. The algorithm works with pairwise distance estimates, rather than computationally expensive and error-prone tree-building methods. The accuracy of the algorithm is evaluated through verification of the distribution of predicted cases, case-by-case phylogenetic analysis and comparisons with predictions from other projects using independent methods. Our results show that a very significant fraction of the COG groups include non-orthologs: using conservative parameters, the algorithm detects non-orthology in a third of all COG groups. Consequently, sequence analysis sensitive to correct orthology assignments will greatly benefit from these findings.

  13. Energy-Efficient Routing and Spectrum Assignment Algorithm with Physical-Layer Impairments Constraint in Flexible Optical Networks

    NASA Astrophysics Data System (ADS)

    Zhao, Jijun; Zhang, Nawa; Ren, Danping; Hu, Jinhua

    2017-12-01

    The recently proposed flexible optical network can accommodate multiple data rates more efficiently than current wavelength-routed optical networks. Meanwhile, energy efficiency has become a pressing concern because of growing energy consumption. In this paper, the energy efficiency problem of flexible optical networks under a physical-layer impairments constraint is studied. We propose a combined impairment-aware and energy-efficient routing and spectrum assignment (RSA) algorithm based on link availability, in which the impact of power consumption minimization on signal quality is considered. By applying the proposed algorithm, connection requests are established on a subset of the network topology, reducing the number of transitions from sleep to active state. The simulation results demonstrate that our proposed algorithm can improve energy efficiency and spectrum resource utilization with acceptable blocking probability and average delay.

  14. Relabeling exchange method (REM) for learning in neural networks

    NASA Astrophysics Data System (ADS)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.

  15. Array distribution in data-parallel programs

    NASA Technical Reports Server (NTRS)

    Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert; Sheffler, Thomas J.

    1994-01-01

    We consider distribution at compile time of the array data in a distributed-memory implementation of a data-parallel program written in a language like Fortran 90. We allow dynamic redistribution of data and define a heuristic algorithmic framework that chooses distribution parameters to minimize an estimate of program completion time. We represent the program as an alignment-distribution graph. We propose a divide-and-conquer algorithm for distribution that initially assigns a common distribution to each node of the graph and successively refines this assignment, taking computation, realignment, and redistribution costs into account. We explain how to estimate the effect of distribution on computation cost and how to choose a candidate set of distributions. We present the results of an implementation of our algorithms on several test problems.

  16. Formularity: Software for Automated Formula Assignment of Natural and Other Organic Matter from Ultrahigh-Resolution Mass Spectra.

    PubMed

    Tolić, Nikola; Liu, Yina; Liyu, Andrey; Shen, Yufeng; Tfaily, Malak M; Kujawinski, Elizabeth B; Longnecker, Krista; Kuo, Li-Jung; Robinson, Errol W; Paša-Tolić, Ljiljana; Hess, Nancy J

    2017-12-05

    Ultrahigh resolution mass spectrometry, such as Fourier transform ion cyclotron resonance mass spectrometry (FT ICR MS), can resolve thousands of molecular ions in complex organic matrices. A Compound Identification Algorithm (CIA) was previously developed for automated elemental formula assignment for natural organic matter (NOM). In this work, we describe the software Formularity, which provides a user-friendly interface for the CIA function together with a newly developed search function, the Isotopic Pattern Algorithm (IPA). While CIA assigns elemental formulas for compounds containing C, H, O, N, S, and P, IPA is capable of assigning formulas for compounds containing other elements. We used halogenated organic compounds (HOC), a chemical class that is ubiquitous in nature as well as in anthropogenic systems, as an example to demonstrate the capability of Formularity with IPA. A HOC standard mix was used to evaluate the identification confidence of IPA. Tap water and a HOC spike in Suwannee River NOM were used to assess HOC identification in complex environmental samples. Strategies for reconciliation of CIA and IPA assignments are discussed. Software and sample databases with documentation are freely available.

  17. Taboo search algorithm for item assignment in synchronized zone automated order picking system

    NASA Astrophysics Data System (ADS)

    Wu, Yingying; Wu, Yaohua

    2014-07-01

    The idle time, which is part of the order fulfillment time, is determined by the number of items in a zone; the item assignment method therefore affects picking efficiency. Previous studies, however, focus only on balancing the number of item kinds between zones, not on the number of items and the idle time in each zone. In this paper, an idle factor is proposed to measure the idle time exactly. The idle factor is proven to follow the same trend as the idle time, so the objective can be simplified from minimizing idle time to minimizing the idle factor. Based on this, a model of the item assignment problem in a synchronized zone automated order picking system is built. The model is a relaxation of the parallel machine scheduling problem, which has been proven NP-complete. To solve the model, a taboo search algorithm is proposed. The main idea of the algorithm is to minimize the greatest idle factor among zones with a 2-exchange algorithm. Finally, a simulation applying data collected from a tobacco distribution center is conducted to evaluate the performance of the algorithm. The result verifies the model and shows that the algorithm reliably reduces idle time, by 45.63% on average. This research proposes an approach to measure the idle time in a synchronized zone automated order picking system. The approach can improve picking efficiency significantly and can serve as a theoretical basis when optimizing synchronized automated order picking systems.
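The 2-exchange move at the heart of the taboo search can be sketched as a plain local search. The tabu list and the paper's exact idle-factor definition are omitted; here a zone's idle factor is approximated by its summed item workload, and all names are illustrative:

```python
def two_exchange(zones, load):
    """zones: list of lists of item ids; load: {item: workload}.
    Repeatedly swap an item out of the most loaded zone whenever the
    swap lowers the maximum zone load (a 2-exchange move)."""
    zload = lambda z: sum(load[i] for i in z)
    while True:
        worst = max(range(len(zones)), key=lambda k: zload(zones[k]))
        best_swap, best_max = None, zload(zones[worst])
        for other in range(len(zones)):
            if other == worst:
                continue
            for a in zones[worst]:
                for b in zones[other]:
                    # max load of the pair if items a and b were swapped
                    new_max = max(zload(zones[worst]) - load[a] + load[b],
                                  zload(zones[other]) - load[b] + load[a])
                    if new_max < best_max:
                        best_swap, best_max = (other, a, b), new_max
        if best_swap is None:
            return zones        # no swap lowers the maximum zone load
        other, a, b = best_swap
        zones[worst].remove(a); zones[worst].append(b)
        zones[other].remove(b); zones[other].append(a)
```

A tabu list would additionally forbid recently reversed swaps, letting the search escape the local minima where this greedy version stops.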

  18. Predictive Cache Modeling and Analysis

    DTIC Science & Technology

    2011-11-01

    (Abstract fragmentary; recoverable content: a metaheuristic bin-packing algorithm optimizes task placement based on task communication characterization; earlier task-allocation work was extended with cache-miss-minimization techniques to discover nearly optimal task-assignment algorithms; the authors report that these algorithmic techniques decreased network bandwidth consumption by roughly 25%.)

  19. Guidance and control of swarms of spacecraft

    NASA Astrophysics Data System (ADS)

    Morgan, Daniel James

    There has been considerable interest in formation flying spacecraft due to their potential to perform certain tasks at lower cost than monolithic spacecraft. Formation flying enables the use of smaller, cheaper spacecraft that distribute the risk of the mission. Recently, the ideas of formation flying have been extended to spacecraft swarms made up of hundreds to thousands of 100-gram-class spacecraft known as femtosatellites. The large number of spacecraft and the limited capabilities of each individual spacecraft present a significant challenge in guidance, navigation, and control. This dissertation deals with the guidance and control algorithms required to enable the flight of spacecraft swarms. The algorithms developed in this dissertation are focused on achieving two main goals: swarm keeping and swarm reconfiguration. The objectives of swarm keeping are to maintain bounded relative distances between spacecraft, prevent collisions between spacecraft, and minimize the propellant used by each spacecraft. Swarm reconfiguration requires the transfer of the swarm to a specific shape. As with swarm keeping, minimizing the propellant used and preventing collisions are the main objectives. Additionally, the algorithms required for swarm keeping and swarm reconfiguration should be decentralized with respect to communication and computation so that they can be implemented on femtosats, which have limited hardware capabilities. The algorithms developed in this dissertation are concerned with swarms located in low Earth orbit. In these orbits, Earth oblateness and atmospheric drag have a significant effect on the relative motion of the swarm. The complicated dynamic environment of low Earth orbits further complicates the swarm-keeping and swarm-reconfiguration problems. To better develop and test these algorithms, a nonlinear, relative dynamic model with J2 and drag perturbations is developed.
This model is used throughout this dissertation to validate the algorithms using computer simulations. The swarm-keeping problem can be solved by placing the spacecraft on J2-invariant relative orbits, which prevent collisions and minimize the drift of the swarm over hundreds of orbits using a single burn. These orbits are achieved by energy matching the spacecraft to the reference orbit. Additionally, these conditions can be repeatedly applied to minimize the drift of the swarm when atmospheric drag has a large effect (orbits with an altitude under 500 km). The swarm reconfiguration is achieved using two steps: trajectory optimization and assignment. The trajectory optimization problem can be written as a nonlinear, optimal control problem. This optimal control problem is discretized, decoupled, and convexified so that the individual femtosats can efficiently solve the optimization. Sequential convex programming is used to generate the control sequences and trajectories required to safely and efficiently transfer a spacecraft from one position to another. The sequence of trajectories is shown to converge to a Karush-Kuhn-Tucker point of the nonconvex problem. In the case where many of the spacecraft are interchangeable, a variable-swarm, distributed auction algorithm is used to determine the assignment of spacecraft to target positions. This auction algorithm requires only local communication and all of the bidding parameters are stored locally. The assignment generated using this auction algorithm is shown to be near optimal and to converge in a finite number of bids. Additionally, the bidding process is used to modify the number of targets used in the assignment so that the reconfiguration can be achieved even when there is a disconnected communication network or a significant loss of agents. Once the assignment is achieved, the trajectory optimization can be run using the terminal positions determined by the auction algorithm. 
    To implement these algorithms in real time, a model predictive control formulation is used. Model predictive control uses a finite horizon to apply the most up-to-date control sequence while simultaneously calculating a new assignment and trajectory based on updated state information. Using a finite horizon allows collisions to be considered only between spacecraft that are near each other at the current time. This relaxes the all-to-all communication assumption so that only neighboring agents need to communicate. Experimental validation is performed using the formation flying testbed. The swarm-reconfiguration algorithms are tested using multiple quadrotors. Experiments have been performed using sequential convex programming for offline trajectory planning, model predictive control and sequential convex programming for real-time trajectory generation, and the variable-swarm, distributed auction algorithm for optimal assignment. These experiments show that the swarm-reconfiguration algorithms can be implemented in real time on actual hardware. In general, this dissertation presents guidance and control algorithms that maintain and reconfigure swarms of spacecraft while preserving the swarm shape, preventing collisions between the spacecraft, and minimizing the amount of propellant used.
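
    The distributed auction algorithm for assigning interchangeable spacecraft to target positions can be illustrated with a small sketch. This is a centralized, synchronous toy version of a Bertsekas-style auction, not the dissertation's decentralized implementation; the benefit values and the choice of ε are illustrative only.

```python
def auction_assignment(benefit, eps=0.01):
    """Assign each of n agents to one of n targets via a simple auction.

    benefit[i][j]: value agent i places on target j (e.g. negative fuel cost).
    Unassigned agents repeatedly bid on their best target, raising its price
    by their marginal gain plus eps; eps > 0 ensures finitely many bids.
    Returns assigned, where assigned[i] is agent i's target.
    """
    n = len(benefit)
    prices = [0.0] * n
    owner = [-1] * n          # owner[j]: agent currently holding target j
    assigned = [-1] * n       # assigned[i]: target held by agent i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [benefit[i][j] - prices[j] for j in range(n)]
        j = max(range(n), key=values.__getitem__)
        best = values[j]
        second = max(v for k, v in enumerate(values) if k != j) if n > 1 else best
        prices[j] += best - second + eps     # bid increment raises the price
        if owner[j] != -1:                   # outbid: evict the previous owner
            assigned[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j] = i
        assigned[i] = j
    return assigned
```

    The ε term is what makes the process terminate in a finite number of bids and yields a near-optimal (within nε of optimal) assignment, matching the convergence properties claimed for the algorithm above.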

  20. Extending DFT-based genetic algorithms by atom-to-place re-assignment via perturbation theory: A systematic and unbiased approach to structures of mixed-metallic clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weigend, Florian, E-mail: florian.weigend@kit.edu

    2014-10-07

    Energy surfaces of metal clusters usually show a large variety of local minima. For homo-metallic species the energetically lowest can be found reliably with genetic algorithms, in combination with density functional theory without system-specific parameters. For mixed-metallic clusters this is much more difficult, as for a given arrangement of nuclei one has to find additionally the best of many possibilities of assigning different metal types to the individual positions. In the framework of electronic structure methods this second issue is treatable at comparably low cost at least for elements with similar atomic number by means of first-order perturbation theory, as shown previously [F. Weigend, C. Schrodt, and R. Ahlrichs, J. Chem. Phys. 121, 10380 (2004)]. In the present contribution the extension of a genetic algorithm with the re-assignment of atom types to atom sites is proposed and tested for the search of the global minima of PtHf₁₂ and [LaPb₇Bi₇]⁴⁻. For both cases the (putative) global minimum is reliably found with the extended technique, which is not the case for the “pure” genetic algorithm.

  1. Quantum annealing for combinatorial clustering

    NASA Astrophysics Data System (ADS)

    Kumar, Vaibhaw; Bass, Gideon; Tomlin, Casey; Dulny, Joseph

    2018-02-01

    Clustering is a powerful machine learning technique that groups "similar" data points based on their characteristics. Many clustering algorithms work by approximating the minimization of an objective function, namely the sum of within-cluster distances between points. The straightforward approach involves examining all the possible assignments of points to each of the clusters. This approach guarantees the solution will be a global minimum; however, the number of possible assignments scales quickly with the number of data points and becomes computationally intractable even for very small datasets. In order to circumvent this issue, cost function minima are found using popular local search-based heuristic approaches such as k-means and hierarchical clustering. Due to their greedy nature, such techniques do not guarantee that a global minimum will be found and can lead to sub-optimal clustering assignments. Other classes of global search-based techniques, such as simulated annealing, tabu search, and genetic algorithms, may offer better quality results but can be too time-consuming to implement. In this work, we describe how quantum annealing can be used to carry out clustering. We map the clustering objective to a quadratic binary optimization problem and discuss two clustering algorithms which are then implemented on commercially available quantum annealing hardware, as well as on a purely classical solver "qbsolv." The first algorithm assigns N data points to K clusters, and the second one can be used to perform binary clustering in a hierarchical manner. We present our results in the form of benchmarks against well-known k-means clustering and discuss the advantages and disadvantages of the proposed techniques.
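
    The QUBO mapping can be made concrete with a small sketch: one-hot binary variables x[i,k] ("point i is in cluster k"), within-cluster distances as quadratic couplings, and a penalty enforcing the one-hot constraint. The variable layout and penalty weight below are my own choices, not necessarily the paper's, and the tiny instance is minimized by classical exhaustive search rather than an annealer.

```python
import itertools

def clustering_qubo(D, K, penalty):
    """QUBO for clustering: variable x[i*K + k] = 1 iff point i is in cluster k.

    Energy = sum of within-cluster pairwise distances
           + penalty * sum_i (sum_k x[i,k] - 1)^2   (one-hot constraint).
    Returns an upper-triangular Q as a dict {(p, q): weight}.
    """
    N = len(D)
    Q = {}
    def add(p, q, w):
        Q[(p, q)] = Q.get((p, q), 0.0) + w
    for k in range(K):
        for i in range(N):
            for j in range(i + 1, N):
                add(i * K + k, j * K + k, D[i][j])
    # (sum_k x - 1)^2 expands (for binary x) to -1 per variable on the
    # diagonal and +2 between each pair of clusters for the same point
    for i in range(N):
        for k in range(K):
            add(i * K + k, i * K + k, -penalty)
            for l in range(k + 1, K):
                add(i * K + k, i * K + l, 2 * penalty)
    return Q

def brute_force_labels(D, K, penalty):
    """Exhaustively minimize the QUBO energy (tiny instances only)."""
    N = len(D)
    Q = clustering_qubo(D, K, penalty)
    def energy(x):
        return sum(w * x[p] * x[q] for (p, q), w in Q.items())
    best = min(itertools.product([0, 1], repeat=N * K), key=energy)
    return [max(range(K), key=lambda k: best[i * K + k]) for i in range(N)]
```

    With the penalty larger than any distance saving, the minimum-energy bitstring is always a valid one-hot assignment, which is the property an annealer relies on as well.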

  2. Efficient Bifacial Semitransparent Perovskite Solar Cells Using Ag/V2O5 as Transparent Anodes.

    PubMed

    Pang, Shangzheng; Li, Xueyi; Dong, Hang; Chen, Dazheng; Zhu, Weidong; Chang, Jingjing; Lin, Zhenhua; Xi, He; Zhang, Jincheng; Zhang, Chunfu; Hao, Yue

    2018-04-18

    Bifacial semitransparent inverted planar structured perovskite solar cells (PSCs) based on Cs0.05FA0.3MA0.7PbI2.51Br0.54 using an Ag thin-film electrode and a V2O5 optical coupling layer are investigated theoretically and experimentally. It is shown that the introduction of cesium (Cs) ions in the perovskite markedly improves the device performance and stability. When only the bare Ag film electrode is used, the PSCs show a bifacial performance with a power conversion efficiency (PCE) of 14.62% illuminated from the indium tin oxide (ITO) side and 5.45% from the Ag film side. By introducing a V2O5 optical coupling layer, the PCE is enhanced to 8.91% illuminated from the Ag film side, a 63% improvement compared with the bare Ag film electrode, whereas the PCE illuminated from the ITO side remains almost unchanged. Moreover, when a back-reflector is employed, the PCE of the device can be further improved to 15.39% by illumination from the ITO side and 12.44% by illumination from the Ag side. The devices also show superior semitransparent properties and exhibit negligible photocurrent hysteresis, irrespective of the side from which the light is incident. In short, the Ag/V2O5 double layer is a promising semitransparent electrode due to its low cost and simple preparation process, and points to a new direction for bifacial PSCs and tandem solar cells.

  3. Control and Diagnosis in Integrated Product Development - Observations during the Development of an AGV

    NASA Astrophysics Data System (ADS)

    Stetter, R.; Simundsson, A.

    2015-11-01

    This paper is concerned with the integration of control and diagnosis functionalities into the development of complete systems which include mechanical, electrical and electronic subsystems. For the development of such systems the strategies, methods and tools of integrated product development have attracted significant attention during the last decades. Today, it is generally observed that product development processes of complex systems can only be successful if the activities in the different domains are well connected and synchronised and if an ongoing communication is present - an ongoing communication spanning the technical domains and also including functions such as production planning, marketing/distribution, quality assurance, service and project planning. Obviously, numerous approaches to tackle this challenge are present in scientific literature and in industrial practice, as well. Today, the functionality and safety of most products is to a large degree dependent on control and diagnosis functionalities. Still, there is comparatively little research concentrating on the integration of the development of these functionalities into the overall product development processes. The main source of insight of the presented research is the product development process of an Automated Guided Vehicle (AGV) which is intended to be used on rough terrain. The paper starts with a background describing Integrated Product Development. The second section deals with the product development of the sample product. The third part summarizes some insights and formulates first hypotheses concerning control and diagnosis in Integrated Product Development.

  4. Fuzzy logic control of an AGV

    NASA Astrophysics Data System (ADS)

    Kelkar, Nikhal; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The controller incorporates a fuzzy logic approach for steering and speed control, a neuro-fuzzy approach for ultrasound sensing (not discussed in this paper) and an overall expert system. The advantages of a modular system are related to portability and transportability, i.e. any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control with guidance provided by a vision system and obstacle avoidance using ultrasonic sensors. The speed and steering fuzzy logic controller is supervised by a 486 computer through a multi-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable independent system in which high speed computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data is collected by a vision tracking device that transmits the X, Y coordinates of the lane marker to the control computer. Simulation and testing of these systems yielded promising results. This design, in its modularity, creates a portable autonomous fuzzy logic controller applicable to any mobile vehicle with only minor adaptations.
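
    To give a flavor of how a fuzzy steering controller of this kind works, here is a minimal single-input sketch with triangular memberships and weighted-average defuzzification. The membership ranges and output angles are invented for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(lateral_error):
    """Map lateral error (m, + = right of the lane marker) to a steering angle (deg).

    Three rules: error LEFT -> steer RIGHT, ZERO -> STRAIGHT, RIGHT -> LEFT.
    Defuzzify by the weighted average of singleton output angles.
    """
    # input memberships (hypothetical ranges for a slow cart)
    mu = {
        "left":  tri(lateral_error, -2.0, -1.0, 0.0),
        "zero":  tri(lateral_error, -1.0,  0.0, 1.0),
        "right": tri(lateral_error,  0.0,  1.0, 2.0),
    }
    out = {"left": 15.0, "zero": 0.0, "right": -15.0}  # singleton outputs (deg)
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values()) or 1.0
    return num / den
```

    The overlapping memberships are what give the controller its smooth interpolation between rules, e.g. an error of 0.5 m fires both the ZERO and RIGHT rules at half strength.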

  5. An analysis of spectral envelope-reduction via quadratic assignment problems

    NASA Technical Reports Server (NTRS)

    George, Alan; Pothen, Alex

    1994-01-01

    A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe the related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate the two problems as quadratic assignment problems, and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds on the envelope size for certain classes of meshes.
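
    A toy version of the spectral ordering itself can be written in a few lines: build the graph Laplacian of the matrix's nonzero structure, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and sort its components. This dense-NumPy sketch and the simple envelope measure below are for illustration; the actual implementation details differ.

```python
import numpy as np

def spectral_order(A):
    """Reorder a symmetric matrix by sorting the Fiedler vector of its graph.

    A: symmetric adjacency structure of the matrix (nonzeros define edges).
    Returns a permutation that tends to pull nonzeros toward the diagonal.
    """
    G = (np.asarray(A) != 0).astype(float)
    np.fill_diagonal(G, 0.0)
    L = np.diag(G.sum(axis=1)) - G    # graph Laplacian
    _, V = np.linalg.eigh(L)          # eigenvalues in ascending order
    return np.argsort(V[:, 1])        # sort by the Fiedler vector

def envelope_size(A, perm):
    """Sum over rows of the distance from the first nonzero to the diagonal."""
    B = np.asarray(A)[np.ix_(perm, perm)]
    total = 0
    for i in range(B.shape[0]):
        nz = np.nonzero(B[i, : i + 1])[0]
        if len(nz):
            total += i - nz[0]
    return int(total)
```

    On a path graph given in scrambled order, the Fiedler vector is strictly monotone along the path, so the spectral ordering recovers the banded (tridiagonal) form exactly.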

  6. Ant colony optimization for solving university facility layout problem

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin

    2013-04-01

    The Quadratic Assignment Problem (QAP) is classified as NP-hard. It has been used to model many problems in several areas such as operational research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Travelling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics, and metaheuristic approaches to solve the QAP. The QAP is widely applied to the facility layout problem (FLP). In this paper we use the QAP to model a university facility layout problem. There are 8 facilities that need to be assigned to 8 locations. Hence we have modeled a QAP with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the total product of flows and distances is minimized. Flow is the movement from one facility to another, whereas distance is the distance between the locations of two facilities. The objective of the QAP here is to minimize the total walking (flow × distance) of lecturers from one destination to another.
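
    For an instance of this size (8 facilities, 8 locations, 8! = 40,320 assignments) exhaustive search is still feasible and gives a useful exact baseline against which an ACO solver can be checked. A sketch of the QAP objective and a brute-force solver, with hypothetical flow/distance data:

```python
import itertools

def qap_cost(flow, dist, perm):
    """Total cost of assigning facility i to location perm[i]:
    sum over facility pairs of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def qap_brute_force(flow, dist):
    """Exact QAP solution by enumerating all n! assignments (n <= ~10)."""
    n = len(flow)
    return min(itertools.permutations(range(n)),
               key=lambda p: qap_cost(flow, dist, p))
```

    Beyond n ≈ 10 the factorial growth makes enumeration hopeless, which is exactly why metaheuristics such as ACO are used for larger layouts.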

  7. On Channel-Discontinuity-Constraint Routing in Wireless Networks☆

    PubMed Central

    Sankararaman, Swaminathan; Efrat, Alon; Ramasubramanian, Srinivasan; Agarwal, Pankaj K.

    2011-01-01

    Multi-channel wireless networks are increasingly deployed as infrastructure networks, e.g. in metro areas. Network nodes frequently employ directional antennas to improve spatial throughput. In such networks, between two nodes, it is of interest to compute a path with a channel assignment for the links such that the path and link bandwidths are the same. This is achieved when any two consecutive links are assigned different channels, termed as “Channel-Discontinuity-Constraint” (CDC). CDC-paths are also useful in TDMA systems, where, preferably, consecutive links are assigned different time-slots. In the first part of this paper, we develop a t-spanner for CDC-paths using spatial properties; a sub-network containing O(n/θ) links, for any θ > 0, such that CDC-paths increase in cost by at most a factor t = (1 − 2 sin(θ/2))⁻². We propose a novel distributed algorithm to compute the spanner using an expected number of O(n log n) fixed-size messages. In the second part, we present a distributed algorithm to find minimum-cost CDC-paths between two nodes using O(n²) fixed-size messages, by developing an extension of Edmonds’ algorithm for minimum-cost perfect matching. In a centralized implementation, our algorithm runs in O(n²) time improving the previous best algorithm which requires O(n³) running time. Moreover, this running time improves to O(n/θ) when used in conjunction with the spanner developed. PMID:24443646

  8. Tuning and performance evaluation of PID controller for superheater steam temperature control of 200 MW boiler using gain phase assignment algorithm

    NASA Astrophysics Data System (ADS)

    Begum, A. Yasmine; Gireesh, N.

    2018-04-01

    In a superheater, steam temperature is controlled in a cascade control loop. The cascade control loop consists of PI and PID controllers. To improve superheater steam temperature control, the controllers' gains in the cascade control loop have to be tuned efficiently. The mathematical model of the superheater is derived from sets of nonlinear partial differential equations. The tuning methods taken for study here are designed for a first-order-plus-time-delay transfer function model. Hence, from the dynamical model of the superheater, a FOPTD model is derived using the frequency response method. Then, using the Chien-Hrones-Reswick tuning algorithm and the gain-phase assignment algorithm, optimum controller gains have been found based on the least value of the integral of time-weighted absolute error.

  9. TH-CD-206-07: Determination of Patient-Specific Myocardial Mass at Risk Using Computed Tomography Angiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubbard, L; Ziemer, B; Malkasian, S

    Purpose: To evaluate the accuracy of a patient-specific coronary perfusion territory assignment algorithm that uses CT angiography (CTA) and a minimum-cost-path approach to assign coronary perfusion territories on a voxel-by-voxel basis for determination of myocardial mass at risk. Methods: Intravenous (IV) contrast (370 mg/mL iodine, 25 mL, 7 mL/s) was injected centrally into five swine (35–45 kg) and CTA was performed using a 320-slice CT scanner at 100 kVp and 200 mA. Additionally, a 4F catheter was advanced into the left anterior descending (LAD), left circumflex (LCX), and right coronary artery (RCA) and contrast (30 mg/mL iodine, 10 mL, 1.5 mL/s) was directly injected into each coronary artery for isolation of reference coronary perfusion territories. Semiautomatic myocardial segmentation of the CTA data was then performed and the centerlines of the LAD, LCX, and RCA were digitally extracted through image processing. Individual coronary perfusion territories were then assigned using a minimum-cost-path approach, and were quantitatively compared to the reference coronary perfusion territories. Results: The results of the coronary perfusion territory assignment algorithm were in good agreement with the reference coronary perfusion territories. The average volumetric assignment error from mitral orifice to apex was 5.5 ± 1.1%, corresponding to 2.1 ± 0.7 grams of myocardial mass misassigned for each coronary perfusion territory. Conclusion: The results indicate that accurate coronary perfusion territory assignment is possible on a voxel-by-voxel basis using CTA data and an assignment algorithm based on a minimum-cost-path approach. Thus, the technique can potentially be used to accurately determine patient-specific myocardial mass at risk distal to a coronary stenosis, improving coronary lesion assessment and treatment. Conflict of Interest (only if applicable): Grant funding from Toshiba America Medical Systems.
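
    Minimum-cost-path territory assignment of this kind can be sketched as a multi-source Dijkstra over the segmented myocardium: each voxel receives the label of the coronary centerline reachable at least cumulative cost. The schematic below works on a 2-D grid with an invented per-voxel cost; the actual method operates on 3-D CTA voxel data with its own cost definition.

```python
import heapq

def assign_territories(cost, sources):
    """Label each grid voxel with the nearest seed by minimum path cost.

    cost: 2-D list of per-voxel traversal costs (> 0).
    sources: {label: (row, col)} seed voxels, e.g. artery centerline points.
    Multi-source Dijkstra: every voxel ends up labeled with the seed it can
    be reached from most cheaply.
    """
    R, C = len(cost), len(cost[0])
    dist = [[float("inf")] * C for _ in range(R)]
    label = [[None] * C for _ in range(R)]
    heap = []
    for lab, (r, c) in sources.items():
        dist[r][c] = 0.0
        label[r][c] = lab
        heapq.heappush(heap, (0.0, r, c, lab))
    while heap:
        d, r, c, lab = heapq.heappop(heap)
        if d > dist[r][c]:
            continue                      # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < R and 0 <= nc < C:
                nd = d + cost[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    label[nr][nc] = lab
                    heapq.heappush(heap, (nd, nr, nc, lab))
    return label
```

    Running all seeds in one priority queue means each voxel is settled once, so the labeling costs the same as a single Dijkstra pass over the volume.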

  10. A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem

    NASA Astrophysics Data System (ADS)

    Jäger, Gerold; Zhang, Weixiong

    The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made on the HCP in undirected graphs, little is known for the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms including an algorithm based on the award-winning Concorde TSP algorithm.
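
    The DHCP/AP relationship the authors exploit can be stated simply: a directed Hamiltonian cycle is a feasible assignment of a successor to every vertex (a permutation along graph arcs) whose cycle structure is a single n-cycle. The brute-force illustration below enumerates permutations directly; the actual algorithm instead solves the AP efficiently and uses SAT to exclude subtours.

```python
import itertools

def directed_hamiltonian_cycle(adj):
    """Find a directed Hamiltonian cycle via the Assignment Problem view.

    adj[v]: set of vertices reachable by an arc from v.
    We look for a successor permutation succ with succ[v] an arc of the
    graph, whose cycle decomposition is one n-cycle. Tiny graphs only.
    """
    n = len(adj)
    for succ in itertools.permutations(range(n)):
        if any(succ[v] not in adj[v] for v in range(n)):
            continue                       # not an assignment along graph arcs
        # follow the permutation from vertex 0; a single n-cycle visits all
        seen, v = set(), 0
        while v not in seen:
            seen.add(v)
            v = succ[v]
        if len(seen) == n:
            return succ
    return None
```

    An AP solver returns a minimum-cost permutation in polynomial time, but it may decompose into several short cycles; detecting and forbidding those subtours is precisely where the SAT machinery comes in.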

  11. A learning approach to the bandwidth multicolouring problem

    NASA Astrophysics Data System (ADS)

    Akbari Torkestani, Javad

    2016-05-01

    In this article, a generalisation of the vertex colouring problem known as the bandwidth multicolouring problem (BMCP), in which a set of colours is assigned to each vertex such that the difference between the colours assigned to each vertex and its neighbours is no less than a predefined threshold, is considered. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard in graph theory, and so a large number of approximation solutions, as well as exact algorithms, have been proposed to solve it. In this article, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. We show, for the first proposed algorithm, that by choosing a proper learning rate, the algorithm finds the optimal solution with a probability close enough to unity. Moreover, we compute the worst-case time complexity of the first algorithm for finding a 1/(1-ɛ) optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of the algorithm and the colour set size (colouring optimality) can be made, again by a proper choice of the learning rate. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, and so it is a scalable algorithm for large graphs. The second proposed algorithm is compared with some well-known colouring algorithms and the results show the efficiency of the proposed algorithm in terms of the colour set size and running time.

  12. Intermediary Variables and Algorithm Parameters for an Electronic Algorithm for Intravenous Insulin Infusion

    PubMed Central

    Braithwaite, Susan S.; Godara, Hemant; Song, Julie; Cairns, Bruce A.; Jones, Samuel W.; Umpierrez, Guillermo E.

    2009-01-01

    Background Algorithms for intravenous insulin infusion may assign the infusion rate (IR) by a two-step process. First, the previous insulin infusion rate (IRprevious) and the rate of change of blood glucose (BG) from the previous iteration of the algorithm are used to estimate the maintenance rate (MR) of insulin infusion. Second, the insulin IR for the next iteration (IRnext) is assigned to be commensurate with the MR and the distance of the current blood glucose (BGcurrent) from target. With use of a specific set of algorithm parameter values, a family of iso-MR curves is created, each giving IR as a function of MR and BG. Method To test the feasibility of estimating MR from the IRprevious and the previous rate of change of BG, historical hyperglycemic data points were used to compute the “maintenance rate cross step next estimate” (MRcsne). Historical cases had been treated with intravenous insulin infusion using a tabular protocol that estimated MR according to column-change rules. The mean IR on historical stable intervals (MRtrue), an estimate of the biologic value of MR, was compared to MRcsne during the hyperglycemic iteration immediately preceding the stable interval. Hypothetically calculated MRcsne-dependent IRnext was compared to IRnext assigned historically. An expanded theory of an algorithm is developed mathematically. Practical recommendations for computerization are proposed. Results The MRtrue determined on each of 30 stable intervals and the MRcsne during the immediately preceding hyperglycemic iteration differed, having medians with interquartile ranges 2.7 (1.2–3.7) and 3.2 (1.5–4.6) units/h, respectively. However, these estimates of MR were strongly correlated (R² = 0.88).
During hyperglycemia at 941 time points the IRnext assigned historically and the hypothetically calculated MRcsne-dependent IRnext differed, having medians with interquartile ranges 4.0 (3.0–6.0) and 4.6 (3.0–6.8) units/h, respectively, but these paired values again were correlated (R² = 0.87). This article describes a programmable algorithm for intravenous insulin infusion. The fundamental equation of the algorithm gives the relationship among IR; the biologic parameter MR; and two variables expressing an instantaneous rate of change of BG, one of which must be zero at any given point in time and the other positive, negative, or zero, namely the rate of change of BG from below target (rate of ascent) and the rate of change of BG from above target (rate of descent). In addition to user-definable parameters, three special algorithm parameters discoverable in nature are described: the maximum rate of the spontaneous ascent of blood glucose during nonhypoglycemia, the glucose per daily dose of insulin exogenously mediated, and the MR at given patient time points. User-assignable parameters will facilitate adaptation to different patient populations. Conclusions An algorithm is described that estimates MR prior to the attainment of euglycemia and computes MR-dependent values for IRnext. Design features address glycemic variability, promote safety with respect to hypoglycemia, and define a method for specifying glycemic targets that are allowed to differ according to patient condition. PMID:20144334

  13. Post-processing techniques to enhance reliability of assignment algorithm based performance measures : [technical summary].

    DOT National Transportation Integrated Search

    2011-01-01

    Travel demand modeling plays a key role in the transportation system planning and evaluation process. The four-step sequential travel demand model is the most widely used technique in practice. Traffic assignment is the key step in the conventional f...

  14. Generalised Assignment Matrix Methodology in Linear Programming

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2012-01-01

    Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…

  15. Algorithm Analysis of the DSM-5 Alcohol Withdrawal Symptom.

    PubMed

    Martin, Christopher S; Vergés, Alvaro; Langenbucher, James W; Littlefield, Andrew; Chung, Tammy; Clark, Duncan B; Sher, Kenneth J

    2018-06-01

    Alcohol withdrawal (AW) is an important clinical and diagnostic feature of alcohol dependence. AW has been found to predict a worsened course of illness in clinical samples, but in some community studies, AW endorsement rates are strikingly high, suggesting false-positive symptom assignments. Little research has examined the validity of the DSM-5 algorithm for AW, which requires either the presence of at least 2 of 8 subcriteria (i.e., autonomic hyperactivity, tremulousness, insomnia, nausea, hallucinations, psychomotor agitation, anxiety, and grand mal seizures), or, the use of alcohol to avoid or relieve these symptoms. We used item and algorithm analyses of data from waves 1 and 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (current drinkers, n = 26,946 at wave 1) to study the validity of DSM-5 AW as operationalized by the Alcohol Use Disorder and Associated Disabilities Interview Schedule-DSM-IV (AUDADIS-IV). A substantial proportion of individuals given the AW symptom reported only modest to moderate levels of alcohol use and alcohol problems. Alternative AW algorithms were superior to DSM-5 in terms of levels of alcohol use and alcohol problem severity among those with AW, group difference effect sizes, and predictive validity at a 3-year follow-up. The superior alternative algorithms included those that excluded the nausea subcriterion; required withdrawal-related distress or impairment; increased the AW subcriteria threshold from 2 to 3 items; and required tremulousness for AW symptom assignment. The results indicate that the DSM-5 definition of AW, as assessed by the AUDADIS-IV, has low specificity. This shortcoming can be addressed by making the algorithm for symptom assignment more stringent. Copyright © 2018 by the Research Society on Alcoholism.
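
    The DSM-5 decision rule and the stricter variants the authors compare can be written as simple predicates. This is an illustrative paraphrase of the rules as described in the abstract, not the AUDADIS-IV instrument itself, and the stricter function bundles several of the paper's separate variants into one parameterized sketch.

```python
def aw_symptom_dsm5(subcriteria_met, relief_drinking):
    """DSM-5 alcohol withdrawal symptom assignment as described:
    positive if at least 2 of the 8 withdrawal subcriteria are endorsed,
    or if alcohol is used to avoid/relieve withdrawal symptoms."""
    return len(subcriteria_met) >= 2 or relief_drinking

def aw_symptom_stricter(subcriteria_met, relief_drinking, threshold=3,
                        require_tremulousness=False):
    """Sketch of the stricter alternatives discussed: raise the subcriteria
    threshold and/or require tremulousness. (Illustrative only; the paper
    evaluates each alternative algorithm separately.)"""
    if require_tremulousness and "tremulousness" not in subcriteria_met:
        return False
    return len(subcriteria_met) >= threshold or relief_drinking
```

    The point of the comparison is specificity: tightening the predicate drops respondents with only modest drinking histories, which is what improved the algorithms' predictive validity in the study.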

  16. Path Planning Algorithms for the Adaptive Sensor Fleet

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Hosler, Jeff

    2005-01-01

    The Adaptive Sensor Fleet (ASF) is a general purpose fleet management and planning system being developed by NASA in coordination with NOAA. The current mission of ASF is to provide the capability for autonomous cooperative survey and sampling of dynamic oceanographic phenomena such as current systems and algae blooms. Each ASF vessel is a software model that represents a real world platform that carries a variety of sensors. The OASIS platform will provide the first physical vessel, outfitted with the systems and payloads necessary to execute the oceanographic observations described in this paper. The ASF architecture is being designed for extensibility to accommodate heterogeneous fleet elements, and is not limited to using the OASIS platform to acquire data. This paper describes the path planning algorithms developed for the acquisition phase of a typical ASF task. Given a polygonal target region to be surveyed, the region is subdivided according to the number of vessels in the fleet. The subdivision algorithm seeks a solution in which all subregions have equal area and minimum mean radius. Once the subregions are defined, a dynamic programming method is used to find a minimum-time path for each vessel from its initial position to its assigned region. This path plan includes the effects of water currents as well as avoidance of known obstacles. A fleet-level planning algorithm then shuffles the individual vessel assignments to find the overall solution which puts all vessels in their assigned regions in the minimum time. This shuffle algorithm may be described as a process of elimination on the sorted list of permutations of a cost matrix. All these path planning algorithms are facilitated by discretizing the region of interest onto a hexagonal tiling.
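
    The fleet-level shuffle can be read as a bottleneck assignment: choose the vessel-to-region pairing that minimizes the latest arrival time. The brute-force sketch below only illustrates that objective (the ASF algorithm works by elimination on a sorted permutation list of the cost matrix); the travel times are hypothetical.

```python
import itertools

def best_region_assignment(travel_time):
    """Pick which vessel goes to which subregion so the fleet is fully
    deployed as early as possible: minimize the latest arrival time
    (a bottleneck assignment, solved here by enumerating permutations).

    travel_time[v][r]: time for vessel v to reach subregion r.
    Returns a tuple p with p[v] the region assigned to vessel v.
    """
    n = len(travel_time)
    return min(itertools.permutations(range(n)),
               key=lambda p: max(travel_time[v][p[v]] for v in range(n)))
```

    For the handful of vessels in a typical fleet, n! permutations are few enough that even this exhaustive form is instantaneous.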

  17. An ensemble approach to protein fold classification by integration of template-based assignment and support vector machine classifier.

    PubMed

    Xia, Jiaqi; Peng, Zhenling; Qi, Dawei; Mu, Hongbo; Yang, Jianyi

    2017-03-15

    Protein fold classification is a critical step in protein structure prediction. There are two possible ways to classify protein folds. One is through template-based fold assignment and the other is ab-initio prediction using machine learning algorithms. Combining both solutions to improve the prediction accuracy had not been explored before. We developed two algorithms, HH-fold and SVM-fold, for protein fold classification. HH-fold is a template-based fold assignment algorithm using the HHsearch program. SVM-fold is a support vector machine-based ab-initio classification algorithm, in which a comprehensive set of features is extracted from three complementary sequence profiles. These two algorithms are then combined, resulting in the ensemble approach TA-fold. We performed a comprehensive assessment of the proposed methods by comparing with ab-initio methods and template-based threading methods on six benchmark datasets. An accuracy of 0.799 was achieved by TA-fold on the DD dataset that consists of proteins from 27 folds. This represents an improvement of 5.4-11.7% over ab-initio methods. After updating this dataset to include more proteins in the same folds, the accuracy increased to 0.971. In addition, TA-fold achieved >0.9 accuracy on a large dataset consisting of 6451 proteins from 184 folds. Experiments on the LE dataset show that TA-fold consistently outperforms other threading methods at the family, superfamily and fold levels. The success of TA-fold is attributed to the combination of template-based fold assignment and ab-initio classification using features from complementary sequence profiles that contain rich evolution information. http://yanglab.nankai.edu.cn/TA-fold/. yangjy@nankai.edu.cn or mhb-506@163.com. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  18. Proceedings of the Annual Academic Apparel Research Conference on Advanced Apparel Manufacturing Technology Demonstration (1st) Held in Philadelphia, Pennsylvania on 14-16 February 1990. Volume 2

    DTIC Science & Technology

    1990-02-16

    …Philadelphia, PA, by Dr. Leo E. Hanifin, Director, Center for Manufacturing Productivity and Technology Transfer, and Co-Principal Investigator. Background: In… …is coordinated by Dr. Leo E. Hanifin and involves an additional four graduate students, two programmers, one engineer and one technician. In addition… …the transfer; bit5 - whether the transfer is a load or unload; bit4 - which side of the AGV to perform the transfer; bit3 through bit0 - the number of…

  19. High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems

    NASA Astrophysics Data System (ADS)

    Bose, S. K.

    1980-12-01

    Demand assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic but which have high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. The performance of these schemes is considered in the context of a multiple point communication system carrying fixed length messages between geographically distributed (ground) user terminals which are linked via a satellite repeater. Channel assignments are done following a BCC queueing discipline by a (ground) central controller on the basis of requests correctly received over a collision type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand assigned mode. Schemes B and C operate all their forward message channels in a demand assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.

  20. Formularity: Software for Automated Formula Assignment of Natural and Other Organic Matter from Ultrahigh-Resolution Mass Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolić, Nikola; Liu, Yina; Liyu, Andrey

    Ultrahigh-resolution mass spectrometry, such as Fourier transform ion-cyclotron resonance mass spectrometry (FT-ICR MS), can resolve thousands of molecular ions in complex organic matrices. A Compound Identification Algorithm (CIA) was previously developed for automated elemental formula assignment for natural organic matter (NOM). In this work we describe a user-friendly interface for CIA, titled Formularity, which includes additional functionality to search for formulas based on an Isotopic Pattern Algorithm (IPA). While CIA assigns elemental formulas for compounds containing C, H, O, N, S, and P, IPA is capable of assigning formulas for compounds containing other elements. We used halogenated organic compounds (HOC), a chemical class that is ubiquitous in nature as well as in anthropogenic systems, as an example to demonstrate the capability of Formularity with IPA. A HOC standard mix was used to evaluate the identification confidence of IPA. HOC spikes in NOM and tap water were used to assess HOC identification in natural and anthropogenic matrices. Strategies for reconciliation of CIA and IPA assignments are discussed. Software and sample databases with documentation are freely available from the PNNL OMICS software repository https://omics.pnl.gov/software/formularity.

  1. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    PubMed Central

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of species migration to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique that overcomes the weakness of the classical BBO algorithm for QAP by replacing the mutation operator with a tabu search procedure. Our experiments using benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them. PMID:26819585
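
    The core idea of replacing mutation with tabu search can be sketched as a pairwise-swap descent on the QAP objective with a short tabu list of recent swaps; this is a minimal illustration, not the authors' BBO hybrid, and all names and defaults are assumptions:

```python
from itertools import combinations

def qap_cost(perm, flow, dist):
    """QAP objective: sum over i, j of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabu_swap_search(perm, flow, dist, iters=100, tenure=5):
    """Best-improvement pairwise-swap search with a FIFO tabu list."""
    best = list(perm)
    best_cost = qap_cost(best, flow, dist)
    cur = list(best)
    tabu = []
    for _ in range(iters):
        move, move_cost = None, float("inf")
        for i, j in combinations(range(len(cur)), 2):
            if (i, j) in tabu:
                continue
            cur[i], cur[j] = cur[j], cur[i]       # try the swap
            c = qap_cost(cur, flow, dist)
            cur[i], cur[j] = cur[j], cur[i]       # undo it
            if c < move_cost:
                move, move_cost = (i, j), c
        if move is None:                          # every move is tabu
            break
        i, j = move
        cur[i], cur[j] = cur[j], cur[i]           # commit best non-tabu move
        tabu.append(move)
        if len(tabu) > tenure:
            tabu.pop(0)
        if move_cost < best_cost:
            best, best_cost = list(cur), move_cost
    return best, best_cost
```

    Unlike random mutation, the tabu move may temporarily worsen the incumbent while the best-found solution is retained, which is the diversification behavior the hybrid relies on.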

  2. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem.

    PubMed

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of species migration to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique that overcomes the weakness of the classical BBO algorithm for QAP by replacing the mutation operator with a tabu search procedure. Our experiments using benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them.

  3. Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli

    2018-01-01

    In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are evolving toward multi-domain, larger-scale architectures. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including selecting the optimal multicast domain sequence and deciding which domains the core nodes should belong to. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks; mixed integer linear programming (MILP) formulations that optimally construct MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvement in network resource occupation and multicast tree setup latency compared with conventional algorithms designed for a single-domain network environment.

  4. School Mathematics Study Group, Unit Number Two. Chapter 3 - Informal Algorithms and Flow Charts. Chapter 4 - Applications and Mathematics Models.

    ERIC Educational Resources Information Center

    Stanford Univ., CA. School Mathematics Study Group.

    This is the second unit of a 15-unit School Mathematics Study Group (SMSG) mathematics text for high school students. Topics presented in the first chapter (Informal Algorithms and Flow Charts) include: changing a flat tire; algorithms, flow charts, and computers; assignment and variables; input and output; using a variable as a counter; decisions…

  5. The practical evaluation of DNA barcode efficacy.

    PubMed

    Spouge, John L; Mariño-Ramírez, Leonardo

    2012-01-01

    This chapter describes a workflow for measuring the efficacy of a barcode in identifying species. First, assemble individual sequence databases corresponding to each barcode marker. A controlled collection of taxonomic data is preferable to GenBank data, because GenBank data can be problematic, particularly when comparing barcodes based on more than one marker. To ensure proper controls when evaluating species identification, specimens not having a sequence in every marker database should be discarded. Second, select a computer algorithm for assigning species to barcode sequences. No algorithm has yet improved notably on assigning a specimen to the species of its nearest neighbor within a barcode database. Because global sequence alignments (e.g., with the Needleman-Wunsch algorithm, or some related algorithm) examine entire barcode sequences, they generally produce better species assignments than local sequence alignments (e.g., with BLAST). No neighboring method (e.g., global sequence similarity, global sequence distance, or evolutionary distance based on a global alignment) has yet shown a notable superiority in identifying species. Finally, "the probability of correct identification" (PCI) provides an appropriate measurement of barcode efficacy. The overall PCI for a data set is the average of the species PCIs, taken over all species in the data set. This chapter states explicitly how to calculate PCI, how to estimate its statistical sampling error, and how to use data on PCR failure to set limits on how much improvements in PCR technology can improve species identification.
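
    The PCI computation described above can be sketched directly: a species PCI is the fraction of that species' specimens correctly identified, and the overall PCI is the unweighted average of the per-species values (a minimal sketch; the data layout is an assumption):

```python
def species_pci(assignments):
    """PCI for one species: fraction of its specimens assigned back to it.

    assignments: list of (true_species, predicted_species) pairs for the
    specimens of a single species.
    """
    correct = sum(1 for truth, predicted in assignments if truth == predicted)
    return correct / len(assignments)

def overall_pci(dataset):
    """Overall PCI: average of species PCIs over all species in the data set.

    dataset: dict mapping species name -> list of (truth, predicted) pairs.
    """
    per_species = [species_pci(pairs) for pairs in dataset.values()]
    return sum(per_species) / len(per_species)
```

    Averaging per species (rather than per specimen) keeps abundant species from dominating the efficacy measure.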

  6. NVR-BIP: Nuclear Vector Replacement using Binary Integer Programming for NMR Structure-Based Assignments.

    PubMed

    Apaydin, Mehmet Serkan; Çatay, Bülent; Patrick, Nicholas; Donald, Bruce R

    2011-05-01

    Nuclear magnetic resonance (NMR) spectroscopy is an important experimental technique that allows one to study protein structure and dynamics in solution. An important bottleneck in NMR protein structure determination is the assignment of NMR peaks to the corresponding nuclei. Structure-based assignment (SBA) aims to solve this problem with the help of a template protein which is homologous to the target and has applications in the study of structure-activity relationship, protein-protein and protein-ligand interactions. We formulate SBA as a linear assignment problem with additional nuclear overhauser effect constraints, which can be solved within nuclear vector replacement's (NVR) framework (Langmead, C., Yan, A., Lilien, R., Wang, L. and Donald, B. (2003) A Polynomial-Time Nuclear Vector Replacement Algorithm for Automated NMR Resonance Assignments. Proc. the 7th Annual Int. Conf. Research in Computational Molecular Biology (RECOMB) , Berlin, Germany, April 10-13, pp. 176-187. ACM Press, New York, NY. J. Comp. Bio. , (2004), 11, pp. 277-298; Langmead, C. and Donald, B. (2004) An expectation/maximization nuclear vector replacement algorithm for automated NMR resonance assignments. J. Biomol. NMR , 29, 111-138). Our approach uses NVR's scoring function and data types and also gives the option of using CH and NH residual dipolar coupling (RDCs), instead of NH RDCs which NVR requires. We test our technique on NVR's data set as well as on four new proteins. Our results are comparable to NVR's assignment accuracy on NVR's test set, but higher on novel proteins. Our approach allows partial assignments. It is also complete and can return the optimum as well as near-optimum assignments. Furthermore, it allows us to analyze the information content of each data type and is easily extendable to accept new forms of input data, such as additional RDCs.
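
    Stripped of the NOE constraints and RDC scoring, the underlying linear assignment problem can be illustrated with a toy exhaustive solver; the real NVR-BIP formulation uses binary integer programming, and the names and cost layout below are assumptions:

```python
from itertools import permutations

def best_assignment(score):
    """Exhaustively solve a small linear assignment problem.

    score[i][j] = cost of assigning peak i to residue j (lower is better).
    Returns (assignment, total_cost), where assignment[i] is the residue
    given to peak i.  Only viable for toy sizes; practical solvers use
    Hungarian-style or integer-programming methods.
    """
    n = len(score)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(score[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost
```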

  7. Robust vector quantization for noisy channels

    NASA Technical Reports Server (NTRS)

    Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.

    1988-01-01

    The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.

  8. Super-channel oriented routing, spectrum and core assignment under crosstalk limit in spatial division multiplexing elastic optical networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Zhu, Ye; Wang, Chunhui; Yu, Xiaosong; Liu, Chuan; Liu, Binglin; Zhang, Jie

    2017-07-01

    With capacity in optical networks increasing through spatial division multiplexing (SDM) technology, spatial division multiplexing elastic optical networks (SDM-EONs) are attracting much attention from both academia and industry. The super-channel is an important type of service provisioning in SDM-EONs. This paper focuses on the issue of super-channel construction in SDM-EONs. A mixed super-channel oriented routing, spectrum and core assignment (MS-RSCA) algorithm is proposed for SDM-EONs that takes inter-core crosstalk into account. Simulation results show that MS-RSCA can improve spectrum resource utilization and reduce blocking probability significantly compared with the baseline RSCA algorithms.
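
    One ingredient of any RSCA scheme, first-fit spectrum and core assignment, can be sketched as below; routing and the crosstalk limit that MS-RSCA enforces are omitted, and all names are assumptions rather than the paper's algorithm:

```python
def first_fit_core_spectrum(free, demand):
    """First-fit spectrum and core assignment for a super-channel demand.

    free: per-core boolean lists, free[c][s] is True when frequency slot s
    on core c is unoccupied.  demand: number of contiguous slots needed.
    Returns (core, start_slot) of the first contiguous free block found,
    scanning cores in order, or None when the demand would be blocked.
    """
    for core, slots in enumerate(free):
        run = 0                          # length of current free run
        for s, is_free in enumerate(slots):
            run = run + 1 if is_free else 0
            if run == demand:
                return core, s - demand + 1
    return None
```

    The contiguity requirement is what distinguishes elastic-optical spectrum assignment from plain wavelength assignment.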

  9. A group-based tasks allocation algorithm for the optimization of long leave opportunities in academic departments

    NASA Astrophysics Data System (ADS)

    Eyono Obono, S. D.; Basak, Sujit Kumar

    2011-12-01

    The general formulation of the assignment problem consists in the optimal allocation of a given set of tasks to a workforce. This problem is covered by existing literature for different domains such as distributed databases, distributed systems, transportation, packet radio networks, IT outsourcing, and teaching allocation. This paper presents a new version of the assignment problem for the allocation of academic tasks to staff members in departments with long leave opportunities. It describes a workload allocation scheme and its algorithm for allocating an equitable number of tasks in academic departments where long leaves are necessary.

  10. Investigation of correlation classification techniques

    NASA Technical Reports Server (NTRS)

    Haskell, R. E.

    1975-01-01

    A two-step classification algorithm for processing multispectral scanner data was developed and tested. The first step is a single pass clustering algorithm that assigns each pixel, based on its spectral signature, to a particular cluster. The output of that step is a cluster tape in which a single integer is associated with each pixel. The cluster tape is used as the input to the second step, where ground truth information is used to classify each cluster using an iterative method of potentials. Once the clusters have been assigned to classes the cluster tape is read pixel-by-pixel and an output tape is produced in which each pixel is assigned to its proper class. In addition to the digital classification programs, a method of using correlation clustering to process multispectral scanner data in real time by means of an interactive color video display is also described.
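
    The first step, single-pass clustering on spectral signatures, can be sketched as a "leader" algorithm: each pixel joins the first cluster whose running centroid is within a distance threshold, otherwise it founds a new cluster. This is an illustrative sketch under assumed names, not the original program:

```python
def single_pass_cluster(pixels, threshold):
    """Single-pass ('leader') clustering of pixel spectral signatures.

    pixels: sequence of spectral vectors (tuples of band values).
    Each pixel is assigned to the first cluster whose centroid lies within
    `threshold` (Euclidean distance); otherwise it starts a new cluster.
    Returns one integer label per pixel, as on the cluster tape.
    """
    centroids, counts, labels = [], [], []
    for pix in pixels:
        for k, c in enumerate(centroids):
            if sum((a - b) ** 2 for a, b in zip(pix, c)) ** 0.5 <= threshold:
                counts[k] += 1
                # fold the pixel into the running centroid (incremental mean)
                centroids[k] = [ci + (pi - ci) / counts[k]
                                for pi, ci in zip(pix, c)]
                labels.append(k)
                break
        else:
            centroids.append(list(pix))
            counts.append(1)
            labels.append(len(centroids) - 1)
    return labels
```

    A single pass over the tape is what makes the method cheap enough for scanner data; the price is sensitivity to pixel order.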

  11. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.

  12. Assigning categorical information to Japanese medical terms using MeSH and MEDLINE.

    PubMed

    Onogi, Yuzo

    2007-01-01

    This paper reports on the assignment of MeSH (Medical Subject Headings) categories to Japanese terms in an English-Japanese dictionary using the titles and abstracts of articles indexed in MEDLINE. In a previous study, 30,000 of 80,000 terms in the dictionary were mapped to MeSH terms by normalized comparison. It was reasoned that if the remaining dictionary terms appeared in MEDLINE-indexed articles that are indexed using MeSH terms, then relevancies between the dictionary terms and MeSH terms could be calculated, and thus MeSH categories assigned. This study compares two approaches for calculating the weight matrix: one is the TF*IDF method and the other uses the inner product of two weight matrices. About 20,000 additional dictionary terms were identified in MEDLINE-indexed articles published between 2000 and 2004. The precision and recall of these algorithms were evaluated separately for MeSH terms and non-MeSH terms. Unfortunately, the precision and recall were not good, but this method will help with manual assignment of MeSH categories to dictionary terms.
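
    The TF*IDF approach mentioned above can be sketched as follows, assuming the common tf · log(N/df) weighting; the paper's exact variant and matrix construction may differ:

```python
import math

def tfidf_weights(docs):
    """TF*IDF weights for terms in a small document collection.

    docs: list of token lists (e.g. tokenized MEDLINE titles/abstracts).
    Returns one {term: weight} dict per document, with
    weight = tf * log(N / df), where tf is the raw count of the term in
    the document, N the number of documents, and df the number of
    documents containing the term.
    """
    n_docs = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):          # count each term once per document
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        tf = {}
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        weights.append({t: c * math.log(n_docs / df[t])
                        for t, c in tf.items()})
    return weights
```

    A term appearing in every document gets weight zero, which is the behavior that lets IDF suppress uninformative terms.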

  13. A demand assignment control in international business satellite communications network

    NASA Astrophysics Data System (ADS)

    Nohara, Mitsuo; Takeuchi, Yoshio; Takahata, Fumio; Hirata, Yasuo

    An experimental system is being developed for use in an international business satellite (IBS) communications network based on demand-assignment (DA) and TDMA techniques. This paper discusses its system design, in particular from the viewpoints of the network configuration, the DA control, and the satellite channel-assignment algorithm. A satellite channel configuration is also presented, along with a tradeoff study on transmission rate, HPA output power, satellite resource efficiency, service quality, and so on.

  14. The use of irradiated corneal patch grafts in pediatric Ahmed drainage implant surgery.

    PubMed

    Nolan, Kaitlyn Wallace; Lucas, Jordyn; Abbasian, Javaneh

    2015-10-01

    To describe the use of irradiated cornea for scleral reinforcement in Ahmed glaucoma valve drainage implant (AGV) devices in children. The medical records of patients <18 years of age who underwent AGV surgery with irradiated cornea as scleral reinforcement were reviewed retrospectively. The primary outcome measure was erosion of the drainage tube through the corneal patch graft. Secondary outcome measures included other major complications: persistent inflammation, wound dehiscence, transmission of infectious disease, endophthalmitis, and tube/plate self-explantation. A total of 25 procedures (20 patients) met inclusion criteria. Average patient age was 70 months (range, 2 months to 17 years). Mean follow-up was 24.8 months (range, 6 months to 6.2 years). One tube experienced conjunctival exposure through two separate corneal grafts (2/25 cases [8%]), sequentially in the same eye. The first event occurred at month 3.5 after primary implantation of the tube shunt; the second erosion occurred following revision of the existing implant at month 1.5 postoperatively. There were 2 cases of auto-explantation, 2 cases of wound dehiscence, and 1 case of persistent inflammation. There were no cases of endophthalmitis or other infections. To our knowledge, this is the first report describing the use of corneal patch grafts in children. Irradiated cornea improves cosmesis and enhances visualization of the tube. The risk of tube exposure was found to be low and comparable to other materials used as a patch graft. Copyright © 2015 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  15. Contact replacement for NMR resonance assignment.

    PubMed

    Xiong, Fei; Pandurangan, Gopal; Bailey-Kellogg, Chris

    2008-07-01

    Complementing its traditional role in structural studies of proteins, nuclear magnetic resonance (NMR) spectroscopy is playing an increasingly important role in functional studies. NMR dynamics experiments characterize motions involved in target recognition, ligand binding, etc., while NMR chemical shift perturbation experiments identify and localize protein-protein and protein-ligand interactions. The key bottleneck in these studies is to determine the backbone resonance assignment, which allows spectral peaks to be mapped to specific atoms. This article develops a novel approach to address that bottleneck, exploiting an available X-ray structure or homology model to assign the entire backbone from a set of relatively fast and cheap NMR experiments. We formulate contact replacement for resonance assignment as the problem of computing correspondences between a contact graph representing the structure and an NMR graph representing the data; the NMR graph is a significantly corrupted, ambiguous version of the contact graph. We first show that by combining connectivity and amino acid type information, and exploiting the random structure of the noise, one can provably determine unique correspondences in polynomial time with high probability, even in the presence of significant noise (a constant number of noisy edges per vertex). We then detail an efficient randomized algorithm and show that, over a variety of experimental and synthetic datasets, it is robust to typical levels of structural variation (1-2 AA), noise (250-600%) and missing data (10-40%). Our algorithm achieves very good overall assignment accuracy, above 80% in alpha-helices, 70% in beta-sheets and 60% in loop regions. Our contact replacement algorithm is implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.

  16. A new method for solving routing and wavelength assignment problems under inaccurate routing information in optical networks with conversion capability

    NASA Astrophysics Data System (ADS)

    Luo, Yanting; Zhang, Yongjun; Gu, Wanyi

    2009-11-01

    In large dynamic networks it is extremely difficult to maintain accurate routing information on all network nodes. Existing studies have illustrated the impact of imprecise state information on the performance of dynamic routing and wavelength assignment (RWA) algorithms. An algorithm called Bypass Based Optical Routing (BBOR), proposed by Xavier Masip-Bruin et al., can reduce the effects of inaccurate routing information in networks operating under the wavelength-continuity constraint. They later extended the BBOR mechanism (for convenience called the EBBOR mechanism below) to networks with sparse and limited wavelength conversion. However, EBBOR only considers the characteristics of wavelength conversion in the step of computing the bypass-paths, so its performance may decline as the degree of wavelength translation increases (this concept is explained again in the introduction). We demonstrate the issue through theoretical analysis and introduce a novel algorithm which modifies both the lightpath selection and the bypass-path computation relative to the EBBOR algorithm. Simulations show that the Modified EBBOR (MEBBOR) algorithm improves the blocking performance significantly in optical networks with conversion capability.

  17. Future aircraft networks and schedules

    NASA Astrophysics Data System (ADS)

    Shu, Yan

    2011-07-01

    Because of the importance of air transportation scheduling, the emergence of small aircraft and the vision of future fuel-efficient aircraft, this thesis has focused on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services. This thesis proposes a three-step approach to the design of aircraft schedules and networks from scratch under the model. In the first step, both a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time is constructed. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model created in this thesis are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving these instances. The instances of both the rough fleet assignment model and the timetable model created in this thesis are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving these instances. 
Based on these solution algorithms, this dissertation also presents computational results for these large-scale instances. To validate the models and solution algorithms developed, this thesis also compares the daily flight schedules that it designs with the schedules of existing airlines. Furthermore, it creates instances that represent different economic and fuel-price conditions and derives schedules under these different conditions. In addition, it discusses the implications of using new aircraft in future flight schedules. Finally, future research in three areas---model, computational method, and simulation for validation---is proposed.

  18. Manycast routing, modulation level and spectrum assignment over elastic optical networks

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Zhao, Yang; Chen, Xue; Wang, Lei; Zhang, Min; Zhang, Jie; Ji, Yuefeng; Wang, Huitao; Wang, Taili

    2017-07-01

    Manycast is a point-to-multipoint transmission framework that requires only a subset of destination nodes to be successfully reached. It is particularly applicable for dealing with large amounts of data simultaneously in bandwidth-hungry, dynamic and cloud-based applications. As traffic in these applications increases rapidly, elastic optical networks (EONs) can be relied on to achieve high-throughput manycast. With their finer spectrum granularity, EONs allow flexible access to network spectrum and can efficiently provide the exact spectrum resources that demands require. In this paper, we focus on the manycast routing, modulation level and spectrum assignment (MA-RMLSA) problem in EONs. Both EON planning with static manycast traffic and EON provisioning with dynamic manycast traffic are investigated. An integer linear programming (ILP) model is formulated for the MA-RMLSA problem in the static manycast scenario. A corresponding heuristic, the manycast routing, modulation level and spectrum assignment genetic algorithm (MA-RMLSA-GA), is then proposed for both static and dynamic manycast scenarios. MA-RMLSA-GA jointly optimizes destination node selection, routing light-tree construction, modulation level allocation and spectrum resource assignment, achieving an effective improvement in network performance. Simulation results reveal that the MA-RMLSA strategies offered by MA-RMLSA-GA differ only slightly from the optimal solutions provided by the ILP model in the static scenario. Moreover, the results demonstrate that MA-RMLSA-GA realizes a highly efficient MA-RMLSA strategy with the lowest blocking probability in the dynamic scenario compared with benchmark algorithms.

  19. On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment

    PubMed Central

    Alonso-Mora, Javier; Samaranayake, Samitha; Wallar, Alex; Frazzoli, Emilio; Rus, Daniela

    2017-01-01

    Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems. PMID:28049820
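
    The "greedy assignment, then constrained optimization" idea can be caricatured on a square trip-vehicle cost matrix: assign each trip greedily to the cheapest free vehicle, then swap pairs of assignments while that lowers total cost. This is a toy sketch under assumed names, not the paper's anytime-optimal algorithm:

```python
def greedy_then_swap(cost):
    """Greedy trip-to-vehicle assignment followed by pairwise-swap improvement.

    cost[v][t] = cost of serving trip t with vehicle v (e.g. added travel
    delay).  Assumes a square matrix (one trip per vehicle).  Returns
    (assign, total) where assign[t] is the vehicle serving trip t.
    """
    n = len(cost)
    assign = [-1] * n
    used = set()
    for t in range(n):                       # greedy pass
        v = min((v for v in range(n) if v not in used),
                key=lambda v: cost[v][t])
        assign[t] = v
        used.add(v)
    improved = True
    while improved:                          # improvement pass
        improved = False
        for a in range(n):
            for b in range(a + 1, n):
                before = cost[assign[a]][a] + cost[assign[b]][b]
                after = cost[assign[b]][a] + cost[assign[a]][b]
                if after < before:           # swapping the two vehicles helps
                    assign[a], assign[b] = assign[b], assign[a]
                    improved = True
    return assign, sum(cost[assign[t]][t] for t in range(n))
```

    The greedy start gives an immediate feasible solution; the swap loop mimics, in miniature, the constrained optimization that improves it over time.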

  20. On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment.

    PubMed

    Alonso-Mora, Javier; Samaranayake, Samitha; Wallar, Alex; Frazzoli, Emilio; Rus, Daniela

    2017-01-17

    Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.

  1. Knowledge-Based Scheduling of Arrival Aircraft in the Terminal Area

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K. J.; Davis, T.; Erzberger, H.; Lev-Ram, Israel; Bergh, Christopher P.

    1995-01-01

    A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, as well as workload reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper describes the scheduling algorithms, gives examples of their use, and presents data regarding their potential benefits to the air traffic system.
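
    A much-reduced stand-in for such a scheduler is first-come-first-served landing-time assignment with a minimum separation; the knowledge base described above replaces exactly this kind of naive rule, and all names here are assumptions:

```python
def assign_landing_times(etas, separation):
    """FCFS landing-time assignment with a minimum separation.

    etas: estimated times of arrival (any order, assumed distinct).
    Aircraft are sequenced by ETA; each is assigned the later of its ETA
    and the previous landing time plus the required separation.
    Returns a dict mapping each ETA to its assigned landing time.
    """
    times = {}
    last = None
    for eta in sorted(etas):
        slot = eta if last is None else max(eta, last + separation)
        times[eta] = slot
        last = slot
    return times
```

    The difference between assigned time and ETA is the delay that a smarter sequencing or runway-assignment rule would try to reduce.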

  2. Knowledge-based scheduling of arrival aircraft

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K.; Davis, T.; Erzberger, H.; Lev-Ram, I.; Bergh, C.

    1995-01-01

    A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, as well as workload reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper will describe the scheduling algorithms, give examples of their use, and present data regarding their potential benefits to the air traffic system.

  3. A scoring algorithm for predicting the presence of adult asthma: a prospective derivation study.

    PubMed

    Tomita, Katsuyuki; Sano, Hiroyuki; Chiba, Yasutaka; Sato, Ryuji; Sano, Akiko; Nishiyama, Osamu; Iwanaga, Takashi; Higashimoto, Yuji; Haraguchi, Ryuta; Tohda, Yuji

    2013-03-01

    To predict the presence of asthma in adult patients with respiratory symptoms, we developed a scoring algorithm using clinical parameters. We prospectively analysed 566 adult outpatients who visited Kinki University Hospital for the first time with complaints of nonspecific respiratory symptoms. Asthma was comprehensively diagnosed by specialists using symptoms, signs, and objective tools including bronchodilator reversibility and/or the assessment of bronchial hyperresponsiveness (BHR). Multiple logistic regression analysis was performed to categorise patients and determine the accuracy of diagnosing asthma. A scoring algorithm using the symptom-sign score was developed, based on diurnal variation of symptoms (1 point), recurrent episodes (2 points), medical history of allergic diseases (1 point), and wheeze sound (2 points). A score of 3 or more points had 35% sensitivity and 97% specificity for discriminating between patients with and without asthma and assigned a high probability of having asthma (accuracy 90%). A score of 1 or 2 points assigned intermediate probability (accuracy 68%). After providing additional data of a forced expiratory volume in 1 second/forced vital capacity (FEV1/FVC) ratio <0.7, the post-test probability of having asthma increased to 93%. A score of 0 points assigned low probability (accuracy 31%). After providing additional data of positive reversibility, the post-test probability of having asthma increased to 88%. This pragmatic diagnostic algorithm is useful for predicting the presence of adult asthma and for determining the appropriate time for consultation with a pulmonologist.
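    A minimal sketch of the symptom-sign scoring described above, assuming the high-probability band starts at 3 points (the text's bands for 0 and 1-2 points imply this); the function name and boolean interface are illustrative, not from the paper:

```python
def asthma_probability(diurnal_variation, recurrent_episodes,
                       allergic_history, wheeze):
    """Symptom-sign score: diurnal variation of symptoms (1 point),
    recurrent episodes (2 points), history of allergic diseases
    (1 point), wheeze sound (2 points); the total maps to the
    probability bands reported in the study."""
    score = (1 * bool(diurnal_variation) + 2 * bool(recurrent_episodes)
             + 1 * bool(allergic_history) + 2 * bool(wheeze))
    if score >= 3:
        band = "high probability"      # 35% sensitivity, 97% specificity
    elif score >= 1:
        band = "intermediate probability"
    else:
        band = "low probability"
    return score, band
```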

  4. A novel algorithm for validating peptide identification from a shotgun proteomics search engine.

    PubMed

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J

    2013-03-01

    Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
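    De-Noise itself is not given in pseudocode here, but its stated goal, maximizing accepted peptides at a fixed false discovery rate, is commonly realized with a target-decoy score threshold; a hedged sketch of that standard step (names illustrative, not De-Noise's actual procedure):

```python
def score_threshold_at_fdr(scores, is_decoy, fdr_max=0.01):
    """Walk down the score-sorted matches and return the lowest score
    cutoff whose decoy-estimated FDR (#decoys passing / #targets
    passing) stays within fdr_max, i.e. the cutoff that accepts the
    most target peptides at that FDR."""
    order = sorted(zip(scores, is_decoy), key=lambda t: -t[0])
    best = None
    targets = decoys = 0
    for s, d in order:
        decoys += bool(d)
        targets += not d
        if targets and decoys / targets <= fdr_max:
            best = s
    return best
```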

  5. A Probabilistic Framework for Peptide and Protein Quantification from Data-Dependent and Data-Independent LC-MS Proteomics Experiments

    PubMed Central

    Richardson, Keith; Denny, Richard; Hughes, Chris; Skilling, John; Sikora, Jacek; Dadlez, Michał; Manteca, Angel; Jung, Hye Ryung; Jensen, Ole Nørregaard; Redeker, Virginie; Melki, Ronald; Langridge, James I.; Vissers, Johannes P.C.

    2013-01-01

    A probability-based quantification framework is presented for the calculation of relative peptide and protein abundance in label-free and label-dependent LC-MS proteomics data. The results are accompanied by credible intervals and regulation probabilities. The algorithm takes into account data uncertainties via Poisson statistics modified by a noise contribution that is determined automatically during an initial normalization stage. Protein quantification relies on assignments of component peptides to the acquired data. These assignments are generally of variable reliability and may not be present across all of the experiments comprising an analysis. It is also possible for a peptide to be identified to more than one protein in a given mixture. For these reasons the algorithm accepts a prior probability of peptide assignment for each intensity measurement. The model is constructed in such a way that outliers of any type can be automatically reweighted. Two discrete normalization methods can be employed. The first method is based on a user-defined subset of peptides, while the second method relies on the presence of a dominant background of endogenous peptides for which the concentration is assumed to be unaffected. Normalization is performed using the same computational and statistical procedures employed by the main quantification algorithm. The performance of the algorithm will be illustrated on example data sets, and its utility demonstrated for typical proteomics applications. The quantification algorithm supports relative protein quantification based on precursor and product ion intensities acquired by means of data-dependent methods, originating from all common isotopically-labeled approaches, as well as label-free ion intensity-based data-independent methods. PMID:22871168
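    The paper's model is considerably richer, but the core notion of a regulation probability under Poisson counting noise can be illustrated with a toy Monte-Carlo comparison of Gamma posteriors; the Jeffreys prior and the treatment of intensities as plain counts are simplifying assumptions, not the paper's actual framework:

```python
import random

def regulation_probability(count_a, count_b, draws=20000, seed=1):
    """Monte-Carlo estimate of P(rate_A > rate_B) under independent
    Gamma(count + 0.5, 1) posteriors (Jeffreys prior) for two Poisson
    counts.  A deliberately simplified stand-in for the full model."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(draws):
        a = rng.gammavariate(count_a + 0.5, 1.0)
        b = rng.gammavariate(count_b + 0.5, 1.0)
        hits += a > b
    return hits / draws
```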

  6. Computer-based coding of free-text job descriptions to efficiently identify occupations in epidemiological studies

    PubMed Central

    Russ, Daniel E.; Ho, Kwan-Yuet; Colt, Joanne S.; Armenti, Karla R.; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P.; Karagas, Margaret R.; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T.; Johnson, Calvin A.; Friesen, Melissa C.

    2016-01-01

    Background Mapping job titles to standardized occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiologic studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Methods Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14,983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in two occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. Results For 11,991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6- and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (kappa: 0.6–0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Conclusions Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiologic studies. PMID:27102331
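    SOCcer's trained weights are not reproduced in the abstract, but scoring each candidate SOC code by a logistic combination of its classifier scores and keeping the top scorer can be sketched as follows; the weights, feature names, and SOC codes below are invented for illustration:

```python
import math

def best_soc(candidates, weights, bias=0.0):
    """Score each candidate SOC code with a logistic combination of its
    classifier scores (e.g. job title, task, industry) and return the
    top-scoring code.  `candidates` maps SOC code -> {classifier: score}."""
    def logistic_score(features):
        z = bias + sum(w * features.get(name, 0.0)
                       for name, w in weights.items())
        return 1.0 / (1.0 + math.exp(-z))
    scored = {soc: logistic_score(f) for soc, f in candidates.items()}
    top = max(scored, key=scored.get)
    return top, scored[top]
```

    The returned score also supports the paper's review mechanism: low-scoring assignments can be flagged for manual coding.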

  7. ASSURED CLOUD COMPUTING UNIVERSITY CENTER OFEXCELLENCE (ACC UCOE)

    DTIC Science & Technology

    2018-01-18

    Research topics include infrastructure security, the design of algorithms and techniques for real-time assuredness in cloud computing, and map-reduce task assignment with data locality.

  8. GeoSearcher: Location-Based Ranking of Search Engine Results.

    ERIC Educational Resources Information Center

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…

  9. Fuzzy-logic based Q-Learning interference management algorithms in two-tier networks

    NASA Astrophysics Data System (ADS)

    Xu, Qiang; Xu, Zezhong; Li, Li; Zheng, Yan

    2017-10-01

    Unloading from the macrocell network and enhancing coverage can be realized by deploying femtocells in indoor scenarios. However, the system performance of the two-tier network can be impaired by co-tier and cross-tier interference. In this paper, a distributed resource allocation scheme is studied in which each femtocell base station is self-governed and resources cannot be assigned centrally through the gateway. A novel Q-Learning interference management scheme is proposed, which is divided into a cooperative part and an independent part. In the cooperative algorithm, interference information is exchanged between the cell-edge users, which are classified by fuzzy logic within the same cell. Meanwhile, we allocate the orthogonal subchannels to the high-rate cell-edge users to disperse the interference power when the data rate requirement is satisfied. In the independent algorithm, resources are assigned directly according to the minimum power principle. Simulation results demonstrate significant performance improvements in terms of average data rate, interference power, and energy efficiency over state-of-the-art resource allocation algorithms.
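    As a toy illustration of the independent part's learning loop (not the paper's algorithm, whose state, action, and reward definitions are richer), a stateless epsilon-greedy Q-update for subchannel selection might look like this; the reward function stands in for measured rate/interference feedback:

```python
import random

def q_channel_selection(reward_fn, n_channels, episodes=400,
                        alpha=0.2, eps=0.1, seed=0):
    """Stateless epsilon-greedy Q-learning for subchannel selection:
    pick a subchannel (explore with probability eps, otherwise exploit
    the best Q so far), observe the reward, and nudge that channel's
    Q-value toward it."""
    rng = random.Random(seed)
    q = [0.0] * n_channels
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(n_channels)   # explore
        else:
            a = q.index(max(q))             # exploit
        q[a] += alpha * (reward_fn(a) - q[a])
    return q
```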

  10. Determination of carrier yields for neutron activation analysis using energy dispersive X-ray spectrometry

    USGS Publications Warehouse

    Johnson, R.G.; Wandless, G.A.

    1984-01-01

    A new method is described for determining carrier yield in the radiochemical neutron activation analysis of rare-earth elements in silicate rocks by group separation. The method involves the determination of the rare-earth elements present in the carrier by means of energy-dispersive X-ray fluorescence analysis, eliminating the need to re-irradiate samples in a nuclear reactor after the gamma-ray analysis is complete. Results from the analysis of USGS standards AGV-1 and BCR-1 compare favorably with those obtained using the conventional method. © 1984 Akadémiai Kiadó.

  11. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    PubMed

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. 
Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
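    The neighborhood probability at the heart of these algorithms, the chance that two uncertain objects lie within distance eps of each other, can be illustrated with a brute-force computation over sampled positions; this mirrors the sampling view used by FDBSCAN, whereas PDBSCAN's contribution is computing the same quantity more exactly:

```python
def prob_within_eps(samples_x, samples_y, eps):
    """Probability that two uncertain 2-D objects, each given as a list
    of equally likely (x, y) positions, lie within distance eps of each
    other.  Compares every pair of possible positions."""
    hits = total = 0
    for ax, ay in samples_x:
        for bx, by in samples_y:
            total += 1
            hits += (ax - bx) ** 2 + (ay - by) ** 2 <= eps ** 2
    return hits / total
```

    A core-object test then sums such probabilities over all neighbors and compares the expected neighbor count against a density threshold.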

  12. Improved tissue assignment using dual-energy computed tomography in low-dose rate prostate brachytherapy for Monte Carlo dose calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Côté, Nicolas; Bedwani, Stéphane; Carrier, Jean-François, E-mail: jean-francois.carrier.chum@ssss.gouv.qc.ca

    Purpose: An improvement in tissue assignment for low-dose rate brachytherapy (LDRB) patients, enabling more accurate Monte Carlo (MC) dose calculation, was accomplished with a metallic artifact reduction (MAR) method specific to dual-energy computed tomography (DECT). Methods: The proposed MAR algorithm followed a four-step procedure. The first step involved applying a weighted blend of both DECT scans (I_H/L) to generate a new image (I_Mix). This action minimized Hounsfield unit (HU) variations surrounding the brachytherapy seeds. In the second step, the mean HU of the prostate in I_Mix was calculated and shifted toward the mean HU of the two original DECT images (I_H/L). The third step involved smoothing the newly shifted I_Mix and the two original I_H/L, followed by a subtraction of both, generating an image that represented the metallic artifact (I_A,(H/L)) with reduced noise levels. The final step consisted of subtracting this artifact image from the original I_H/L, obtaining final images corrected for metallic artifacts. Following the completion of the algorithm, a DECT stoichiometric method was used to extract the relative electronic density (ρ_e) and effective atomic number (Z_eff) at each voxel of the corrected scans. Tissue assignment could then be determined from these two newly acquired physical parameters. Each voxel was assigned the tissue bearing the closest resemblance in terms of ρ_e and Z_eff, compared with values from the ICRU 42 database. An MC study was then performed to compare the dosimetric impact of alternative MAR algorithms. Results: An improvement in tissue assignment was observed with the DECT MAR algorithm, compared to the single-energy computed tomography (SECT) approach. In a phantom study, tissue misassignment was found to reach 0.05% of voxels using the DECT approach, compared with 0.40% using the SECT method. Comparison of the DECT and SECT D_90 dose parameter (the minimum dose received by 90% of the volume) indicated that D_90 could be underestimated by up to 2.3% using the SECT method. Conclusions: The DECT MAR approach is a simple alternative for reducing metallic artifacts in LDRB patient scans. Images can be processed quickly and do not require determination of x-ray spectra. Substantial information on density and atomic number can also be obtained. Furthermore, calcifications within the prostate are detected by the tissue assignment algorithm. This enables more accurate, patient-specific MC dose calculations.
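    A toy 1-D rendition conveys the structure of the four-step procedure (weighted blend, mean shift, smooth-and-subtract, artifact removal); the box smoothing and equal blend weight are placeholders, and the mean-shift step is omitted because it is trivial for a synthetic profile:

```python
def dect_mar_1d(profile_h, profile_l, w=0.5):
    """Toy 1-D sketch of the four-step MAR procedure: (1) weighted
    blend of the two scans, (2) mean shift (omitted here), (3) smooth
    both blend and original and subtract to isolate the artifact,
    (4) remove the artifact from each original profile."""
    def smooth(a):                      # 3-tap moving average, edge-clamped
        n = len(a)
        return [(a[max(i - 1, 0)] + a[i] + a[min(i + 1, n - 1)]) / 3
                for i in range(n)]
    mix = [w * h + (1 - w) * l for h, l in zip(profile_h, profile_l)]
    corrected = []
    for profile in (profile_h, profile_l):
        artifact = [si - sm for si, sm in zip(smooth(profile), smooth(mix))]
        corrected.append([v - a for v, a in zip(profile, artifact)])
    return corrected
```

    With identical inputs the blend equals each scan, the isolated artifact is zero, and the profiles pass through unchanged, which is the expected degenerate behavior.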

  13. An ontology-based nurse call management system (oNCS) with probabilistic priority assessment

    PubMed Central

    2011-01-01

    Background The current, place-oriented nurse call systems are very static. A patient can only make calls with a button which is fixed to a wall of a room. Moreover, the system does not take into account various factors specific to a situation. In the future, there will be an evolution to a mobile button for each patient so that they can walk around freely and still make calls. The system would become person-oriented and the available context information should be taken into account to assign the correct nurse to a call. The aim of this research is (1) the design of a software platform that supports the transition to mobile and wireless nurse call buttons in hospitals and residential care and (2) the design of a sophisticated nurse call algorithm. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff members and patients into account. Additionally, the priority of a call probabilistically depends on the risk factors, assigned to a patient. Methods The ontology-based Nurse Call System (oNCS) was developed as an extension of a Context-Aware Service Platform. An ontology is used to manage the profile information. Rules implement the novel nurse call algorithm that takes all this information into account. Probabilistic reasoning algorithms are designed to determine the priority of a call based on the risk factors of the patient. Results The oNCS system is evaluated through a prototype implementation and simulations, based on a detailed dataset obtained from Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls amongst nurses and the assignment of priorities to calls are compared for the oNCS system and the current, place-oriented nurse call system. Additionally, the performance of the system is discussed. Conclusions The execution time of the nurse call algorithm is on average 50.333 ms. 
Moreover, the oNCS system significantly improves the assignment of nurses to calls. A nurse generally arrives at the location of a call faster, and the workload distribution amongst the nurses improves. PMID:21294860

  14. Solving multiconstraint assignment problems using learning automata.

    PubMed

    Horn, Geir; Oommen, B John

    2010-02-01

    This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. 
This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems.
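    Of the VSSA family mentioned above, the classical linear reward-inaction (L_RI) probability update is the simplest building block; a sketch follows, with the learning rate and interface chosen for illustration rather than taken from the paper:

```python
def lri_update(probs, chosen, rewarded, lr=0.1):
    """Linear reward-inaction (L_RI) update for a variable-structure
    stochastic automaton: on reward, move probability mass toward the
    chosen action; on penalty, leave the action-probability vector
    unchanged.  The vector stays normalized."""
    if not rewarded:
        return list(probs)
    return [p + lr * (1.0 - p) if i == chosen else p * (1.0 - lr)
            for i, p in enumerate(probs)]
```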

  15. Diagnostic Performance of SRU and ATA Thyroid Nodule Classification Algorithms as Tested With a 1 Million Virtual Thyroid Nodule Model.

    PubMed

    Boehnke, Mitchell; Patel, Nayana; McKinney, Kristin; Clark, Toshimasa

    The Society of Radiologists in Ultrasound (SRU 2005) and American Thyroid Association (ATA 2009 and ATA 2015) have published algorithms for thyroid nodule management. Kwak et al. and other groups have described models that estimate thyroid nodules' malignancy risk. The aim of our study is to use Kwak's model to evaluate the tradeoffs in sensitivity and specificity of the SRU 2005, ATA 2009, and ATA 2015 management algorithms. 1,000,000 thyroid nodules were modeled in MATLAB. Ultrasound characteristics were modeled after published data. Malignancy risk was estimated per Kwak's model and assigned as a binary variable. All nodules were then assessed using the published management algorithms. With the malignancy variable as condition positivity and the algorithms' recommendation for FNA as test positivity, diagnostic performance was calculated. Modeled nodule characteristics mimic those of Kwak et al. 12.8% of nodules were assigned as malignant (malignancy risk range of 2.0-98%). FNA was recommended for 41% of nodules by SRU 2005, 66% by ATA 2009, and 82% by ATA 2015. Sensitivity and specificity are significantly different (p < 0.0001): 49% and 60% for SRU 2005; 81% and 36% for ATA 2009; and 95% and 20% for ATA 2015. The SRU 2005, ATA 2009, and ATA 2015 algorithms are used routinely in clinical practice to determine whether thyroid nodule biopsy is indicated. We demonstrate significant differences in these algorithms' diagnostic performance, which result in a compromise between sensitivity and specificity. Copyright © 2017 Elsevier Inc. All rights reserved.
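    With malignancy as condition positivity and an FNA recommendation as test positivity, the study's performance figures reduce to a confusion-matrix computation, sketched here with an illustrative interface:

```python
def diagnostic_performance(malignant, fna_recommended):
    """Sensitivity and specificity with malignancy as condition
    positivity and a recommendation for FNA as test positivity."""
    pairs = list(zip(malignant, fna_recommended))
    tp = sum(1 for m, f in pairs if m and f)        # biopsy advised, malignant
    fn = sum(1 for m, f in pairs if m and not f)    # missed malignancy
    tn = sum(1 for m, f in pairs if not m and not f)
    fp = sum(1 for m, f in pairs if not m and f)    # unnecessary biopsy
    return tp / (tp + fn), tn / (tn + fp)
```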

  16. Fast algorithm for automatically computing Strahler stream order

    USGS Publications Warehouse

    Lanfear, Kenneth J.

    1990-01-01

    An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
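    The Strahler rule itself (a headwater segment has order 1; a downstream segment takes the maximum upstream order, incremented when that maximum occurs at least twice) is compact enough to sketch; the dict-based adjacency representation is an assumption for illustration, whereas the published algorithm operates on GIS stream-segment topology:

```python
def strahler_orders(upstream):
    """Strahler order per stream segment.  `upstream` maps a segment to
    the segments flowing directly into it; headwaters have no entry."""
    memo = {}
    def order(seg):
        if seg not in memo:
            kids = [order(c) for c in upstream.get(seg, [])]
            if not kids:
                memo[seg] = 1                       # headwater
            else:
                top = max(kids)
                memo[seg] = top + 1 if kids.count(top) >= 2 else top
        return memo[seg]
    segments = set(upstream) | {c for cs in upstream.values() for c in cs}
    return {seg: order(seg) for seg in segments}
```

    Memoization gives the near-linear scaling in segment count that the abstract reports for the published method.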

  17. Overcoming an obstacle in expanding a UMLS semantic type extent.

    PubMed

    Chen, Yan; Gu, Huanying; Perl, Yehoshua; Geller, James

    2012-02-01

    This paper strives to overcome a major problem encountered by a previous expansion methodology for discovering concepts highly likely to be missing a specific semantic type assignment in the UMLS. This methodology is the basis for an algorithm that presents the discovered concepts to a human auditor for review and possible correction. We analyzed the problem of the previous expansion methodology and discovered that it was due to an obstacle constituted by one or more concepts assigned the UMLS Semantic Network semantic type Classification. A new methodology was designed that bypasses such an obstacle without a combinatorial explosion in the number of concepts presented to the human auditor for review. The new expansion methodology with obstacle avoidance was tested with the semantic type Experimental Model of Disease and found over 500 concepts missed by the previous methodology that are in need of this semantic type assignment. Furthermore, other semantic types suffering from the same major problem were discovered, indicating that the methodology is of more general applicability. The algorithmic discovery of concepts that are likely missing a semantic type assignment is possible even in the face of obstacles, without an explosion in the number of processed concepts. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Overcoming an Obstacle in Expanding a UMLS Semantic Type Extent

    PubMed Central

    Chen, Yan; Gu, Huanying; Perl, Yehoshua; Geller, James

    2011-01-01

    This paper strives to overcome a major problem encountered by a previous expansion methodology for discovering concepts highly likely to be missing a specific semantic type assignment in the UMLS. This methodology is the basis for an algorithm that presents the discovered concepts to a human auditor for review and possible correction. We analyzed the problem of the previous expansion methodology and discovered that it was due to an obstacle constituted by one or more concepts assigned the UMLS Semantic Network semantic type Classification. A new methodology was designed that bypasses such an obstacle without a combinatorial explosion in the number of concepts presented to the human auditor for review. The new expansion methodology with obstacle avoidance was tested with the semantic type Experimental Model of Disease and found over 500 concepts missed by the previous methodology that are in need of this semantic type assignment. Furthermore, other semantic types suffering from the same major problem were discovered, indicating that the methodology is of more general applicability. The algorithmic discovery of concepts that are likely missing a semantic type assignment is possible even in the face of obstacles, without an explosion in the number of processed concepts. PMID:21925287

  19. Relationship auditing of the FMA ontology

    PubMed Central

    Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai

    2010-01-01

    The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity, and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examining their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well-designed algorithms can help domain experts focus on concepts with a high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future, similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
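    One of the five categories, circular relationship assignments, reduces to cycle detection over the relationship graph; a depth-first sketch follows, with the dict-of-lists representation assumed for illustration:

```python
def has_circular_part_of(part_of):
    """Detect circular part_of assignments (one of the paper's five
    error categories) with a depth-first search over the relationship
    graph.  `part_of` maps a concept to the concepts it is part of."""
    visiting, done = set(), set()
    def dfs(node):
        if node in done:
            return False
        if node in visiting:
            return True                 # back edge: a cycle exists
        visiting.add(node)
        if any(dfs(parent) for parent in part_of.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(dfs(n) for n in list(part_of))
```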

  20. Deep Learning in Intermediate Microeconomics: Using Scaffolding Assignments to Teach Theory and Promote Transfer

    ERIC Educational Resources Information Center

    Green, Gareth P.; Bean, John C.; Peterson, Dean J.

    2013-01-01

    Intermediate microeconomics is typically viewed as a theory and tools course that relies on algorithmic problems to help students learn and apply economic theory. However, the authors' assessment research suggests that algorithmic problems by themselves do not encourage students to think about where the theory comes from, why the theory is…

  1. Binary Bees Algorithm - bioinspiration from the foraging mechanism of honeybees to optimize a multiobjective multidimensional assignment problem

    NASA Astrophysics Data System (ADS)

    Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan

    2011-11-01

    The simultaneous mission assignment and home allocation problem for hospital service robots studied here is a Multidimensional Assignment Problem (MAP) with multiple objectives and constraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search, for exploration and exploitation, respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solution selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate its single-objective optimization, optimization effectiveness (indexed by the S-metric and C-metric), and optimization efficiency (indexed by computational burden and CPU time) in detail. The BBA outperformed its competitors on almost all the quantitative indices. Hence, the overall scheme, and particularly the search-history-adapted global search strategy, was validated.

  2. Wavelength assignment algorithm considering the state of neighborhood links for OBS networks

    NASA Astrophysics Data System (ADS)

    Tanaka, Yu; Hirota, Yusuke; Tode, Hideki; Murakami, Koso

    2005-10-01

    Recently, optical WDM technology has been introduced into backbone networks, and Optical Burst Switching (OBS) is emerging as a realistic future optical switching scheme. Because OBS systems do not buffer bursts at intermediate nodes, avoiding overlapping wavelength reservations between partially interfering paths is an important issue. To address it, a wavelength assignment scheme based on priority management tables has previously been proposed; it reduces the burst blocking probability, but the priority management tables require huge memory space. In this paper, we propose a wavelength assignment algorithm that reduces both the number of priority management tables and the burst blocking probability. To reduce the number of tables, we allocate and manage them per link. To reduce the blocking probability, our method announces information about priority changes to intermediate nodes. We evaluate its performance in terms of the burst blocking probability and the reduction rate of priority management tables.
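    For contrast with the proposed priority-table scheme, the textbook baseline it improves on, first-fit wavelength assignment under the wavelength-continuity constraint, can be sketched as follows (interface illustrative):

```python
def first_fit_wavelength(path_links, in_use, n_wavelengths):
    """First-fit wavelength assignment under the wavelength-continuity
    constraint: reserve the lowest-index wavelength that is free on
    every link of the path; return None if the burst is blocked.
    `in_use[link]` is the set of wavelengths already reserved there."""
    for w in range(n_wavelengths):
        if all(w not in in_use.get(link, set()) for link in path_links):
            for link in path_links:
                in_use.setdefault(link, set()).add(w)
            return w
    return None
```

    A priority-table scheme replaces the fixed low-to-high wavelength ordering with a per-link ordering learned from reservation history.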

  3. Efficacy and safety of intravitreal bevacizumab in eyes with neovascular glaucoma undergoing Ahmed glaucoma valve implantation: 2-year follow-up.

    PubMed

    Arcieri, Enyr S; Paula, Jayter S; Jorge, Rodrigo; Barella, Kleyton A; Arcieri, Rafael S; Secches, Danilo J; Costa, Vital P

    2015-02-01

    To evaluate the efficacy and safety of intravitreal bevacizumab (IVB) in eyes with neovascular glaucoma (NVG) undergoing Ahmed glaucoma valve (AGV) implantation. This was a multicentre, prospective, randomized clinical trial that enrolled 40 patients with uncontrolled neovascular glaucoma who had undergone panretinal photocoagulation and required glaucoma drainage device implantation. Patients were randomized to receive IVB (1.25 mg) or not during Ahmed valve implant surgery. Injections were administered intra-operatively, and 4 and 8 weeks after surgery. After a mean follow-up of 2.25 ± 0.67 years (range 1.5-3 years), both groups showed a significant decrease in IOP (p < 0.05). There was no difference in IOP between groups except at the 18-month interval, when IOP in the IVB group was significantly lower (14.57 ± 1.72 mmHg vs. 18.37 ± 1.06 mmHg; p = 0.0002). There was no difference in survival success rates between groups. At 24 months, there was a trend for patients treated with IVB to use fewer antiglaucoma medications than the control group (p = 0.0648). Complete regression of rubeosis iridis was significantly more frequent in the IVB group (80%) than in the control group (25%) (p = 0.0015). Intravitreal bevacizumab may lead to regression of new vessels both in the iris and in the anterior chamber angle in patients with neovascular glaucoma undergoing Ahmed glaucoma valve implantation. There is a trend towards slightly lower IOPs and fewer medications with IVB use during AGV implantation for neovascular glaucoma. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  4. Improvement of gas-adsorption performances of Ag-functionalized monolayer MoS2 surfaces: A first-principles study

    NASA Astrophysics Data System (ADS)

    Song, Jian; Lou, Huan

    2018-05-01

    Adsorption of representative gases (NO2, NH3, H2S, SO2, CO, and HCHO) on different Ag-functionalized monolayer MoS2 surfaces was investigated by first-principles methods. The adsorption configurations, adsorption energies, electronic structure properties, and charge transfer were calculated, and the results show that the adsorption activity of monolayer MoS2 towards these gases is dramatically enhanced by Ag-modification. The Ag-modified perfect MoS2 (Ag-P) and S-vacancy MoS2 (Ag-Vs) substrates exhibit superior adsorption activity towards NO2 compared with the other gases, which is consistent with experimental reports. The charge-transfer processes of the different molecules adsorbed on the different surfaces exhibit various characteristics, with potential benefits for gas selectivity. For instance, NO2 and SO2 gain more electrons from both the Ag-P and Ag-Vs substrates, whereas NH3 and H2S donate more electrons to the materials than the other gases. In addition, CO and HCHO show opposite charge-transfer directions on the two substrates. The band structure (BS) and PDOS calculations show that the semiconductor type of a gas/Ag-MoS2 system is determined mainly by the metal-functionalization of the material, and the direction and amount of charge transfer between gas and adsorbent can theoretically increase or decrease the material resistance, which is helpful for gas detection and discrimination. Further analysis indicates that suitable cooperation between the electron gain-loss ability of the gas and the metallicity of the functionalizing metal might adjust the resistivity of the complex and suggest new approaches to metal-functionalization. Our work provides valuable new ideas and a theoretical foundation for potentially improving the performance of MoS2-based gas sensors, such as their sensitivity and selectivity.

  5. Single machine scheduling with slack due dates assignment

    NASA Astrophysics Data System (ADS)

    Liu, Weiguo; Hu, Xiangpei; Wang, Xuyin

    2017-04-01

    This paper considers a single machine scheduling problem in which each job is assigned an individual due date based on a common flow allowance (i.e. all jobs have slack due dates). The goal is to find a job sequence, together with a due-date assignment, that minimizes a non-regular criterion comprising the total weighted absolute lateness and the common flow allowance cost, where the weights are position-dependent. An ? time algorithm is proposed to solve this problem. Some extensions of the problem are also discussed.
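    Under the slack (SLK) rule each due date takes the form d_j = p_j + q for a common flow allowance q, so for a fixed sequence the criterion can be evaluated directly (a minimal sketch; the function name, the unit positional weights, and the linear allowance cost are illustrative assumptions, not the paper's notation):

```python
def slack_due_date_cost(sequence, q, weights, allowance_cost):
    """Evaluate a job sequence under the SLK due-date rule d_j = p_j + q.

    Returns the total position-weighted absolute lateness plus the
    common-flow-allowance cost, i.e. the non-regular criterion above.
    sequence: processing times in processing order.
    """
    total, completion = 0.0, 0.0
    for position, p in enumerate(sequence):
        completion += p        # completion time C_j under the sequence
        due = p + q            # slack due date: processing time + allowance
        total += weights[position] * abs(completion - due)
    return total + allowance_cost * q

# Two jobs, unit positional weights, allowance cost 1 per unit of q:
# job 1: C=2, d=4, lateness 2; job 2: C=5, d=5, lateness 0; plus q*1=2.
cost = slack_due_date_cost([2, 3], q=2, weights=[1, 1], allowance_cost=1)
# → 4.0
```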

  6. Research on the control of large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1983-01-01

    The research effort on the control of large space structures at the University of Houston has concentrated on the mathematical theory of finite-element models; identification of the mass, damping, and stiffness matrix; assignment of damping to structures; and decoupling of structure dynamics. The objective of the work has been and will continue to be the development of efficient numerical algorithms for analysis, control, and identification of large space structures. The major consideration in the development of the algorithms has been the large number of equations that must be handled by the algorithm as well as sensitivity of the algorithms to numerical errors.

  7. Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Reed, Seann M.

    2003-09-01

    The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT), has been developed to automatically, accurately, and efficiently assign flow directions to coarse-resolution grid cells using information from a higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency to produce diagonal flow directions. Analyses of results at output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that produces minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.
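    COTAT itself traces cell outlets on the fine grid, but it builds on standard D8 steepest-descent flow directions; a minimal D8 sketch (the grid and names are illustrative, not the COTAT algorithm itself):

```python
import math

# The 8 D8 neighbour offsets: (drow, dcol).
D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
      (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_direction(elev, r, c):
    """Return the D8 neighbour offset with the steepest downhill slope
    from cell (r, c), or None for a pit. Diagonal neighbours are
    farther away (sqrt(2) cell widths), the bias that COTAT's area
    threshold gives extra control over."""
    best, best_slope = None, 0.0
    for dr, dc in D8:
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(elev) and 0 <= cc < len(elev[0]):
            dist = math.sqrt(2.0) if dr and dc else 1.0
            slope = (elev[r][c] - elev[rr][cc]) / dist
            if slope > best_slope:
                best, best_slope = (dr, dc), slope
    return best

grid = [[9, 8, 7],
        [8, 6, 5],
        [7, 5, 3]]
# The centre cell (1, 1) drains toward the lowest corner (2, 2).
```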

  8. From Natural Variation to Optimal Policy? The Lucas Critique Meets Peer Effects. NBER Working Paper No. 16865

    ERIC Educational Resources Information Center

    Carrell, Scott E.; Sacerdote, Bruce I.; West, James E.

    2011-01-01

    We take cohorts of entering freshmen at the United States Air Force Academy and assign half to peer groups with the goal of maximizing the academic performance of the lowest ability students. Our assignment algorithm uses peer effects estimates from the observational data. We find a negative and significant treatment effect for the students we…

  9. Multi-Agent Task Negotiation Among UAVs to Defend Against Swarm Attacks

    DTIC Science & Technology

    2012-03-01

    are based on economic models [39]. Auction methods of task coordination also attempt to deal with agents dealing with noisy, dynamic environments... August 2006. [34] M. Alighanbari, "Robust and decentralized task assignment algorithms for UAVs," Ph.D. dissertation, Massachusetts Institute of Technology... Implicit Coordination . . . 12; 2.4 Decentralized Algorithm B - Market-Based . . . 12; 2.5 Decentralized

  10. iNJclust: Iterative Neighbor-Joining Tree Clustering Framework for Inferring Population Structure.

    PubMed

    Limpiti, Tulaya; Amornbunchornvej, Chainarong; Intarapanich, Apichart; Assawamakin, Anunchai; Tongsima, Sissades

    2014-01-01

    Understanding genetic differences among populations is one of the most important issues in population genetics. Genetic variations, e.g., single nucleotide polymorphisms, are used to characterize the commonality and difference of individuals from various populations. This paper presents an efficient graph-based clustering framework, called the iNJclust algorithm, which operates iteratively on the Neighbor-Joining (NJ) tree. The framework uses well-known genetic measurements, namely the allele-sharing distance, the neighbor-joining tree, and the fixation index, whose behavior is utilized in the algorithm's stopping criterion. The algorithm provides an estimated number of populations, individual assignments, and relationships between populations as outputs. The clustering result is reported in the form of a binary tree, whose terminal nodes represent the final inferred populations and whose structure preserves the genetic relationships among them. The clustering performance and the robustness of the proposed algorithm are tested extensively using simulated and real data sets from bovine, sheep, and human populations. The results indicate that the number of populations within each data set is reasonably estimated, the individual assignment is robust, and the structure of the inferred population tree corresponds to the intrinsic relationships among populations within the data.
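    The allele-sharing distance that feeds the NJ tree is simple to compute from 0/1/2-coded genotypes (a minimal sketch; function names are illustrative):

```python
def allele_sharing_distance(g1, g2):
    """Allele-sharing distance between two individuals.

    Genotypes are coded 0/1/2 (allele counts per SNP). At each locus
    the two individuals share 2 - |g1 - g2| alleles, so the distance
    reduces to the mean absolute genotype difference divided by 2.
    """
    assert len(g1) == len(g2)
    return sum(abs(a - b) for a, b in zip(g1, g2)) / (2.0 * len(g1))

def distance_matrix(genotypes):
    """Pairwise ASD matrix, the input an NJ tree is built from."""
    n = len(genotypes)
    return [[allele_sharing_distance(genotypes[i], genotypes[j])
             for j in range(n)] for i in range(n)]

# Identical individuals have distance 0; opposite homozygotes at
# every locus have distance 1.
d = allele_sharing_distance([0, 2, 1], [2, 0, 1])
# (|0-2| + |2-0| + |1-1|) / 6 = 4/6 ≈ 0.667
```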

  11. A Genetic Algorithm for the Bi-Level Topological Design of Local Area Networks

    PubMed Central

    Camacho-Vallejo, José-Fernando; Mar-Ortiz, Julio; López-Ramos, Francisco; Rodríguez, Ricardo Pedraza

    2015-01-01

    Local area networks (LANs) are commonly used as communication infrastructures that meet the demand of a set of users in a local environment. Usually these networks consist of several LAN segments connected by bridges. The bi-level topological LAN design problem consists of assigning users to clusters and connecting the clusters by bridges in order to obtain a network with minimum response time and minimum connection cost. The leader makes the decision of optimally assigning users to clusters, and the follower makes the decision of connecting all the clusters while forming a spanning tree. In this paper, we propose a genetic algorithm for solving the bi-level topological design of a local area network. Our solution method considers the Stackelberg equilibrium to solve the bi-level problem. The Stackelberg-Genetic algorithm procedure deals with the fact that the follower's problem cannot be optimally solved in a straightforward manner. The computational results obtained from two different sets of instances show that the performance of the developed algorithm is efficient and that it is more suitable for solving the bi-level problem than a previous Nash-Genetic approach. PMID:26102502

  12. Trajectory optimization of spacecraft high-thrust orbit transfer using a modified evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shirazi, Abolfazl

    2016-10-01

    This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out by converting the orbital manoeuvre into a parameter optimization problem, assigning inverse tangential functions to the changes in the direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for the optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm offers high solution quality and fast convergence. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.

  13. TaDb: A time-aware diffusion-based recommender algorithm

    NASA Astrophysics Data System (ADS)

    Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan

    2015-02-01

    Traditional recommender algorithms usually employ the early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable in a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after that in the diffusion process. Experiments on four real datasets, Netflix, MovieLens, FriendFeed and Delicious show that TaDb algorithm significantly improves the prediction accuracy compared with the algorithms not considering temporal effects.
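    The core idea of weighting diffusion links by their temporal distance to the target user's collections can be sketched as follows (an illustrative mass-diffusion variant; the exponential decay and all names are assumptions, not the TaDb formulation itself):

```python
import math
from collections import defaultdict

def time_aware_diffusion(links, target, t_target, lam=0.1):
    """Sketch of a time-aware mass-diffusion recommender.

    links: (user, item, timestamp) tuples. Each link is weighted by
    exp(-lam * |t - t_target|), so links close in time to the target
    user's collections dominate the two-step diffusion.
    """
    user_items, item_users = defaultdict(dict), defaultdict(dict)
    for u, i, t in links:
        w = math.exp(-lam * abs(t - t_target))
        user_items[u][i] = w
        item_users[i][u] = w
    # Step 1: unit resource on each of the target's items flows to users.
    user_res = defaultdict(float)
    for i in user_items[target]:
        total = sum(item_users[i].values())
        for u, w in item_users[i].items():
            user_res[u] += w / total
    # Step 2: users redistribute their resource back to their items.
    scores = defaultdict(float)
    for u, res in user_res.items():
        total = sum(user_items[u].values())
        for i, w in user_items[u].items():
            scores[i] += res * w / total
    # Recommend only items the target has not collected yet.
    return {i: s for i, s in scores.items() if i not in user_items[target]}

links = [("alice", "x", 0.0), ("alice", "y", 0.0),
         ("bob", "x", 0.0), ("bob", "z", 0.0)]
rec = time_aware_diffusion(links, "alice", t_target=0.0)
# alice's only candidate is z, reached via co-collector bob: {"z": 0.25}
```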

  14. An adaptive grid algorithm for 3-D GIS landform optimization based on improved ant algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Chenhan; Meng, Lingkui; Deng, Shijun

    2005-07-01

    The key technique of 3-D GIS is realizing quick, high-quality 3-D visualization, in which 3-D roaming systems based on landform play an important role. However, increasing the efficiency of the 3-D roaming engine while processing a large amount of landform data is a key problem in 3-D landform roaming systems, and handling it improperly results in tremendous consumption of system resources. How to realize high-speed processing of distributed landform DEM (Digital Elevation Model) data and high-speed distributed scheduling of the various 3-D landform data resources has therefore become the key issue in 3-D roaming system design. In this paper we improve the basic ant algorithm and design a scheduling strategy for 3-D GIS landform resources based on the improved algorithm. By introducing initial hypothetical road weights σi, the pheromone increment of the original algorithm is transformed from Δτj to Δτj + σi, where the weights are decided by the 3-D computing capacity of the various nodes in the network environment. During the initial phase of task assignment, increasing the resource information factors of nodes with a high task-completion rate and decreasing those of nodes with a low rate makes the load-completion rates approach the same value as quickly as possible; in the later phase of task assignment, the load-balancing ability of the system is further improved. Experimental results show that, by improving the ant algorithm, our system not only removes many disadvantages of the traditional ant algorithm but also, like ants looking for food, effectively distributes the complicated landform computation to many computers for cooperative processing and obtains a satisfying search result.
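    The modified pheromone update can be sketched directly (a minimal sketch; the evaporation term and all names are illustrative assumptions, not the paper's full update rule):

```python
def update_pheromone(tau, delta_tau, sigma, rho=0.5):
    """Capacity-weighted pheromone update as described above: the
    increment Δτ_j becomes Δτ_j + σ_i, where σ_i reflects the 3-D
    computing capacity of node i, so capable nodes accumulate
    pheromone, and hence task assignments, faster.

    tau, delta_tau, sigma: per-node lists; rho: evaporation rate.
    """
    return [(1 - rho) * t + d + s
            for t, d, s in zip(tau, delta_tau, sigma)]

# Node 1 has a capacity bonus of 0.3 and ends up more attractive:
tau = update_pheromone([1.0, 1.0], delta_tau=[0.2, 0.2],
                       sigma=[0.0, 0.3])
# ≈ [0.7, 1.0]
```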

  15. Job Shop Scheduling Focusing on Role of Buffer

    NASA Astrophysics Data System (ADS)

    Hino, Rei; Kusumi, Tetsuya; Yoo, Jae-Kyu; Shimizu, Yoshiaki

    A scheduling problem is formulated in order to consistently manage each manufacturing resource, including machine tools, assembly robots, AGVs, storehouses, material shelves, and so on. The manufacturing resources are classified into three types: producer, location, and mover. This paper focuses especially on the role of the buffer, and the differences among these types are analyzed. A unified scheduling formulation is derived from the analytical results based on the resources' roles. Scheduling procedures based on dispatching rules are also proposed in order to numerically evaluate job shop-type production with finite buffer capacity. The influences on productivity of the capacity of bottlenecked production devices and of the buffer are discussed.

  16. Low-lying singlet states of carotenoids having 8-13 conjugated double bonds as determined by electronic absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Nakamura, Ryosuke; Kanematsu, Yasuo; Koyama, Yasushi; Nagae, Hiroyoshi; Nishio, Tomohiro; Hashimoto, Hideki; Zhang, Jian-Ping

    2005-07-01

    Electronic absorption spectra were recorded at room temperature in solutions of carotenoids having different numbers of conjugated double bonds, n = 8-13, including a spheroidene derivative, neurosporene, spheroidene, lycopene, anhydrorhodovibrin and spirilloxanthin. The vibronic states of 1Bu+(v=0-4), 2Ag-(v=0-3), 3Ag- (0) and 1Bu- (0) were clearly identified. The arrangement of the four electronic states determined by electronic absorption spectroscopy was identical to that determined by measurement of resonance Raman excitation profiles [K. Furuichi et al., Chem. Phys. Lett. 356 (2002) 547] for carotenoids in crystals.

  17. Total mass difference statistics algorithm: a new approach to identification of high-mass building blocks in electrospray ionization Fourier transform ion cyclotron mass spectrometry data of natural organic matter.

    PubMed

    Kunenkov, Erast V; Kononikhin, Alexey S; Perminova, Irina V; Hertkorn, Norbert; Gaspar, Andras; Schmitt-Kopplin, Philippe; Popov, Igor A; Garmash, Andrew V; Nikolaev, Evgeniy N

    2009-12-15

    The ultrahigh-resolution Fourier transform ion cyclotron resonance (FTICR) mass spectrum of natural organic matter (NOM) contains several thousand peaks, with dozens of molecules matching the same nominal mass. Such complexity poses a significant challenge for automatic data interpretation, in which the most difficult task is molecular formula assignment, especially in the case of heavy and/or multielement ions. In this study, a new universal algorithm for automatic treatment of FTICR mass spectra of NOM and humic substances based on total mass difference statistics (TMDS) has been developed and implemented. The algorithm enables a blind search for unknown building blocks (instead of a priori known ones) by revealing repetitive patterns present in spectra. In this respect, it differs from all previously developed approaches. The algorithm was implemented in the FIRAN software for fully automated analysis of mass data with high peak density. The specific feature of FIRAN is its ability to assign formulas to heavy and/or multielement molecules using a "virtual elements" approach. To verify the approach, it was used for processing mass spectra of sodium polystyrene sulfonate (PSS, M(w) = 2200 Da) and polymethacrylate (PMA, M(w) = 3290 Da), which produce heavy multielement and multiply-charged ions. Application of TMDS unambiguously identified the monomers present in the polymers, consistent with their structure: C(8)H(7)SO(3)Na for PSS and C(4)H(6)O(2) for PMA. It also allowed unambiguous formula assignment to all multiply-charged peaks, including the heaviest peak in the PMA spectrum at mass 4025.6625 with charge state 6- (mass bias -0.33 ppm). Application of the TMDS algorithm to data on the Suwannee River FA has proven its unique capacity for analysis of spectra with high peak density: it identified not only the known small building blocks in the structure of FA, such as CH(2), H(2), C(2)H(2)O and O, but also a heavier unit at 154.027 amu. The latter was identified for the first time and assigned the formula C(7)H(6)O(4), consistent with the structure of dihydroxybenzoic acids. The presence of these compounds in the structure of FA has so far been numerically suggested but never proven directly. It was concluded that application of the TMDS algorithm opens new horizons in unfolding the molecular complexity of NOM and other natural products.
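    The core TMDS idea of tallying repetitive mass differences can be sketched as follows (illustrative thresholds and names; the real algorithm operates on thousands of peaks with ppm-level mass accuracy):

```python
from collections import Counter

def total_mass_difference_statistics(masses, precision=3, min_count=2):
    """Tally all pairwise mass differences in a peak list and return
    the ones that repeat, which point to candidate building blocks
    (e.g. 14.016 Da for CH2). Rounding precision and the repeat
    threshold are illustrative choices.
    """
    diffs = Counter()
    for i, m1 in enumerate(masses):
        for m2 in masses[i + 1:]:
            diffs[round(abs(m2 - m1), precision)] += 1
    return {d: c for d, c in diffs.items() if c >= min_count and d > 0}

# A toy peak list built on repeated CH2 (14.016 Da) spacings:
peaks = [100.000, 114.016, 128.032, 142.048]
blocks = total_mass_difference_statistics(peaks)
# 14.016 occurs 3 times and 28.032 twice → both flagged as blocks
```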

  18. A new algorithm using cross-assignment for label-free quantitation with LC-LTQ-FT MS.

    PubMed

    Andreev, Victor P; Li, Lingyun; Cao, Lei; Gu, Ye; Rejtar, Tomas; Wu, Shiaw-Lin; Karger, Barry L

    2007-06-01

    A new algorithm is described for label-free quantitation of relative protein abundances across multiple complex proteomic samples. Q-MEND is based on the denoising and peak picking algorithm, MEND, previously developed in our laboratory. Q-MEND takes advantage of the high resolution and mass accuracy of the hybrid LTQ-FT mass spectrometer (or other high-resolution mass spectrometers, such as a Q-TOF MS). The strategy, termed "cross-assignment", is introduced to increase substantially the number of quantitated proteins. In this approach, all MS/MS identifications for the set of analyzed samples are combined into a master ID list, and then each LC-MS run is searched for the features that can be assigned to a specific identification from that master list. The reliability of quantitation is enhanced by quantitating separately all peptide charge states, along with a scoring procedure to filter out less reliable peptide abundance measurements. The effectiveness of Q-MEND is illustrated in the relative quantitative analysis of Escherichia coli samples spiked with known amounts of non-E. coli protein digests. A mean quantitation accuracy of 7% and mean precision of 15% is demonstrated. Q-MEND can perform relative quantitation of a set of LC-MS data sets without manual intervention and can generate files compatible with the Guidelines for Proteomic Data Publication.
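    The cross-assignment strategy itself reduces to matching each run's LC-MS features against the pooled master ID list (a minimal sketch; the tolerances and field layout are illustrative assumptions, not Q-MEND's actual matching criteria):

```python
def cross_assign(features_per_run, master_ids, mz_tol=0.01, rt_tol=0.5):
    """Pool every MS/MS identification into a master ID list, then
    match each run's features against it by m/z and retention time.
    A feature is quantified under an ID even in runs where that
    peptide was never fragmented, which is the point of the strategy.
    """
    quantified = []
    for run, features in enumerate(features_per_run):
        for mz, rt, abundance in features:
            for pep, id_mz, id_rt in master_ids:
                if abs(mz - id_mz) <= mz_tol and abs(rt - id_rt) <= rt_tol:
                    quantified.append((run, pep, abundance))
                    break
    return quantified

master = [("PEPTIDE_A", 500.25, 12.3)]
runs = [[(500.255, 12.1, 1.0e6)],   # identified only in run 0...
        [(500.248, 12.4, 2.1e6)]]   # ...but still quantified in run 1
rows = cross_assign(runs, master)
# → [(0, 'PEPTIDE_A', 1000000.0), (1, 'PEPTIDE_A', 2100000.0)]
```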

  19. Leveraging search and content exploration by exploiting context in folksonomy systems

    NASA Astrophysics Data System (ADS)

    Abel, Fabian; Baldoni, Matteo; Baroglio, Cristina; Henze, Nicola; Kawase, Ricardo; Krause, Daniel; Patti, Viviana

    2010-04-01

    With the advent of Web 2.0, tagging became a popular feature in social media systems. People tag diverse kinds of content, e.g. products at Amazon, music at Last.fm, images at Flickr, etc. In recent years several researchers have analyzed the impact of tags on information retrieval; most works focused on tags only and ignored context information. In this article we present context-aware approaches for learning semantics and improving personalized information retrieval in tagging systems. We investigate how explorative search, initiated by clicking on tags, can be enhanced with automatically produced context information so that search results better fit the actual information needs of the users. We introduce the SocialHITS algorithm and present an experiment in which we compare different algorithms for ranking users, tags, and resources in a contextualized way. We showcase our approaches in the domain of images and present the TagMe! system, which enables users to explore and tag Flickr pictures. In TagMe! we further demonstrate how advanced context information can easily be generated: TagMe! allows users to attach tag assignments to a specific area within an image and to categorize tag assignments. In our corresponding evaluation we show that these additional facets of tag assignments carry valuable semantics, which can be applied to improve existing search and ranking algorithms significantly.

  20. Computer-based coding of free-text job descriptions to efficiently identify occupations in epidemiological studies.

    PubMed

    Russ, Daniel E; Ho, Kwan-Yuet; Colt, Joanne S; Armenti, Karla R; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P; Karagas, Margaret R; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T; Johnson, Calvin A; Friesen, Melissa C

    2016-06-01

    Mapping job titles to standardised occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiological studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14 983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in 2 occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. For 11 991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6-digit and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (κ 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiological studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
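    The scoring step can be sketched as a logistic combination of the per-classifier scores (a minimal sketch; the weights and SOC codes shown are illustrative, not the published empirical weights):

```python
import math

def soc_score(job, classifier_weights, bias=0.0):
    """Logistic combination of per-classifier scores (job title, task,
    industry) for one candidate SOC code, in the spirit of the
    SOCcer scoring model described above."""
    z = bias + sum(classifier_weights[name] * score
                   for name, score in job.items())
    return 1.0 / (1.0 + math.exp(-z))

def assign_soc(candidate_scores, weights):
    """Pick the highest-scoring SOC code; the score itself can flag
    low-confidence assignments for manual review."""
    best = max(candidate_scores,
               key=lambda soc: soc_score(candidate_scores[soc], weights))
    return best, soc_score(candidate_scores[best], weights)

weights = {"title": 2.0, "task": 1.0, "industry": 0.5}
candidates = {
    "47-2111": {"title": 0.9, "task": 0.7, "industry": 0.6},  # electrician
    "47-2031": {"title": 0.2, "task": 0.3, "industry": 0.6},  # carpenter
}
code, confidence = assign_soc(candidates, weights)
# → "47-2111" wins with the higher logistic score
```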

  1. Development of a job rotation scheduling algorithm for minimizing accumulated work load per body parts.

    PubMed

    Song, JooBong; Lee, Chaiwoo; Lee, WonJung; Bahn, Sangwoo; Jung, ChanJu; Yun, Myung Hwan

    2015-01-01

    For the successful implementation of job rotation, jobs should be scheduled systematically so that physical workload is evenly distributed across the various body parts used. However, while the potential benefits are widely recognized by research and industry, there is still a need for a more effective and efficient algorithm that considers multiple work-related factors in job rotation scheduling. This study suggests a job rotation algorithm that aims to minimize musculoskeletal disorders by decreasing the overall workload. Multiple work characteristics are evaluated as inputs to the proposed algorithm. Important factors, such as physical workload on specific body parts, working height, involvement of heavy lifting, and worker characteristics such as physical disorders, are included in the algorithm. For evaluation of the overall workload in a given workplace, an objective function was defined to aggregate the scores from the individual factors. A case study, where the algorithm was applied at a workplace, is presented with an examination of its applicability and effectiveness. With the application of the suggested algorithm in the case study, the value of the final objective function, the weighted sum of the workload across body parts, decreased by 71.7% compared to a typical sequential assignment and by 84.9% compared to a single job assignment, which is doing one job all day. The algorithm was developed using data from the ergonomic evaluation tool used in the plant and from known workload-related factors, so that it can be applied efficiently with a small number of required inputs while covering a wide range of work-related factors. The case study showed that the algorithm was beneficial in determining a job rotation schedule aimed at minimizing workload across body parts.
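    A greedy version of the balancing idea, rotating each worker onto the job that keeps their peak accumulated body-part load lowest, can be sketched as follows (all names and loads are illustrative; the paper's objective aggregates more factors than body-part load alone):

```python
def rotation_schedule(workers, jobs, workload, n_rounds):
    """Greedy sketch of workload-balancing job rotation.

    workload[j][p] is the load that job j places on body part p. Each
    round, every worker takes the remaining job that minimizes their
    maximum accumulated body-part load, so no single part is overused.
    """
    parts = next(iter(workload.values())).keys()
    accumulated = {w: {p: 0.0 for p in parts} for w in workers}
    schedule = {w: [] for w in workers}
    for _ in range(n_rounds):
        free_jobs = list(jobs)
        for w in workers:
            def peak_after(job):
                return max(accumulated[w][p] + workload[job][p]
                           for p in accumulated[w])
            job = min(free_jobs, key=peak_after)
            free_jobs.remove(job)
            schedule[w].append(job)
            for p in accumulated[w]:
                accumulated[w][p] += workload[job][p]
    return schedule

loads = {"lift": {"back": 3, "arms": 1},
         "sort": {"back": 1, "arms": 3}}
plan = rotation_schedule(["w1", "w2"], ["lift", "sort"], loads,
                         n_rounds=2)
# Each worker alternates jobs instead of repeating the heavier one.
```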

  2. Single-Sex Schools, Student Achievement, and Course Selection: Evidence from Rule-Based Student Assignments in Trinidad and Tobago. NBER Working Paper No. 16817

    ERIC Educational Resources Information Center

    Jackson, C. Kirabo

    2011-01-01

    Existing studies on single-sex schooling suffer from biases due to student selection to schools and single-sex schools being better in unmeasured ways. In Trinidad and Tobago students are assigned to secondary schools based on an algorithm allowing one to address self-selection bias and cleanly estimate an upper-bound single-sex school effect. The…

  3. Scheduling Jobs and a Variable Maintenance on a Single Machine with Common Due-Date Assignment

    PubMed Central

    Wan, Long

    2014-01-01

    We investigate a common due-date assignment scheduling problem with a variable maintenance on a single machine. The goal is to minimize the total earliness, tardiness, and due-date cost. We derive some properties on an optimal solution for our problem. For a special case with identical jobs we propose an optimal polynomial time algorithm followed by a numerical example. PMID:25147861

  4. On Maximizing the Lifetime of Wireless Sensor Networks by Optimally Assigning Energy Supplies

    PubMed Central

    Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; Gonzalez-Castaño, Francisco Javier

    2013-01-01

    The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSN networks. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively. PMID:23939582

  5. Hierarchical auto-configuration addressing in mobile ad hoc networks (HAAM)

    NASA Astrophysics Data System (ADS)

    Ram Srikumar, P.; Sumathy, S.

    2017-11-01

    Addressing plays a vital role in networking to identify devices uniquely. A device must be assigned a unique address in order to participate in data communication in any network. Protocols defining different types of addressing have been proposed in the literature. Address auto-configuration is a key requirement for self-organizing networks. Existing auto-configuration-based addressing protocols require broadcasting probes to all the nodes in the network before assigning a proper address to a new node, and then further broadcasts to reflect the status of the acquired address in the network. Such methods incur high communication overheads due to repetitive flooding. To address this overhead, a new partially stateful address allocation scheme, the Hierarchical Auto-configuration Addressing (HAAM) scheme, is extended and proposed. Hierarchical addressing reduces the latency and overhead incurred during address configuration. The partially stateful addressing algorithm assigns addresses without the need for flooding or global state awareness, which reduces the communication overhead and space complexity, respectively. Nodes are assigned addresses hierarchically, maintaining the graph of the network as a spanning tree, which effectively avoids the broadcast storm problem. The proposed HAAM algorithm handles network splits and merges efficiently in large-scale mobile ad hoc networks while incurring low communication overheads.
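    The flooding-free hierarchical allocation can be sketched with each parent handing out addresses from its own sub-block, so the address tree mirrors the spanning tree (a minimal sketch; class names and block sizes are illustrative assumptions, not the HAAM protocol format):

```python
class HierNode:
    """A node that owns a contiguous address block and assigns each
    joining child an address plus a sub-block from it. Allocation is
    purely local to the parent, so no network-wide flooding or global
    state is needed."""

    def __init__(self, address, block_start, block_size):
        self.address = address
        self.next_free = block_start
        self.block_end = block_start + block_size
        self.children = []

    def admit(self, child_block_size):
        """Allocate an address and a sub-block for a new child."""
        if self.next_free + child_block_size > self.block_end:
            raise RuntimeError("address block exhausted")
        child = HierNode(self.next_free, self.next_free + 1,
                         child_block_size - 1)
        self.next_free += child_block_size
        self.children.append(child)
        return child

root = HierNode(address=0, block_start=1, block_size=255)
a = root.admit(16)   # first child gets address 1, sub-block 2..16
b = root.admit(16)   # next child gets address 17
```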

  6. On maximizing the lifetime of Wireless Sensor Networks by optimally assigning energy supplies.

    PubMed

    Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; González-Castano, Francisco Javier

    2013-08-09

    The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSN networks. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively.

  7. QAPgrid: A Two Level QAP-Based Approach for Large-Scale Data Analysis and Visualization

    PubMed Central

    Inostroza-Ponta, Mario; Berretta, Regina; Moscato, Pablo

    2011-01-01

    Background The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain “hidden regularities” and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. Methodology/Principal Findings We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. Conclusions/Significance Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes.
Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics. PMID:21267077
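    The QAP objective at the heart of QAPgrid can be illustrated on a toy instance; the brute-force search below stands in for the paper's Memetic Algorithm, which is what makes realistically sized layouts tractable:

```python
from itertools import permutations

# The QAP objective behind the grid layout: objects with high pairwise
# similarity ("flow") should be assigned to grid positions with small
# mutual distance. Brute force over a 3-object toy instance stands in
# for the Memetic Algorithm needed at realistic sizes.

def qap_cost(perm, flow, dist):
    """Cost of assigning object i to position perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

flow = [[0, 5, 1],      # objects 0 and 1 are strongly related
        [5, 0, 1],
        [1, 1, 0]]
dist = [[0, 1, 2],      # three positions on a line
        [1, 0, 1],
        [2, 1, 0]]

best = min(permutations(range(3)), key=lambda p: qap_cost(p, flow, dist))
# Any optimal layout places the strongly related objects 0 and 1 on
# adjacent positions.
```

Minimizing this quadratic cost is NP-hard in general, which is why the paper resorts to a metaheuristic rather than exact search.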

  8. QAPgrid: a two level QAP-based approach for large-scale data analysis and visualization.

    PubMed

    Inostroza-Ponta, Mario; Berretta, Regina; Moscato, Pablo

    2011-01-18

    The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain "hidden regularities" and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes.
Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics.

  9. Two Different Approaches to Automated Mark Up of Emotions in Text

    NASA Astrophysics Data System (ADS)

    Francisco, Virginia; Hervás, Raquel; Gervás, Pablo

    This paper presents two different approaches to the automated marking up of texts with emotional labels. In the first approach, a corpus of example texts previously annotated by human evaluators is mined for an initial assignment of emotional features to words. This results in a List of Emotional Words (LEW), which becomes a useful resource for later automated mark up. The mark up algorithm in this first approach closely mirrors the steps taken during feature extraction, employing for the actual assignment of emotional features a combination of the LEW resource and WordNet for knowledge-based expansion of words not occurring in LEW. The algorithm is evaluated against new text samples to test its coverage. The second approach marks up texts during their generation, using a knowledge base that contains the necessary information, related to actions and characters, for marking up the text. The algorithm in this case employs the information in the knowledge base and decides the correct emotion for every sentence; it is tested against four different texts. The results of the two approaches are compared and discussed with respect to three main issues: the relative adequacy of each of the representations used, the correctness and coverage of the proposed algorithms, and additional techniques and solutions that may be employed to improve the results.

  10. Study of parameter identification using hybrid neural-genetic algorithm in electro-hydraulic servo system

    NASA Astrophysics Data System (ADS)

    Moon, Byung-Young

    2005-12-01

    A hybrid neural-genetic multi-model parameter estimation algorithm is demonstrated and applied to the structured system identification of an electro-hydraulic servo system. The algorithm consists of a recurrent incremental credit assignment (ICRA) neural network and a genetic algorithm: the ICRA neural network evaluates each member of a generation of models, and the genetic algorithm produces a new generation of models. To evaluate the proposed method, an electro-hydraulic servo system was designed and manufactured, and experiments were carried out. As a result, the dynamic characteristics were obtained, namely the parameters (mass, damping coefficient, bulk modulus, spring coefficient) that minimize the total squared error. The results of this study can be applied to hydraulic systems in industrial fields.

  11. Computer program documentation: ISOCLS iterative self-organizing clustering program, program C094

    NASA Technical Reports Server (NTRS)

    Minter, R. T. (Principal Investigator)

    1972-01-01

    The author has identified the following significant results. This program implements an algorithm which, ideally, sorts a given set of multivariate data points into similar groups or clusters. The program is intended for use in the evaluation of multispectral scanner data; however, the algorithm could be used for other data types as well. The user may specify a set of initial estimated cluster means to begin the procedure, or he may begin with the assumption that all the data belong to one cluster. The procedure is initialized by assigning each data point to the nearest (in absolute distance) cluster mean. If no initial cluster means were input, all of the data are assigned to cluster 1. The means and standard deviations are then calculated for each cluster.
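    The assignment step just described can be sketched as follows, using the absolute (L1) distance the record mentions; the split/merge heuristics of the full iterative ISOCLS procedure are omitted:

```python
# Sketch of the ISOCLS-style assignment step: each point joins the
# cluster whose mean is nearest in absolute (L1) distance, after which
# means are recomputed. The full program's iterative split/merge
# heuristics are omitted here.

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def assign_and_update(points, means):
    clusters = [[] for _ in means]
    for p in points:
        k = min(range(len(means)), key=lambda i: l1(p, means[i]))
        clusters[k].append(p)
    # Recompute each mean from its members; keep the old mean if empty.
    new_means = [tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
                 if cluster else mean
                 for cluster, mean in zip(clusters, means)]
    return clusters, new_means

points = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
clusters, means = assign_and_update(points, [(0.0, 0.0), (5.0, 5.0)])
```

Iterating this assign-then-update pair until the means stop moving is the core loop of such self-organizing clustering programs.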

  12. Customizing FP-growth algorithm to parallel mining with Charm++ library

    NASA Astrophysics Data System (ADS)

    Puścian, Marek

    2017-08-01

    This paper presents a frequent item mining algorithm that was customized to handle growing data repositories. The proposed solution applies a master-slave scheme to the frequent pattern growth technique. Efficient utilization of the available computation units is achieved by dynamic reallocation of tasks: conditional frequent pattern trees are assigned to parallel workers based on their workload. The proposed enhancements have been successfully implemented using the Charm++ library. This paper discusses the performance of the parallelized FP-growth algorithm against different datasets. The approach is illustrated with experiments and measurements performed on a multiprocessor, multithreaded computer.

  13. A Spiking Neural Network in sEMG Feature Extraction.

    PubMed

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-11-03

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons, in which a spiking neuron layer with mutual inhibition serves as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. Results showed nearly equal accuracy despite a significant difference in sampling rates. The proposed algorithm was successfully tested for mobile robot control.

  14. MATSurv: multisensor air traffic surveillance system

    NASA Astrophysics Data System (ADS)

    Yeddanapudi, Murali; Bar-Shalom, Yaakov; Pattipati, Krishna R.; Gassner, Richard R.

    1995-09-01

    This paper deals with the design and implementation of MATSurv 1--an experimental Multisensor Air Traffic Surveillance system. The proposed system consists of a Kalman filter based state estimator used in conjunction with a 2D sliding window assignment algorithm. Real data from two FAA radars is used to evaluate the performance of this algorithm. The results indicate that the proposed algorithm provides a superior classification of the measurements into tracks (i.e., the most likely aircraft trajectories) when compared to the aircraft trajectories obtained using the measurement IDs (squawk or IFF code).

  15. Integrated optimization of location assignment and sequencing in multi-shuttle automated storage and retrieval systems under modified 2n-command cycle pattern

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Peng, Yongfei; Ye, Bin; Miao, Lixin

    2017-09-01

    This article explores the integrated optimization problem of location assignment and sequencing in multi-shuttle automated storage/retrieval systems under the modified 2n-command cycle pattern. The decisions of storage and retrieval (S/R) location assignment and S/R request sequencing are jointly considered. An integer quadratic programming model is formulated to describe this integrated optimization problem; by solving the model, the optimal travel cycles for multi-shuttle S/R machines can be obtained to process the S/R requests in the storage and retrieval order lists. Small-sized instances are optimally solved using CPLEX. For large-sized problems, two tabu search algorithms are proposed, in which first come, first served and nearest neighbour rules are used to generate initial solutions. Various numerical experiments are conducted to examine the heuristics' performance and the sensitivity of the algorithm parameters. Furthermore, the experimental results are analysed from the viewpoint of practical application, and a parameter list for applying the proposed heuristics is recommended for different real-life scenarios.

  16. Determining the Effectiveness of Incorporating Geographic Information Into Vehicle Performance Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sera White

    2012-04-01

    This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (approximately 3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
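    The dominant-distance route-type assignment described above can be sketched as follows; the road-type names are hypothetical placeholders, not the actual ESRI StreetMap categories:

```python
# Sketch of the geographic route-type assignment described above: sum
# the distance traveled on each road type and label the whole trip by
# whichever type dominates. The road-type names are hypothetical, not
# the actual ESRI StreetMap categories.

def classify_trip(segments):
    """segments: (road_type, miles) pairs; returns 'highway' or 'city'."""
    totals = {}
    for road_type, miles in segments:
        totals[road_type] = totals.get(road_type, 0.0) + miles
    dominant = max(totals, key=totals.get)
    return "highway" if dominant == "highway" else "city"

trip = [("highway", 12.3), ("arterial", 2.1), ("residential", 1.0)]
label = classify_trip(trip)     # highway miles dominate this trip
```

The same classify-by-dominant-distance idea extends to any partition of road types into the two fuel-economy regimes.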

  17. Validation of an International Statistical Classification of Diseases and Related Health Problems 10th Revision Coding Algorithm for Hospital Encounters with Hypoglycemia.

    PubMed

    Hodge, Meryl C; Dixon, Stephanie; Garg, Amit X; Clemens, Kristin K

    2017-06-01

    To determine the positive predictive value and sensitivity of an International Statistical Classification of Diseases and Related Health Problems, 10th Revision, coding algorithm for hospital encounters concerning hypoglycemia. We carried out 2 retrospective studies in Ontario, Canada. We examined medical records from 2002 through 2014, in which older adults (mean age, 76) were assigned at least 1 code for hypoglycemia (E15, E160, E161, E162, E1063, E1163, E1363, E1463). The positive predictive value of the algorithm was calculated using a gold-standard definition (blood glucose value <4 mmol/L or physician diagnosis of hypoglycemia). To determine the algorithm's sensitivity, we used linked healthcare databases to identify older adults (mean age, 77) with laboratory plasma glucose values <4 mmol/L during a hospital encounter that took place between 2003 and 2011. We assessed how frequently a code for hypoglycemia was present. We also examined the algorithm's performance in differing clinical settings (e.g. inpatient vs. emergency department, by hypoglycemia severity). The positive predictive value of the algorithm was 94.0% (95% confidence interval 89.3% to 97.0%), and its sensitivity was 12.7% (95% confidence interval 11.9% to 13.5%). It performed better in the emergency department and in cases of more severe hypoglycemia (plasma glucose values <3.5 mmol/L compared with ≥3.5 mmol/L). Our hypoglycemia algorithm has a high positive predictive value but is limited in sensitivity. Although we can be confident that older adults who are assigned 1 of these codes truly had a hypoglycemia event, many episodes will not be captured by studies using administrative databases. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.
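    The two validation metrics reported above can be written out explicitly; the counts below are illustrative stand-ins, not the study's data:

```python
# The two validation metrics reported above, written out explicitly.
# The counts are illustrative stand-ins, not the study's data.

def ppv(true_pos, false_pos):
    """Of encounters the code flags, the fraction that truly had the event."""
    return true_pos / (true_pos + false_pos)

def sensitivity(true_pos, false_neg):
    """Of true events, the fraction the coding algorithm captures."""
    return true_pos / (true_pos + false_neg)

# A code with few false positives but many missed events shows exactly
# the pattern reported here: high PPV, low sensitivity.
high_ppv = ppv(94, 6)
low_sens = sensitivity(127, 873)
```

This is why the authors can trust flagged records individually while still warning that administrative databases miss most episodes.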

  18. Streaming fragment assignment for real-time analysis of sequencing experiments

    PubMed Central

    Roberts, Adam; Pachter, Lior

    2013-01-01

    We present eXpress, a software package for highly efficient probabilistic assignment of ambiguously mapping sequenced fragments. eXpress uses a streaming algorithm with linear run time and constant memory use. It can determine abundances of sequenced molecules in real time, and can be applied to ChIP-seq, metagenomics and other large-scale sequencing data. We demonstrate its use on RNA-seq data, showing greater efficiency than other quantification methods. PMID:23160280
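    A minimal sketch of streaming probabilistic assignment in this spirit; this toy version omits the likelihood model, bias correction, and forgetting factors used by the real eXpress software:

```python
# Toy sketch of streaming probabilistic fragment assignment: each
# ambiguously mapping fragment is split across its candidate targets in
# proportion to the current abundance estimates, which are updated
# online in a single pass with constant memory. The real eXpress
# algorithm adds a likelihood model, bias correction, and forgetting
# factors; none of that is reproduced here.

def stream_assign(fragments, targets):
    counts = {t: 1.0 for t in targets}        # pseudocount prior
    for candidates in fragments:              # one pass over the stream
        total = sum(counts[t] for t in candidates)
        for t in candidates:
            counts[t] += counts[t] / total    # fractional assignment
    norm = sum(counts.values())
    return {t: c / norm for t, c in counts.items()}

# Unambiguous fragments for target A pull the ambiguous A/B fragments
# toward A as A's running estimate grows.
abund = stream_assign(
    [("A",), ("A",), ("A", "B"), ("A", "B")],
    ["A", "B"],
)
```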

  19. FD/DAMA Scheme For Mobile/Satellite Communications

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee; Wang, Charles C.; Cheng, Unjeng; Rafferty, William; Dessouky, Khaled I.

    1992-01-01

    Integrated-Adaptive Mobile Access Protocol (I-AMAP) proposed to allocate communication channels to subscribers in first-generation MSAT-X mobile/satellite communication network. Based on concept of frequency-division/demand-assigned multiple access (FD/DAMA) where partition of available spectrum adapted to subscribers' demands for service. Requests processed, and competing requests resolved according to channel-access protocol, or free-access tree algorithm described in "Connection Protocol for Mobile/Satellite Communications" (NPO-17735). Assigned spectrum utilized efficiently.

  20. Ontological Problem-Solving Framework for Dynamically Configuring Sensor Systems and Algorithms

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The deployment of ubiquitous sensor systems and algorithms has led to many challenges, such as matching sensor systems to compatible algorithms which are capable of satisfying a task. Compounding the challenges is the lack of the requisite knowledge models needed to discover sensors and algorithms and to subsequently integrate their capabilities to satisfy a specific task. A novel ontological problem-solving framework has been designed to match sensors to compatible algorithms to form synthesized systems, which are capable of satisfying a task and then assigning the synthesized systems to high-level missions. The approach designed for the ontological problem-solving framework has been instantiated in the context of a persistent surveillance prototype environment, which includes profiling sensor systems and algorithms to demonstrate proof-of-concept principles. Even though the problem-solving approach was instantiated with profiling sensor systems and algorithms, the ontological framework may be useful with other heterogeneous sensing-system environments. PMID:22163793

  1. SPHINX--an algorithm for taxonomic binning of metagenomic sequences.

    PubMed

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S

    2011-01-01

    Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms are significantly higher. However, being alignment-based, the latter class of algorithms requires an enormous amount of time and computing resources for binning huge metagenomic datasets. The motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches, but nevertheless has the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX is able to analyze metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is observed to be comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.

  2. Advanced Techniques for Scene Analysis

    DTIC Science & Technology

    2010-06-01

    robustness prefers a bigger integration window to handle larger motions. The advantage of pyramidal implementation is that, while each motion vector dL...labeled SAR images. Now the previous algorithm leads to a more dedicated classifier for the particular target; however, our algorithm trades generality for...accuracy is traded for generality. 7.3.2 I-RELIEF Feature weighting transforms the original feature vector x into a new feature vector x′ by assigning each

  3. Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.

    PubMed

    Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling

    2015-11-01

    In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. For a given parent node on the visual tree, it contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, e.g., a plant image must first be assigned to a parent node (high-level non-leaf node) correctly if it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.

  4. The Method for Assigning Priority Levels (MAPLe): A new decision-support system for allocating home care resources

    PubMed Central

    Hirdes, John P; Poss, Jeff W; Curtin-Telegdi, Nancy

    2008-01-01

    Background Home care plays a vital role in many health care systems, but there is evidence that appropriate targeting strategies must be used to allocate limited home care resources effectively. The aim of the present study was to develop and validate a methodology for prioritizing access to community and facility-based services for home care clients. Methods Canadian and international data based on the Resident Assessment Instrument – Home Care (RAI-HC) were analyzed to identify predictors for nursing home placement, caregiver distress and for being rated as requiring alternative placement to improve outlook. Results The Method for Assigning Priority Levels (MAPLe) algorithm was a strong predictor of all three outcomes in the derivation sample. The algorithm was validated with additional data from five other countries, three other provinces, and an Ontario sample obtained after the use of the RAI-HC was mandated. Conclusion The MAPLe algorithm provides a psychometrically sound decision-support tool that may be used to inform choices related to allocation of home care resources and prioritization of clients needing community or facility-based services. PMID:18366782

  5. Fuzzy Classification of Ocean Color Satellite Data for Bio-optical Algorithm Constituent Retrievals

    NASA Technical Reports Server (NTRS)

    Campbell, Janet W.

    1998-01-01

    The ocean has been traditionally viewed as a two-class system. Morel and Prieur (1977) classified ocean water according to the dominant absorbent particle suspended in the water column. Case 1 is described as having a high concentration of phytoplankton (and detritus) relative to other particles. Conversely, case 2 is described as having inorganic particles such as suspended sediments in high concentrations. Little work has gone into the problem of mixing bio-optical models for these different water types. An approach is put forth here to blend bio-optical algorithms based on a fuzzy classification scheme. This scheme involves two procedures. First, a clustering procedure identifies classes and builds class statistics from in-situ optical measurements. Next, a classification procedure assigns satellite pixels partial memberships to these classes based on their ocean color reflectance signature. These membership assignments can be used as the basis for weighting retrievals from class-specific bio-optical algorithms. This technique is demonstrated with in-situ optical measurements and an image from the SeaWiFS ocean color satellite.

  6. Implementation of the Hungarian Algorithm to Account for Ligand Symmetry and Similarity in Structure-Based Design

    PubMed Central

    2015-01-01

    False negative docking outcomes for highly symmetric molecules are a barrier to the accurate evaluation of docking programs, scoring functions, and protocols. This work describes an implementation of a symmetry-corrected root-mean-square deviation (RMSD) method into the program DOCK based on the Hungarian algorithm for solving the minimum assignment problem, which dynamically assigns atom correspondence in molecules with symmetry. The algorithm adds only a trivial amount of computation time to the RMSD calculations and is shown to increase the reported overall docking success rate by approximately 5% when tested over 1043 receptor–ligand systems. For some families of protein systems the results are even more dramatic, with success rate increases up to 16.7%. Several additional applications of the method are also presented including as a pairwise similarity metric to compare molecules during de novo design, as a scoring function to rank-order virtual screening results, and for the analysis of trajectories from molecular dynamics simulation. The new method, including source code, is available to registered users of DOCK6 (http://dock.compbio.ucsf.edu). PMID:24410429
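    The minimum-assignment idea behind symmetry-corrected RMSD can be sketched as follows. DOCK's implementation uses the Hungarian algorithm; for a handful of interchangeable atoms, a brute-force search over permutations illustrates the same effect:

```python
from itertools import permutations
from math import sqrt

# Sketch of symmetry-corrected RMSD as a minimum assignment problem.
# DOCK's implementation uses the Hungarian algorithm; brute force over
# permutations (feasible only for a few equivalent atoms) shows the idea.

def rmsd(pose, ref, mapping):
    """RMSD of a pose against a reference under a given atom correspondence."""
    n = len(pose)
    sq = sum((pose[i][k] - ref[mapping[i]][k]) ** 2
             for i in range(n) for k in range(3))
    return sqrt(sq / n)

def symmetry_corrected_rmsd(pose, ref):
    """Minimum RMSD over all correspondences (atoms assumed interchangeable)."""
    return min(rmsd(pose, ref, p) for p in permutations(range(len(pose))))

# Two equivalent atoms swapped between pose and reference: the naive
# identity mapping reports a spurious deviation, the corrected value is 0.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
pose = [(1.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
naive = rmsd(pose, ref, (0, 1))
corrected = symmetry_corrected_rmsd(pose, ref)
```

In practice only chemically equivalent atoms are allowed to swap, and the Hungarian algorithm solves the assignment in polynomial time rather than by enumeration.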

  7. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991 the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, which success ratio would have been achieved in 53% of random trials with the null hypothesis.
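    The flavor of such a random-assignment null hypothesis can be sketched with a binomial model; the alarm-coverage probability used below is a made-up illustration, whereas the study derived its 2.87% figure from the algorithm's actual alarm coverage:

```python
from math import comb

# Sketch of the kind of random-assignment null hypothesis described
# above: if alarms covered a fraction p of space-time at random, how
# often would they catch at least k of n earthquakes by chance? The
# coverage value below is hypothetical, not the study's input.

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance of matching "eight of ten" under a hypothetical 40% coverage:
chance = prob_at_least(8, 10, 0.4)
```

A small tail probability under the null is what lets a retroactive success ratio count as evidence rather than luck.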

  8. Single Machine Scheduling and Due Date Assignment with Past-Sequence-Dependent Setup Time and Position-Dependent Processing Time

    PubMed Central

    Zhao, Chuan-Li; Hsu, Hua-Feng

    2014-01-01

    This paper considers single machine scheduling and due date assignment with setup time. The setup time is proportional to the length of the already processed jobs; that is, the setup time is past-sequence-dependent (p-s-d). It is assumed that a job's processing time depends on its position in a sequence. The objective functions include total earliness, the weighted number of tardy jobs, and the cost of due date assignment. We analyze these problems with two different due date assignment methods. We first consider the model with job-dependent position effects. For each case, by converting the problem to a series of assignment problems, we proved that the problems can be solved in O(n^4) time. For the model with job-independent position effects, we proved that the problems can be solved in O(n^3) time by providing a dynamic programming algorithm. PMID:25258727

  9. Single machine scheduling and due date assignment with past-sequence-dependent setup time and position-dependent processing time.

    PubMed

    Zhao, Chuan-Li; Hsu, Chou-Jung; Hsu, Hua-Feng

    2014-01-01

    This paper considers single machine scheduling and due date assignment with setup time. The setup time is proportional to the length of the already processed jobs; that is, the setup time is past-sequence-dependent (p-s-d). It is assumed that a job's processing time depends on its position in a sequence. The objective functions include total earliness, the weighted number of tardy jobs, and the cost of due date assignment. We analyze these problems with two different due date assignment methods. We first consider the model with job-dependent position effects. For each case, by converting the problem to a series of assignment problems, we proved that the problems can be solved in O(n^4) time. For the model with job-independent position effects, we proved that the problems can be solved in O(n^3) time by providing a dynamic programming algorithm.
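    The reduction to an assignment problem can be illustrated on a small example: when a job's cost depends only on the job and the position it occupies, sequencing is a job-to-position assignment. The position-dependent cost model below (a learning effect) is a common illustrative choice, not necessarily the paper's exact function, and brute force stands in for a proper assignment solver:

```python
from itertools import permutations

# Illustrative reduction of single-machine sequencing with
# position-dependent processing times to an assignment problem: the
# schedule cost is a sum of independent (job, position) costs, so an
# optimal sequence is an optimal job-to-position assignment. The cost
# model p_j * position**a (a learning effect) is a hypothetical
# illustration; brute force stands in for an assignment solver.

def best_sequence(proc_times, a=-0.2):
    """Job order minimizing total position-dependent processing time."""
    def total(order):
        return sum(proc_times[job] * (position + 1) ** a
                   for position, job in enumerate(order))
    return min(permutations(range(len(proc_times))), key=total)

# With a learning effect (a < 0) later positions are cheaper, so the
# longer jobs are pushed toward the back of the sequence.
order = best_sequence([3.0, 1.0, 2.0])
```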

  10. Fleet Assignment Using Collective Intelligence

    NASA Technical Reports Server (NTRS)

    Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.

    2004-01-01

    Airline fleet assignment involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of an agent-based integer optimization algorithm to a "cold start" fleet assignment problem. Results show that the optimizer can successfully solve such highly constrained problems (129 variables, 184 constraints).

  11. Cloud classification from satellite data using a fuzzy sets algorithm: A polar example

    NASA Technical Reports Server (NTRS)

    Key, J. R.; Maslanik, J. A.; Barry, R. G.

    1988-01-01

    Where spatial boundaries between phenomena are diffuse, classification methods which construct mutually exclusive clusters seem inappropriate. The Fuzzy c-means (FCM) algorithm assigns each observation to all clusters, with membership values as a function of distance to the cluster center. The FCM algorithm is applied to AVHRR data for the purpose of classifying polar clouds and surfaces. Careful analysis of the fuzzy sets can provide information on which spectral channels are best suited to the classification of particular features, and can help determine likely areas of misclassification. General agreement in the resulting classes and cloud fraction was found between the FCM algorithm, a manual classification, and an unsupervised maximum likelihood classifier.
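    The FCM membership rule referred to above can be sketched for a single observation (fuzzifier m = 2, a standard textbook choice):

```python
# The fuzzy c-means membership rule for one observation: membership in
# each cluster falls off with relative distance to that cluster's
# center, and memberships across clusters always sum to 1. The
# fuzzifier m = 2 is a standard textbook choice.

def fcm_memberships(x, centers, m=2.0):
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    d = [dist(x, c) for c in centers]
    if any(di == 0.0 for di in d):              # point sits on a center
        return [1.0 if di == 0.0 else 0.0 for di in d]
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** exp for j in range(len(d)))
            for i in range(len(d))]

# A pixel-like observation near the first center keeps a small partial
# membership in the second, rather than a hard class label.
u = fcm_memberships((0.1, 0.0), [(0.0, 0.0), (1.0, 0.0)])
```

It is exactly these graded memberships that make the method suitable for diffuse boundaries such as cloud edges.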

  12. Packets Distributing Evolutionary Algorithm Based on PSO for Ad Hoc Network

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Feng

    2018-03-01

    Wireless communication networks have limited bandwidth, changeable channels, and dynamic topology, so ad hoc networks face many difficulties in access control, bandwidth distribution, resource assignment, and congestion control. Therefore, a wireless packet-distributing evolutionary algorithm based on PSO (DPSO) for ad hoc networks is proposed. First, the parameters that impact network performance are analyzed to obtain an effective network-performance function. Second, the improved PSO evolutionary algorithm is used to solve the optimization problem, from local to global, in the process of distributing network packets. The simulation results show that the algorithm can ensure the fairness and timeliness of network transmission and improve the integrated utilization efficiency of ad hoc network resources.
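    The canonical PSO update that such schemes build on can be sketched as follows; the coefficients are common textbook defaults, not values from the paper, and the objective is a toy stand-in for the network-performance function:

```python
import random

# The canonical particle swarm update that packet-distribution schemes
# like the one above build on: each particle's velocity blends inertia,
# a pull toward its personal best, and a pull toward the swarm best.
# Coefficients are common textbook defaults, not the paper's values.

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random.random):
    new_vel = [w * v + c1 * rng() * (pb - x) + c2 * rng() * (gb - x)
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

def f(p):                       # toy objective: minimize sum of squares
    return sum(x * x for x in p)

random.seed(0)
particles = [([4.0], [0.0]), ([-3.0], [0.0])]   # (position, velocity)
pbest = [pos[:] for pos, _ in particles]        # personal bests
gbest = min(pbest, key=f)                       # swarm best
for _ in range(30):
    particles = [pso_step(pos, vel, pb, gbest)
                 for (pos, vel), pb in zip(particles, pbest)]
    for i, (pos, _) in enumerate(particles):
        if f(pos) < f(pbest[i]):
            pbest[i] = pos[:]
    gbest = min(pbest, key=f)
```

Since personal bests only ever improve, the swarm best is monotone non-increasing in objective value, which is the convergence property the DPSO scheme relies on.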

  13. Nonrigid synthetic aperture radar and optical image coregistration by combining local rigid transformations using a Kohonen network.

    PubMed

    Salehpour, Mehdi; Behrad, Alireza

    2017-10-01

    This study proposes a new algorithm for nonrigid coregistration of synthetic aperture radar (SAR) and optical images. The proposed algorithm employs point features extracted by the binary robust invariant scalable keypoints algorithm and a new method called weighted bidirectional matching for initial correspondence. To refine false matches, we assume that the transformation between SAR and optical images is locally rigid. This property is used to refine false matches by assigning scores to matched pairs and clustering local rigid transformations using a two-layer Kohonen network. Finally, the thin plate spline algorithm and mutual information are used for nonrigid coregistration of SAR and optical images.

  14. Use of apparent thickness for preprocessing of low-frequency electromagnetic data in inversion-based multibarrier evaluation workflow

    NASA Astrophysics Data System (ADS)

    Omar, Saad; Omeragic, Dzevat

    2018-04-01

    The concept of apparent thicknesses is introduced for the inversion-based, multicasing evaluation interpretation workflow using multifrequency and multispacing electromagnetic measurements. A thickness value is assigned to each measurement, enabling the development of two new preprocessing algorithms to remove casing collar artifacts. First, long-spacing apparent thicknesses are used to remove, from the pipe sections, artifacts ("ghosts") caused by the transmitter crossing a casing collar or corrosion. Second, a collar identification, localization, and assignment algorithm is developed to enable robust inversion in collar sections. Last, casing eccentering can also be identified on the basis of opposite deviation of short-spacing phase and magnitude apparent thicknesses from the nominal value. The proposed workflow can handle an arbitrary number of nested casings and has been validated on synthetic and field data.

  15. Mobile Robot Designed with Autonomous Navigation System

    NASA Astrophysics Data System (ADS)

    An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin

    2017-10-01

    With the rapid development of robot technology, robots appear in ever more aspects of daily life and social production, and more is demanded of them; one requirement is autonomous navigation, the ability to recognize the road. Take the common household sweeping robot as an example: it can avoid obstacles, clean the floor and automatically find its charging station. Another example is the AGV tracking car, which can follow a route and reach its destination successfully. This paper introduces a robot navigation scheme, SLAM, which builds a map of a completely unfamiliar environment while simultaneously locating the robot within it, thereby achieving autonomous navigation.

  16. Transmit Designs for the MIMO Broadcast Channel With Statistical CSI

    NASA Astrophysics Data System (ADS)

    Wu, Yongpeng; Jin, Shi; Gao, Xiqi; McKay, Matthew R.; Xiao, Chengshan

    2014-09-01

    We investigate the multiple-input multiple-output broadcast channel with statistical channel state information available at the transmitter. The so-called linear assignment operation is employed, and necessary conditions are derived for the optimal transmit design under general fading conditions. Based on this, we introduce an iterative algorithm to maximize the linear assignment weighted sum-rate by applying a gradient descent method. To reduce complexity, we derive an upper bound of the linear assignment achievable rate of each receiver, from which a simplified closed-form expression for a near-optimal linear assignment matrix is derived. This reveals an interesting construction analogous to that of dirty-paper coding. In light of this, a low complexity transmission scheme is provided. Numerical examples illustrate the significant performance of the proposed low complexity scheme.

  17. Frequency Assignments for HFDF Receivers in a Search and Rescue Network

    DTIC Science & Technology

    1990-03-01

    SAR problem where whether or not a signal is detected by RS or HFDF at the various stations is described by probabilities. Daskin assumes the... allows the problem to be formulated with a linear objective function (6:52-53). Daskin also developed a heuristic solution algorithm to solve this...

  18. The role of charity care and primary care physician assignment on ED use in homeless patients.

    PubMed

    Wang, Hao; Nejtek, Vicki A; Zieger, Dawn; Robinson, Richard D; Schrader, Chet D; Phariss, Chase; Ku, Jocelyn; Zenarosa, Nestor R

    2015-08-01

    Homeless patients are a vulnerable population with a higher incidence of using the emergency department (ED) for noncrisis care. Multiple charity programs target their outreach toward improving the health of homeless patients, but few data are available on the effectiveness of reducing ED recidivism. The aim of this study is to determine whether inappropriate ED use for nonemergency care may be reduced by providing charity insurance and assigning homeless patients to a primary care physician (PCP) in an outpatient clinic setting. A retrospective medical records review of homeless patients presenting to the ED and receiving treatment between July 2013 and June 2014 was completed. Appropriate vs inappropriate use of the ED was determined using the New York University ED Algorithm. The association between patients with charity care coverage, PCP assignment status, and appropriate vs inappropriate ED use was analyzed and compared. Following New York University ED Algorithm standards, 76% of all ED visits were deemed inappropriate with approximately 77% of homeless patients receiving charity care and 74% of patients with no insurance seeking noncrisis health care in the ED (P=.112). About 50% of inappropriate ED visits and 43.84% of appropriate ED visits occurred in patients with a PCP assignment (P=.019). Both charity care homeless patients and those without insurance coverage tend to use the ED for noncrisis care resulting in high rates of inappropriate ED use. Simply providing charity care and/or PCP assignment does not seem to sufficiently reduce inappropriate ED use in homeless patients. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms

    NASA Astrophysics Data System (ADS)

    Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.

    1997-09-01

    This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently. Solutions for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents in each were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
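The assignment formulation above can be sketched with an off-the-shelf solver (a stand-in, not the authors' implementation): the Hungarian method from SciPy applied to a large power of the distance matrix, which pushes the min-sum solution toward the minimax ("minimize the maximum travel") objective:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
robots = rng.uniform(0, 100, size=(100, 2))        # random initial positions
gx, gy = np.meshgrid(np.linspace(5, 95, 10), np.linspace(5, 95, 10))
slots = np.column_stack([gx.ravel(), gy.ravel()])  # 10x10 grid of guard slots

# cost[i, j] = Euclidean distance from robot i to grid slot j
cost = np.linalg.norm(robots[:, None, :] - slots[None, :, :], axis=2)

# Minimizing the sum of a large power of the distances approximates
# minimizing the maximum distance traveled by any single robot.
rows, cols = linear_sum_assignment(cost ** 8)
max_travel = cost[rows, cols].max()
```

An exact bottleneck assignment (binary search over a distance threshold plus a feasibility matching) would optimize the minimax objective directly; the power transform is a compact approximation.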

  20. An algorithm for the arithmetic classification of multilattices.

    PubMed

    Indelicato, Giuliana

    2013-01-01

    A procedure for the construction and the classification of monoatomic multilattices in arbitrary dimension is developed. The algorithm allows one to determine the location of the points of all monoatomic multilattices with a given symmetry, or to determine whether two assigned multilattices are arithmetically equivalent. This approach is based on ideas from integral matrix theory, in particular the reduction to the Smith normal form, and can be coded to provide a classification software package.
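For illustration (using SymPy, not the author's classification software), the Smith normal form reduction at the heart of the procedure can be computed directly; the matrix below is a standard textbook example, not one from the paper:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Integer matrix whose Smith normal form is diag(2, 6, 12): the result is
# diagonal with each entry dividing the next. These invariant factors are
# the kind of arithmetic data the classification procedure compares.
M = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, -4, -16]])
S = smith_normal_form(M, domain=ZZ)
```

Roughly speaking, two integer matrices related by unimodular row and column operations share the same invariant factors, which is what makes the reduction useful for deciding arithmetic equivalence.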

  1. A method of operation scheduling based on video transcoding for cluster equipment

    NASA Astrophysics Data System (ADS)

    Zhou, Haojie; Yan, Chun

    2018-04-01

    Real-time video transcoding clusters face massive growth in the number of video jobs and wide diversity in resolution and bit rate. This paper analyzes the characteristics of the mainstream task scheduling algorithms for real-time video transcoding clusters and, combining these with the characteristics of the cluster equipment, proposes a task delay scheduling algorithm. The algorithm enables the cluster to achieve better performance in generating and draining the job queue when operation instructions are received. Finally, a small real-time video transcoding cluster is constructed to analyze the computing capacity, running time, resource occupation and other aspects of the algorithms in job scheduling. The experimental results show that, compared with the traditional cluster task scheduling algorithm, the task delay scheduling algorithm is more flexible and efficient.

  2. Particle swarm optimization algorithm for optimizing assignment of blood in blood banking system.

    PubMed

    Olusanya, Micheal O; Arasomwan, Martins A; Adewumi, Aderemi O

    2015-01-01

    This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' transfusion requests. While the drive for blood donation lingers, there is a need for effective and efficient management of available blood in blood banking systems. Moreover, the inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to nonusage necessitate the development of mathematical models and techniques for effective handling of blood distribution among available blood types, in order to minimize wastage and importation from external sources. This gives rise to the blood assignment problem (BAP) introduced recently in the literature. We propose queue and multiple-knapsack models with a PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic the real-world population distribution of blood types. Results obtained show the efficiency of the proposed algorithm for BAP, with no blood units wasted and very low importation, where necessary, from outside the blood bank. The result can therefore serve as a benchmark and basis for decision support tools for real-life deployment.
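As a hedged sketch of the approach (the paper's actual model, data and parameters are not given in this record), a minimal binary PSO can be applied to a toy knapsack instance standing in for the blood assignment: each bit selects an item, and any selection exceeding capacity scores zero:

```python
import math
import random

def binary_pso(values, weights, capacity, n_particles=30, iters=200, seed=1):
    """Minimal binary PSO for a 0-1 knapsack standing in for the blood
    assignment problem: bits select items (blood units); infeasible
    selections (total weight over capacity) receive fitness zero."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(bits):
        w = sum(wi for b, wi in zip(bits, weights) if b)
        return sum(vi for b, vi in zip(bits, values) if b) if w <= capacity else 0

    X = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = max(X, key=fitness)[:]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * r1 * (pbest[i][d] - X[i][d])
                           + 1.5 * r2 * (gbest[d] - X[i][d]))
                prob = 1.0 / (1.0 + math.exp(-V[i][d]))  # sigmoid transfer
                X[i][d] = 1 if rng.random() < prob else 0
            if fitness(X[i]) > fitness(pbest[i]):
                pbest[i] = X[i][:]
            if fitness(X[i]) > fitness(gbest):
                gbest = X[i][:]
    return gbest, fitness(gbest)

# Toy instance: the optimum is value 30 (items 0, 1 and 2, total weight 10).
best, best_val = binary_pso([10, 5, 15, 7], [2, 3, 5, 7], capacity=10)
```

The inertia (0.7) and acceleration (1.5) coefficients are common textbook choices, not values from the paper.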

  3. Testing the accuracy of redshift-space group-finding algorithms

    NASA Astrophysics Data System (ADS)

    Frederic, James J.

    1995-04-01

    Using simulated redshift surveys generated from a high-resolution N-body cosmological structure simulation, we study algorithms used to identify groups of galaxies in redshift space. Two algorithms are investigated; both are friends-of-friends schemes with variable linking lengths in the radial and transverse dimensions. The chief difference between the algorithms is in the redshift linking length. The algorithm proposed by Huchra & Geller (1982) uses a generous linking length designed to find 'fingers of god,' while that of Nolthenius & White (1987) uses a smaller linking length to minimize contamination by projection. We find that neither of the algorithms studied is intrinsically superior to the other; rather, the ideal algorithm, as well as the ideal algorithm parameters, depends on the purpose for which groups are to be studied. The Huchra & Geller algorithm misses few real groups, at the cost of including some spurious groups and members, while the Nolthenius & White algorithm misses high velocity dispersion groups and members but is less likely to include interlopers in its group assignments. Adjusting the parameters of either algorithm results in a trade-off between group accuracy and completeness. In a companion paper we investigate the accuracy of virial mass estimates and clustering properties of groups identified using these algorithms.
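The friends-of-friends idea shared by both algorithms can be sketched as follows (a toy illustration with hypothetical linking lengths, not either paper's code): two galaxies are linked if they are close both on the sky and in velocity, and groups are the connected components of the links:

```python
def friends_of_friends(points, link_perp, link_par):
    """Toy redshift-space group finder. points: (x, y, v) tuples with
    projected sky position (x, y) and radial velocity v. Two galaxies are
    'friends' if close in BOTH transverse and radial (velocity) separation;
    groups are connected components of the friendship graph."""
    n = len(points)
    parent = list(range(n))                 # union-find structure

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        xi, yi, vi = points[i]
        for j in range(i + 1, n):
            xj, yj, vj = points[j]
            close_sky = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= link_perp
            close_vel = abs(vi - vj) <= link_par
            if close_sky and close_vel:
                parent[find(i)] = find(j)   # merge the two groups

    return [find(i) for i in range(n)]

# A generous velocity link (Huchra & Geller style) merges a "finger of god"
# that a small link (Nolthenius & White style) splits into pieces:
galaxies = [(0.0, 0.0, 0), (0.1, 0.0, 300), (0.1, 0.1, 900), (5.0, 5.0, 0)]
small = friends_of_friends(galaxies, link_perp=0.5, link_par=500)
large = friends_of_friends(galaxies, link_perp=0.5, link_par=1000)
```

With the small velocity link the high-velocity galaxy is excluded from the group; with the generous link all three close galaxies merge, mirroring the trade-off the abstract describes.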

  4. An Overview of Major Terrestrial, Celestial, and Temporal Coordinate Systems for Target Tracking

    DTIC Science & Technology

    2016-08-10

    interp and Subroutines) http://hpiers.obspm.fr/eop-pc/index.php?index=models General Software for Astronomy and Time Conversions The IAU’s Standards...of Fundamental Astronomy Software [146] http://www.iausofa.org Software for Optimal 2D Assignment An overview of 2D assignment algorithms; the... Astronomy (SOFA) library were used to change the epoch of the data. The points in red are at the epoch of the Hipparcos catalog (1994.25 TT), and 20

  5. Acute primary mesenteroaxial gastric volvulus in a 6 years old child; the contribution of ultrasonographic findings to the prompt diagnosis (a case report and review of the literature).

    PubMed

    Patoulias, Dimitrios; Rafailidis, Vasileios; Kalogirou, Maria; Farmakis, Konstantinos; Rafailidis, Dimitrios; Patoulias, Ioannis

    2017-01-01

    The aim of the present case study is to raise concern about the proper diagnostic approach to acute gastric volvulus (AGV), in which the key issue is timely diagnosis and prompt therapeutic intervention. After a thorough and systematic review of the current literature, it is concluded that early diagnosis remains challenging, while there is no relevant publication with emphasis on the contribution of ultrasonography to the diagnostic documentation of AGV. A 6-year-old boy was admitted to our Department due to repeated non-bilious vomiting and food refusal during the 72 hours before admission. Physical examination revealed the presence of a spherical, painful mass in the epigastrium, which did not recede after placement of a nasogastric tube. Abdominal radiography showed a large gastric air bubble. Ultrasonography highlighted a distended and fluid-filled stomach, displaced cephalad relative to the esophagus, and a pylorus pointing downward in a craniocaudal orientation. A subsequent barium meal examination confirmed the diagnosis of gastric volvulus. The patient underwent an urgent exploratory laparotomy, revealing acute mesenteroaxial gastric volvulus with a serosal ecchymosis along the greater curvature. After reduction of the volvulus, a thorough intraoperative investigation for an underlying cause followed; laxity of the stomach's ligaments was documented. Fixation of the stomach's fundus to the diaphragm and anterior gastropexy were then performed. The postoperative period was uneventful and the patient was discharged home on the 4th postoperative day. In conclusion, we believe that ultrasonography plays a significant role in the diagnostic approach to acute gastric volvulus, as it can detect findings suggestive of the diagnosis. Once the diagnosis is suspected on ultrasonography, contrast studies should be performed without further delay in order to confirm it.

  6. A Genetic Algorithm Approach to Door Assignment in Breakbulk Terminals

    DOT National Transportation Integrated Search

    2001-08-23

    Commercial vehicle regulation and enforcement is a necessary and important function of state governments. Through regulation, states promote highway safety, ensure that motor carriers have the proper licenses and operating permits, and collect taxes ...

  7. DESIGNING PROCESSES FOR ENVIRONMENTAL PROBLEMS

    EPA Science Inventory

    Designing for the environment requires consideration of environmental impacts. The Generalized WAR Algorithm is the methodology that allows the user to evaluate the potential environmental impact of the design of a chemical process. In this methodology, chemicals are assigned val...

  8. Virtual optical network mapping and core allocation in elastic optical networks using multi-core fibers

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-11-01

    Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces challenges such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical network uses multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping and core allocation. Simulation experiments are conducted on three widely used networks, and the results show the effectiveness of the proposed model and algorithm.

  9. Constant Communities in Complex Networks

    NASA Astrophysics Data System (ADS)

    Chakraborty, Tanmoy; Srinivasan, Sriram; Ganguly, Niloy; Bhowmick, Sanjukta; Mukherjee, Animesh

    2013-05-01

    Identifying community structure is a fundamental problem in network analysis. Most community detection algorithms are based on optimizing a combinatorial parameter, for example modularity. This optimization is generally NP-hard, so merely changing the vertex order can alter the assignment of vertices to communities. However, there has been little study of how vertex ordering influences the results of community detection algorithms. Here we identify and study the properties of invariant groups of vertices (constant communities) whose assignment to communities is, quite remarkably, not affected by vertex ordering. The percentage of constant communities can vary across different applications, and based on empirical results we propose metrics to evaluate these communities. Using constant communities as a pre-processing step, one can significantly reduce the variation of the results. Finally, we present a case study on a phoneme network and illustrate that constant communities, quite strikingly, form the core functional units of the larger communities.

  10. Performance tradeoffs in static and dynamic load balancing strategies

    NASA Technical Reports Server (NTRS)

    Iqbal, M. A.; Saltz, J. H.; Bokhart, S. H.

    1986-01-01

    The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
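The flavor of the static strategies can be illustrated with the classic longest-processing-time greedy rule (a simplified stand-in; the paper's greedy algorithm is a fully polynomial approximation scheme with accuracy guarantees):

```python
import heapq

def greedy_assign(task_times, n_procs):
    """Greedy static load balancing (longest-processing-time rule):
    sort tasks by decreasing cost, then always hand the next task to
    the currently least-loaded processor."""
    loads = [(0.0, p) for p in range(n_procs)]  # (current load, processor id)
    heapq.heapify(loads)
    assignment = {}
    for task, cost in sorted(enumerate(task_times), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(loads)          # least-loaded processor
        assignment[task] = p
        heapq.heappush(loads, (load + cost, p))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

assignment, makespan = greedy_assign([4, 3, 3, 2], n_procs=2)  # makespan 6
```

The heuristic is fast but not guaranteed optimal; like the strategies compared in the paper, it trades solution quality against scheduling overhead.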

  11. Random synaptic feedback weights support error backpropagation for deep learning

    NASA Astrophysics Data System (ADS)

    Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.

    2016-11-01

    The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
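The mechanism is simple enough to sketch in a few lines (an illustrative reimplementation on a toy linear-target task, not the authors' experiments): the forward weights learn even though the hidden-layer error is propagated through a fixed random matrix B rather than the transpose of the forward weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network learning a random linear map T. The output layer
# uses the ordinary delta rule, but the hidden-layer error is routed through
# a FIXED random matrix B instead of W2.T (feedback alignment).
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
B = rng.normal(0.0, 0.5, (n_hid, n_out))   # random feedback weights
T = rng.normal(0.0, 1.0, (n_out, n_in))    # target mapping to learn

lr, errs = 0.02, []
for step in range(3000):
    x = rng.normal(0.0, 1.0, (n_in, 1))
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - T @ x                          # output error
    errs.append(float((e ** 2).mean()))
    W2 -= lr * e @ h.T                     # delta rule at the output
    dh = (B @ e) * (1.0 - h ** 2)          # error sent back via B, not W2.T
    W1 -= lr * dh @ x.T
```

Over training, the error should fall substantially even though no weight transport occurs, which is the paper's central claim.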

  12. Hierarchical planning for a surface mounting machine placement.

    PubMed

    Zeng, You-jiao; Ma, Deng-ze; Jin, Ye; Yan, Jun-qi

    2004-11-01

    For a surface mounting machine (SMM) in a printed circuit board (PCB) assembly line, there are four problems: CAD data conversion, nozzle selection, feeder assignment and placement sequence determination. A hierarchical planning approach to these problems, aimed at maximizing the throughput rate of an SMM, is presented here. To minimize set-up time, a CAD data conversion system was first applied that could automatically generate the data for machine placement from CAD design data files. Then an effective nozzle selection approach was implemented to minimize the time spent changing nozzles. Next, to minimize picking time, an algorithm for feeder assignment was used so that multiple components could be picked simultaneously as often as possible. Finally, in order to shorten pick-and-place time, a heuristic algorithm was used to determine the optimal component placement sequence according to the decided feeder positions. Experiments were conducted on a four-head SMM, and the results were used to analyse the assembly line performance.

  13. Random synaptic feedback weights support error backpropagation for deep learning

    PubMed Central

    Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.

    2016-01-01

    The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning. PMID:27824044

  14. Community Detection on the GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naim, Md; Manne, Fredrik; Halappanavar, Mahantesh

    We present and evaluate a new GPU algorithm based on the Louvain method for community detection. Our algorithm is the first for this problem that parallelizes the access to individual edges. In this way we can fine tune the load balance when processing networks with nodes of highly varying degrees. This is achieved by scaling the number of threads assigned to each node according to its degree. Extensive experiments show that we obtain speedups up to a factor of 270 compared to the sequential algorithm. The algorithm consistently outperforms other recent shared memory implementations and is only one order of magnitude slower than the current fastest parallel Louvain method running on a Blue Gene/Q supercomputer using more than 500K threads.

  15. DNATCO: assignment of DNA conformers at dnatco.org.

    PubMed

    Černý, Jiří; Božíková, Paulína; Schneider, Bohdan

    2016-07-08

    The web service DNATCO (dnatco.org) classifies local conformations of DNA molecules beyond their traditional sorting into the A, B and Z DNA forms. DNATCO provides an interface to robust algorithms that assign conformation classes, called NTC, to dinucleotides extracted from DNA-containing structures uploaded in PDB format version 3.1 or above. The assigned dinucleotide NTC classes are further grouped into the DNA structural alphabet NTA, to the best of our knowledge the first DNA structural alphabet. The results are presented at two levels: as a user-friendly visualization and analysis of the assignment, and as a downloadable, more detailed table for further offline analysis. The website is free and open to all users and there is no login requirement. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. A Novel Dynamic Physical Layer Impairment-Aware Routing and Wavelength Assignment (PLI-RWA) Algorithm for Mixed Line Rate (MLR) Wavelength Division Multiplexed (WDM) Optical Networks

    NASA Astrophysics Data System (ADS)

    Iyer, Sridhar

    2016-12-01

    The ever-increasing global Internet traffic will inevitably lead to a serious upgrade of the current optical networks' capacity. The legacy infrastructure can be enhanced not only by increasing the capacity but also by adopting advanced modulation formats with increased spectral efficiency at higher data rates. In a transparent mixed-line-rate (MLR) optical network, different line rates, on different wavelengths, can coexist on the same fiber. Migration to data rates higher than 10 Gbps requires the implementation of phase modulation schemes. However, the co-existing on-off keying (OOK) channels cause critical physical layer impairments (PLIs) to the phase-modulated channels, mainly through cross-phase modulation (XPM), which in turn limits the network's performance. In order to mitigate this effect, a more sophisticated PLI-aware routing and wavelength assignment (PLI-RWA) scheme needs to be adopted. In this paper, we investigate the critical impairment for each data rate and the way it affects the quality of transmission (QoT). In view of the aforementioned, we present a novel dynamic PLI-RWA algorithm for MLR optical networks. The proposed algorithm is compared through simulations with the shortest-path and minimum-hop routing schemes. The simulation results show that the performance of the proposed algorithm is better than that of the existing schemes.

  17. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.

  18. Towards Automated Structure-Based NMR Resonance Assignment

    NASA Astrophysics Data System (ADS)

    Jang, Richard; Gao, Xin; Li, Ming

    We propose a general framework for solving the structure-based NMR backbone resonance assignment problem. The core is a novel 0-1 integer programming model that can start from a complete or partial assignment, generate multiple assignments, and model not only the assignment of spins to residues, but also pairwise dependencies consisting of pairs of spins to pairs of residues. It is still a challenge for automated resonance assignment systems to perform the assignment directly from spectra without any manual intervention. To test the feasibility of this for structure-based assignment, we integrated our system with our automated peak picking and sequence-based resonance assignment system to obtain an assignment for the protein TM1112 with 91% recall and 99% precision without manual intervention. Since using a known structure has the potential to allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data, we work towards the goal of automated structure-based assignment using only such labeled data. Our system reduced the assignment error of Xiong-Pandurangan-Bailey-Kellogg's contact replacement (CR) method, which to our knowledge is the most error-tolerant method for this problem, by 5 folds on average. By using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for Ubiquitin, where the type prediction accuracy is 83%, we achieved 91% assignment accuracy, compared to the 59% accuracy that was obtained without correcting for typing errors.

  19. Automated and assisted RNA resonance assignment using NMR chemical shift statistics

    PubMed Central

    Aeschbacher, Thomas; Schmidt, Elena; Blatter, Markus; Maris, Christophe; Duss, Olivier; Allain, Frédéric H.-T.; Güntert, Peter; Schubert, Mario

    2013-01-01

    The three-dimensional structure determination of RNAs by NMR spectroscopy relies on chemical shift assignment, which still constitutes a bottleneck. In order to develop more efficient assignment strategies, we analysed relationships between sequence and 1H and 13C chemical shifts. Statistics of resonances from regularly Watson–Crick base-paired RNA revealed highly characteristic chemical shift clusters. We developed two approaches using these statistics for chemical shift assignment of double-stranded RNA (dsRNA): a manual approach that yields starting points for resonance assignment and simplifies decision trees and an automated approach based on the recently introduced automated resonance assignment algorithm FLYA. Both strategies require only unlabeled RNAs and three 2D spectra for assigning the H2/C2, H5/C5, H6/C6, H8/C8 and H1′/C1′ chemical shifts. The manual approach proved to be efficient and robust when applied to the experimental data of RNAs with a size between 20 nt and 42 nt. The more advanced automated assignment approach was successfully applied to four stem-loop RNAs and a 42 nt siRNA, assigning 92–100% of the resonances from dsRNA regions correctly. This is the first automated approach for chemical shift assignment of non-exchangeable protons of RNA and their corresponding 13C resonances, which provides an important step toward automated structure determination of RNAs. PMID:23921634

  20. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition [OCR], document classification and barcode reading, to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
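    The by-region strategy in (2) can be sketched in a few lines: split an image into horizontal strips, run the same analysis on each strip in a worker pool, and reduce the partial results. A minimal, hedged illustration (the image, the pixel-counting task, and the pool size are invented for the example; a production system would use processes or a map-reduce cluster rather than threads):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def count_dark_pixels(region, threshold=128):
        """Per-region task: count pixels below a brightness threshold."""
        return sum(1 for row in region for px in row if px < threshold)

    def split_rows(image, n_bands):
        """Divide an image (a list of pixel rows) into horizontal strips."""
        band = max(1, len(image) // n_bands)
        return [image[i:i + band] for i in range(0, len(image), band)]

    def parallel_count(image, n_workers=4):
        """Map the same task over each strip in parallel, then reduce."""
        regions = split_rows(image, n_workers)
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            partials = list(pool.map(count_dark_pixels, regions))
        return sum(partials)  # the "reduce" step
    ```

    Because the strips are independent, the result is identical to the sequential computation over the whole image.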

  1. A model of cloud application assignments in software-defined storages

    NASA Astrophysics Data System (ADS)

    Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; Shukhman, Alexander E.

    2017-01-01

The aim of this study is to analyze the structure and mechanisms of interaction of typical cloud applications and to suggest approaches to optimizing their placement in storage systems. In this paper, we describe a generalized model of cloud applications comprising three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources both from the user's point of view and from the point of view of the software-defined infrastructure of the virtual data center (DC). The innovative character of this model lies in describing, at the same time, the placement of application data and the state of the virtual environment, taking the network topology into account. A model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing an algorithm for controlling cloud application assignments in software-defined storages. Experiments showed that this algorithm decreases cloud application response time and increases performance in processing user requests. The use of software-defined data storage also decreases the number of physical storage devices required, which demonstrates the efficiency of our algorithm.

  2. A new mutually reinforcing network node and link ranking algorithm

    PubMed Central

    Wang, Zhenghua; Dueñas-Osorio, Leonardo; Padgett, Jamie E.

    2015-01-01

    This study proposes a novel Normalized Wide network Ranking algorithm (NWRank) that has the advantage of ranking nodes and links of a network simultaneously. This algorithm combines the mutual reinforcement feature of Hypertext Induced Topic Selection (HITS) and the weight normalization feature of PageRank. Relative weights are assigned to links based on the degree of the adjacent neighbors and the Betweenness Centrality instead of assigning the same weight to every link as assumed in PageRank. Numerical experiment results show that NWRank performs consistently better than HITS, PageRank, eigenvector centrality, and edge betweenness from the perspective of network connectivity and approximate network flow, which is also supported by comparisons with the expensive N-1 benchmark removal criteria based on network efficiency. Furthermore, it can avoid some problems, such as the Tightly Knit Community effect, which exists in HITS. NWRank provides a new inexpensive way to rank nodes and links of a network, which has practical applications, particularly to prioritize resource allocation for upgrade of hierarchical and distributed networks, as well as to support decision making in the design of networks, where node and link importance depend on a balance of local and global integrity. PMID:26492958
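    The mutual-reinforcement idea that NWRank borrows from HITS is compact enough to sketch. The following toy implementation is plain HITS with score normalization on an unweighted directed graph, not NWRank itself (the paper's link weighting by neighbor degree and Betweenness Centrality is omitted):

    ```python
    def hits(adj, n_iter=50):
        """adj maps each node to the list of nodes it links to."""
        nodes = sorted(set(adj) | {v for vs in adj.values() for v in vs})
        hub = dict.fromkeys(nodes, 1.0)
        auth = dict.fromkeys(nodes, 1.0)
        for _ in range(n_iter):
            # a node's authority is reinforced by the hub scores of its in-links
            auth = {n: sum(hub[u] for u in nodes if n in adj.get(u, ())) for n in nodes}
            norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
            auth = {n: a / norm for n, a in auth.items()}
            # a node's hub score is reinforced by the authority of its out-links
            hub = {n: sum(auth[v] for v in adj.get(n, ())) for n in nodes}
            norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
            hub = {n: h / norm for n, h in hub.items()}
        return hub, auth
    ```

    On a star graph where a, b, and c all link to d, the iteration concentrates authority on d and hub score on its pointers, which is the reinforcement effect NWRank extends to links as well as nodes.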

  3. A Comparison of Several Techniques to Assign Heights to Cloud Tracers.

    NASA Astrophysics Data System (ADS)

    Nieman, Steven J.; Schmetz, Johannes; Menzel, W. Paul

    1993-09-01

Satellite-derived cloud-motion vector (CMV) production has been troubled by inaccurate height assignment of cloud tracers, especially in thin semitransparent clouds. This paper presents the results of an intercomparison of current operational height assignment techniques. Currently, heights are assigned by one of three techniques when the appropriate spectral radiance measurements are available. The infrared window (IRW) technique compares measured brightness temperatures to forecast temperature profiles and thus infers opaque cloud levels. In semitransparent or small subpixel clouds, the carbon dioxide (CO2) technique uses the ratio of radiances from different layers of the atmosphere to infer the correct cloud height. In the water vapor (H2O) technique, radiances influenced by upper-tropospheric moisture and IRW radiances are measured for several pixels viewing different cloud amounts, and their linear relationship is used to extrapolate the correct cloud height. The results presented in this paper suggest that the H2O technique is a viable alternative to the CO2 technique for inferring the heights of semitransparent cloud elements. This is important since future National Environmental Satellite, Data, and Information Service (NESDIS) operations will have to rely on H2O-derived cloud-height assignments in the wind field determinations with the next operational geostationary satellite. On a given day, the heights from the two approaches agree to within 60-110 hPa rms; drier atmospheric conditions tend to reduce the effectiveness of the H2O technique. By inference one can conclude that the present height algorithms used operationally at NESDIS (with the CO2 technique) and at the European Satellite Operations Center (ESOC) (with their version of the H2O technique) are providing similar results. Sample wind fields produced with the ESOC and NESDIS algorithms using Meteosat-4 data show good agreement.

  4. Storage assignment optimization in a multi-tier shuttle warehousing system

    NASA Astrophysics Data System (ADS)

    Wang, Yanyan; Mou, Shandong; Wu, Yaohua

    2016-03-01

The current mathematical models for the storage assignment problem are generally established based on the traveling salesman problem (TSP), which has been widely applied in the conventional automated storage and retrieval system (AS/RS). However, the previous mathematical models for conventional AS/RS do not match multi-tier shuttle warehousing systems (MSWS), because the characteristics of parallel retrieval in multiple tiers and progressive vertical movement undermine the foundation of the TSP. In this study, a two-stage open queuing network model, in which shuttles and a lift are regarded as servers at different stages, is proposed to analyze system performance in terms of shuttle waiting period (SWP) and lift idle period (LIP) during the transaction cycle time. A mean arrival time difference matrix for pairwise stock keeping units (SKUs) is presented to determine the mean waiting time and queue length to optimize the storage assignment problem on the basis of SKU correlation. The decomposition method is applied to analyze the interactions among outbound task time, SWP, and LIP. An ant colony clustering algorithm is designed to determine storage partitions by clustering items. In addition, goods are assigned for storage according to the rearranging permutation and combination of storage partitions in a 2D plane; this combination is derived from the analysis results of the queuing network model and three basic principles. The storage assignment method and its entire optimization algorithm as applied in an MSWS are verified through a practical engineering project in the tobacco industry. The results show that total SWP and LIP can be reduced effectively, improving the utilization rates of all devices and increasing the throughput of the distribution center.

  5. Automated mapping of pharmacy orders from two electronic health record systems to RxNorm within the STRIDE clinical data warehouse.

    PubMed

    Hernandez, Penni; Podchiyska, Tanya; Weber, Susan; Ferris, Todd; Lowe, Henry

    2009-11-14

    The Stanford Translational Research Integrated Database Environment (STRIDE) clinical data warehouse integrates medication information from two Stanford hospitals that use different drug representation systems. To merge this pharmacy data into a single, standards-based model supporting research we developed an algorithm to map HL7 pharmacy orders to RxNorm concepts. A formal evaluation of this algorithm on 1.5 million pharmacy orders showed that the system could accurately assign pharmacy orders in over 96% of cases. This paper describes the algorithm and discusses some of the causes of failures in mapping to RxNorm.

  6. Using recurrence plot analysis for software execution interpretation and fault detection

    NASA Astrophysics Data System (ADS)

    Mosdorf, M.

    2015-09-01

This paper presents a method for software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. The results of this analysis are then processed with the PCA (Principal Component Analysis) method, which reduces the number of coefficients used for software execution classification. The method was used for the analysis of five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, and SHA-1. Results show that some of the collected traces could be easily assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms) while others are more difficult to distinguish.
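    The recurrence-plot step itself is simple to state: mark every pair of points in the trace whose states fall within a tolerance of each other, then summarize the matrix with scalar coefficients for a downstream classifier. A minimal sketch on a scalar series (a real trace would use embedded vectors of instruction features; the series and tolerance below are invented):

    ```python
    def recurrence_matrix(series, eps):
        """R[i][j] = 1 when states i and j are within eps of each other."""
        n = len(series)
        return [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
                for i in range(n)]

    def recurrence_rate(R):
        """Fraction of recurrent points: one simple coefficient that could
        feed a downstream dimensionality reduction such as PCA."""
        n = len(R)
        return sum(map(sum, R)) / (n * n)
    ```

    The diagonal is always recurrent (every state matches itself), and block structure off the diagonal reveals repeated program phases.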

  7. Online ranking by projecting.

    PubMed

    Crammer, Koby; Singer, Yoram

    2005-01-01

We discuss the problem of ranking instances. In our framework, each instance is associated with a rank or a rating, which is an integer from 1 to k. Our goal is to find a rank-prediction rule that assigns each instance a rank that is as close as possible to the instance's true rank. We discuss a group of closely related online algorithms, analyze their performance in the mistake-bound model, and prove their correctness. We describe two sets of experiments, with synthetic data and with the EachMovie data set for collaborative filtering. In the experiments we performed, our algorithms outperform online algorithms for regression and classification applied to ranking.
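    A well-known member of this family is the PRank rule: a perceptron with k-1 ordered thresholds, updated only when a threshold is violated. The following is a sketch of that style of update from memory, not necessarily the exact variant analyzed in the paper; the one-dimensional training data in the usage note are invented:

    ```python
    def prank_init(dim, k):
        # weight vector plus k-1 ordered thresholds (the k-th is implicitly +inf)
        return [0.0] * dim, [0.0] * (k - 1)

    def prank_predict(w, b, x):
        """Predicted rank: first threshold the score fails to clear."""
        score = sum(wi * xi for wi, xi in zip(w, x))
        for r, br in enumerate(b):
            if score < br:
                return r + 1
        return len(b) + 1

    def prank_update(w, b, x, y):
        """Mistake-driven update toward the true rank y in 1..k."""
        score = sum(wi * xi for wi, xi in zip(w, x))
        tau = 0
        for r, br in enumerate(b):
            yr = 1 if y > r + 1 else -1      # should the score lie above threshold r?
            if (score - br) * yr <= 0:       # threshold violated
                tau += yr
                b[r] -= yr
        if tau:
            for i, xi in enumerate(x):
                w[i] += tau * xi
        return w, b
    ```

    After a pass over two well-separated examples such as ([1.0], rank 3) and ([-1.0], rank 1), the rule ranks new points on the correct side of the learned thresholds.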

  8. Computing Role Assignments of Proper Interval Graphs in Polynomial Time

    NASA Astrophysics Data System (ADS)

    Heggernes, Pinar; van't Hof, Pim; Paulusma, Daniël

    A homomorphism from a graph G to a graph R is locally surjective if its restriction to the neighborhood of each vertex of G is surjective. Such a homomorphism is also called an R-role assignment of G. Role assignments have applications in distributed computing, social network theory, and topological graph theory. The Role Assignment problem has as input a pair of graphs (G,R) and asks whether G has an R-role assignment. This problem is NP-complete already on input pairs (G,R) where R is a path on three vertices. So far, the only known non-trivial tractable case consists of input pairs (G,R) where G is a tree. We present a polynomial time algorithm that solves Role Assignment on all input pairs (G,R) where G is a proper interval graph. Thus we identify the first graph class other than trees on which the problem is tractable. As a complementary result, we show that the problem is Graph Isomorphism-hard on chordal graphs, a superclass of proper interval graphs and trees.
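    The definition can be turned into a tiny brute-force checker for intuition (exponential in |V(G)|, nothing like the paper's polynomial algorithm for proper interval graphs). Graphs are adjacency dicts; the example instances in the usage note are invented:

    ```python
    from itertools import product

    def is_role_assignment(G, R, phi):
        """phi maps V(G) -> V(R).  Locally surjective homomorphism:
        the image of every G-neighbourhood equals the R-neighbourhood."""
        return all({phi[v] for v in nbrs} == set(R[phi[u]])
                   for u, nbrs in G.items())

    def find_role_assignment(G, R):
        """Try every map from V(G) to V(R); feasible only for tiny graphs."""
        gv, rv = sorted(G), sorted(R)
        for choice in product(rv, repeat=len(gv)):
            phi = dict(zip(gv, choice))
            if is_role_assignment(G, R, phi):
                return phi
        return None
    ```

    For example, the 4-cycle admits a role assignment to the path on three vertices (map opposite vertices to the endpoints and the other two to the middle), whereas the triangle does not.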

  9. Post-processing techniques to enhance reliability of assignment algorithm based performance measures.

    DOT National Transportation Integrated Search

    2011-01-01

    This study develops an enhanced transportation planning framework by augmenting the sequential four-step : planning process with post-processing techniques. The post-processing techniques are incorporated through a feedback : mechanism and aim to imp...

  10. Finding minimum-quotient cuts in planar graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J.K.; Phillips, C.A.

Given a graph G = (V, E) where each vertex v ∈ V is assigned a weight w(v) and each edge e ∈ E is assigned a cost c(e), the quotient of a cut partitioning the vertices of V into sets S and S̄ is c(S, S̄)/min{w(S), w(S̄)}, where c(S, S̄) is the sum of the costs of the edges crossing the cut and w(S) and w(S̄) are the sums of the weights of the vertices in S and S̄, respectively. The problem of finding a cut whose quotient is minimum for a graph has in recent years attracted considerable attention, due in large part to the work of Rao and of Leighton and Rao. They have shown that an algorithm (exact or approximation) for the minimum-quotient-cut problem can be used to obtain an approximation algorithm for the more famous minimum b-balanced-cut problem, which requires finding a cut (S, S̄) minimizing c(S, S̄) subject to the constraint bW ≤ w(S) ≤ (1 − b)W, where W is the total vertex weight and b is some fixed balance in the range 0 < b ≤ 1/2. Unfortunately, the minimum-quotient-cut problem is strongly NP-hard for general graphs, and the best polynomial-time approximation algorithm known for the general problem guarantees only a cut whose quotient is at most O(lg n) times optimal, where n is the size of the graph. However, for planar graphs, the minimum-quotient-cut problem appears more tractable, as Rao has developed several efficient approximation algorithms for the planar version of the problem capable of finding a cut whose quotient is at most some constant times optimal. In this paper, we improve Rao's algorithms, both in terms of accuracy and speed. As our first result, we present two pseudopolynomial-time exact algorithms for the planar minimum-quotient-cut problem. As Rao's most accurate approximation algorithm for the problem, also a pseudopolynomial-time algorithm, guarantees only a 1.5-times-optimal cut, our algorithms represent a significant advance.
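    For intuition, the quotient of a cut is straightforward to compute directly, and on toy graphs the minimum can be found by exhaustive search (the paper's contribution is, of course, doing far better than this on planar graphs). A sketch with invented example data:

    ```python
    from itertools import combinations

    def cut_quotient(S, vertices, edges, w, c):
        """c(S, S-bar) / min{w(S), w(S-bar)} for the cut (S, S-bar)."""
        S = set(S)
        cross = sum(c[e] for e in edges if (e[0] in S) != (e[1] in S))
        wS = sum(w[v] for v in S)
        wSbar = sum(w[v] for v in vertices) - wS
        return cross / min(wS, wSbar)

    def min_quotient_cut(vertices, edges, w, c):
        """Exhaustive search over all nontrivial cuts; exponential in |V|."""
        best, best_q = None, float('inf')
        for r in range(1, len(vertices)):
            for S in combinations(vertices, r):
                q = cut_quotient(S, vertices, edges, w, c)
                if q < best_q:
                    best, best_q = set(S), q
        return best, best_q
    ```

    On a unit-weight, unit-cost 4-cycle, the best cut splits the cycle into two adjacent pairs: two crossing edges over weight two, for a quotient of 1.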

  12. A modeling of dynamic storage assignment for order picking in beverage warehousing with Drive-in Rack system

    NASA Astrophysics Data System (ADS)

    Hadi, M. Z.; Djatna, T.; Sugiarto

    2018-04-01

This paper develops a dynamic storage assignment model to solve the storage assignment problem (SAP) for beverage order picking in a drive-in rack warehousing system, determining the appropriate storage location and space for each beverage product dynamically so that the performance of the system can be improved. This study constructs a graph model to represent drive-in rack storage positions, then combines association rule mining, class-based storage policies, and an arrangement rule algorithm to determine an appropriate storage location and arrangement of the products according to dynamic orders from customers. The performance of the proposed model is measured as rule adjacency accuracy, travel distance (for the picking process), and the probability that a product expires, using a Last Come First Serve (LCFS) queue approach. Finally, the proposed model is implemented through computer simulation and its performance is compared against different storage assignment methods. The results indicate that the proposed model outperforms the other storage assignment methods.

  13. Protein assignments without peak lists using higher-order spectra.

    PubMed

    Benison, Gregory; Berkholz, Donald S; Barbar, Elisar

    2007-12-01

    Despite advances in automating the generation and manipulation of peak lists for assigning biomolecules, there are well-known advantages to working directly with spectra: the eye is still superior to computer algorithms when it comes to picking out peak relationships from contour plots in the presence of confounding factors such as noise, overlap, and spectral artifacts. Here, we present constructs called higher-order spectra for identifying, through direct visual examination, many of the same relationships typically identified by searching peak lists, making them another addition to the set of tools (alongside peak picking and automated assignment) that can be used to solve the assignment problem. The technique is useful for searching for correlated peaks in any spectrum type. Application of this technique to novel, complete sequential assignment of two proteins (AhpFn and IC74(84-143)) is demonstrated. The program "burrow-owl" for the generation and display of higher-order spectra is available at (http://sourceforge.net/projects/burrow-owl) or from the authors.

  14. Analysis of low levels of rare earths by radiochemical neutron activation analysis

    USGS Publications Warehouse

    Wandless, G.A.; Morgan, J.W.

    1985-01-01

A procedure for the radiochemical neutron-activation analysis of the rare earth elements (REE) involves the separation of the REE as a group by rapid ion-exchange methods and determination of yields by reactivation or by energy-dispersive X-ray fluorescence (EDXRF) spectrometry. The U.S. Geological Survey (USGS) standard rocks BCR-1 and AGV-1 were analyzed to determine the precision and accuracy of the method. We found that the precision was ±5-10% on the basis of replicate analysis and that, in general, the accuracy was within ±5% of accepted values for most REE. Data for USGS standard rocks BIR-1 (Icelandic basalt) and DNC-1 (North Carolina diabase) are also presented. © 1985 Akadémiai Kiadó.

  15. Assignment of protein sequences to existing domain and family classification systems: Pfam and the PDB.

    PubMed

    Xu, Qifang; Dunbrack, Roland L

    2012-11-01

Automating the assignment of existing domain and protein family classifications to new sets of sequences is an important task. Current methods often miss assignments because remote relationships fail to achieve statistical significance. Some assignments are not as long as the actual domain definitions because local alignment methods often cut alignments short. Long insertions in query sequences often erroneously result in two copies of the domain being assigned to the query. Divergent repeat sequences in proteins are often missed. We have developed a multilevel procedure to produce nearly complete assignments of protein families of an existing classification system to a large set of sequences. We apply this to the task of assigning Pfam domains to sequences and structures in the Protein Data Bank (PDB). We found that HHsearch alignments frequently scored more remotely related Pfams in Pfam clans higher than closely related Pfams, thus leading to erroneous assignment at the Pfam family level. A greedy algorithm allowing for partial overlaps was therefore applied first to sequence/HMM alignments, then to HMM-HMM alignments, and then to structure alignments, taking care to join partial alignments split by large insertions into single-domain assignments. Additional assignment of repeat Pfams with weaker E-values was allowed after stronger assignments of the repeat HMM. Our assignments, presented in a database called PDBfam, cover Pfams for 99.4% of chains >50 residues. The Pfam assignment data in PDBfam are available at http://dunbrack2.fccc.edu/ProtCid/PDBfam, which can be searched by PDB codes and Pfam identifiers. They will be updated regularly.
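    The greedy step, accepting the strongest hits first while tolerating limited overlap with already-accepted regions, can be sketched independently of the HMM machinery. The hit tuples and the overlap tolerance below are invented for illustration:

    ```python
    def greedy_assign(hits, max_overlap=0.3):
        """hits: (evalue, start, end, name) tuples.  Accept hits in order of
        increasing E-value, allowing only limited overlap with accepted ones."""
        accepted = []
        for ev, s, e, name in sorted(hits):
            ok = True
            for _, s2, e2, _ in accepted:
                ov = max(0, min(e, e2) - max(s, s2))
                if ov > max_overlap * min(e - s, e2 - s2):
                    ok = False
                    break
            if ok:
                accepted.append((ev, s, e, name))
        # report accepted domains in sequence order
        return [name for _, _, _, name in sorted(accepted, key=lambda h: h[1])]
    ```

    Allowing partial overlaps (rather than rejecting any intersection outright) is what lets closely packed domains coexist while still suppressing redundant assignments of the same region.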

  16. A Linked-Cell Domain Decomposition Method for Molecular Dynamics Simulation on a Scalable Multiprocessor

    DOE PAGES

    Yang, L. H.; Brooks III, E. D.; Belak, J.

    1992-01-01

A molecular dynamics algorithm for performing large-scale simulations using the Parallel C Preprocessor (PCP) programming paradigm on the BBN TC2000, a massively parallel computer, is discussed. The algorithm uses a linked-cell data structure to obtain the near neighbors of each atom as time evolves. Each processor is assigned a geometric domain containing many subcells, and the storage for that domain is private to the processor. Within this scheme, interdomain (i.e., interprocessor) communication is minimized.
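    The linked-cell idea is independent of the parallel machinery: hash each atom into a cubic subcell, then search for neighbors only in the atom's own cell and the 26 adjacent cells, rather than over all atom pairs. A serial sketch with invented coordinates; note the cutoff must not exceed the cell edge for the 27-cell search to be exhaustive:

    ```python
    def build_cells(positions, box, n):
        """Hash atoms of a cubic periodic box (edge `box`) into n*n*n subcells."""
        cs = box / n
        cells = {}
        for idx, p in enumerate(positions):
            key = tuple(int(coord / cs) % n for coord in p)
            cells.setdefault(key, []).append(idx)
        return cells

    def neighbours(i, positions, cells, box, n, cutoff):
        """Atoms within `cutoff` of atom i, searching only 27 nearby cells."""
        cs = box / n
        cx, cy, cz = (int(coord / cs) % n for coord in positions[i])
        found = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    cell = ((cx + dx) % n, (cy + dy) % n, (cz + dz) % n)
                    for j in cells.get(cell, ()):
                        if j == i:
                            continue
                        # minimum-image distance under periodic boundaries
                        d2 = sum(min(abs(a - b), box - abs(a - b)) ** 2
                                 for a, b in zip(positions[i], positions[j]))
                        if d2 < cutoff ** 2:
                            found.append(j)
        return sorted(set(found))
    ```

    Assigning contiguous blocks of cells to processors, as in the record above, then confines communication to atoms in cells on domain boundaries.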

  17. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    PubMed

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
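    Of the six rules, Min-min is representative and small enough to sketch: repeatedly schedule the task with the smallest achievable completion time on its best machine. The expected-time-to-compute (ETC) matrix below is invented for illustration:

    ```python
    def min_min(etc):
        """etc[t][m]: expected execution time of task t on machine m.
        Returns a task->machine schedule and the resulting makespan."""
        n_tasks, n_machines = len(etc), len(etc[0])
        ready = [0.0] * n_machines          # earliest free time per machine
        unscheduled = set(range(n_tasks))
        schedule = {}
        while unscheduled:
            best = None                      # (completion time, task, machine)
            for t in unscheduled:
                for m in range(n_machines):
                    ct = ready[m] + etc[t][m]
                    if best is None or ct < best[0]:
                        best = (ct, t, m)
            ct, t, m = best
            ready[m] = ct
            schedule[t] = m
            unscheduled.remove(t)
        return schedule, max(ready)
    ```

    Max-min differs only in picking the task whose best completion time is largest; Sufferage picks the task that would suffer most from losing its best machine.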

  18. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    PubMed Central

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505

  19. Multiple Leader Candidate and Competitive Position Allocation for Robust Formation against Member Robot Faults

    PubMed Central

    Kwon, Ji-Wook; Kim, Jin Hyo; Seo, Jiwon

    2015-01-01

    This paper proposes a Multiple Leader Candidate (MLC) structure and a Competitive Position Allocation (CPA) algorithm which can be applicable for various applications including environmental sensing. Unlike previous formation structures such as virtual-leader and actual-leader structures with position allocation including a rigid allocation and an optimization based allocation, the formation employing the proposed MLC structure and CPA algorithm is robust against the fault (or disappearance) of the member robots and reduces the entire cost. In the MLC structure, a leader of the entire system is chosen among leader candidate robots. The CPA algorithm is the decentralized position allocation algorithm that assigns the robots to the vertex of the formation via the competition of the adjacent robots. The numerical simulations and experimental results are included to show the feasibility and the performance of the multiple robot system employing the proposed MLC structure and the CPA algorithm. PMID:25954956

  20. LensFlow: A Convolutional Neural Network in Search of Strong Gravitational Lenses

    NASA Astrophysics Data System (ADS)

    Pourrahmani, Milad; Nayyeri, Hooshang; Cooray, Asantha

    2018-03-01

In this work, we present our machine learning classification algorithm for identifying strong gravitational lenses from wide-area surveys using convolutional neural networks: LENSFLOW. We train and test the algorithm using a wide variety of strong gravitational lens configurations from simulations of lensing events. Images are processed through multiple convolutional layers that extract the feature maps necessary to assign a lens probability to each image. LENSFLOW provides a ranking scheme for all sources that can be used to identify potential gravitational lens candidates by significantly reducing the number of images that have to be visually inspected. We apply our algorithm to the HST/ACS i-band observations of the COSMOS field and present our sample of identified lensing candidates. The developed machine learning algorithm is computationally efficient and complementary to classical lens identification algorithms, and is ideal for discovering such events across wide areas in current and future surveys such as LSST and WFIRST.

  1. Model Based Reconstruction of UT Array Data

    NASA Astrophysics Data System (ADS)

    Calmon, P.; Iakovleva, E.; Fidahoussen, A.; Ribay, G.; Chatillon, S.

    2008-02-01

Beyond the detection of defects, their characterization (identification, positioning, sizing) is a goal of great importance often assigned to the analysis of NDT data. In the case of ultrasonic testing, the first step of such analysis amounts to imaging the detected echoes within the part. This operation is in general achieved by considering times of flight and by applying simplified algorithms which are often valid only in canonical situations. In this communication we present an overview of different imaging techniques studied at CEA LIST, based on the exploitation of direct models, which make it possible to address complex configurations and are available in the CIVA software platform. We discuss in particular ray-model-based algorithms, algorithms derived from classical synthetic focusing, and processing of the full inter-element matrix (MUSIC algorithm).

  2. A hybrid CS-SA intelligent approach to solve uncertain dynamic facility layout problems considering dependency of demands

    NASA Astrophysics Data System (ADS)

    Moslemipour, Ghorbanali

    2018-07-01

This paper proposes a quadratic assignment-based mathematical model to deal with the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent, normally distributed random variables with known probability density function and covariance, changing from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design-of-experiment and benchmark methods. The results show that the hybrid algorithm performs outstandingly in terms of both solution quality and computational time. Moreover, the proposed model can be used in both stochastic and deterministic situations.
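    The simulated-annealing half of the hybrid is easy to sketch on a plain quadratic assignment instance (the clonal-selection component and the stochastic demand model are omitted; the flow and distance matrices below are invented):

    ```python
    import math
    import random

    def qap_cost(perm, flow, dist):
        """Quadratic assignment cost: sum of flow[i][j] * dist[perm[i]][perm[j]],
        where perm[i] is the location assigned to facility i."""
        n = len(perm)
        return sum(flow[i][j] * dist[perm[i]][perm[j]]
                   for i in range(n) for j in range(n))

    def anneal_qap(flow, dist, t0=10.0, cooling=0.95, n_iter=2000, seed=0):
        """Swap-neighbourhood simulated annealing with geometric cooling."""
        rng = random.Random(seed)
        n = len(flow)
        perm = list(range(n))
        cost = qap_cost(perm, flow, dist)
        best, best_cost = perm[:], cost
        t = t0
        for _ in range(n_iter):
            i, j = rng.sample(range(n), 2)       # propose: swap two facilities
            perm[i], perm[j] = perm[j], perm[i]
            new_cost = qap_cost(perm, flow, dist)
            if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
                cost = new_cost                   # accept (possibly uphill) move
                if cost < best_cost:
                    best, best_cost = perm[:], cost
            else:
                perm[i], perm[j] = perm[j], perm[i]   # undo rejected swap
            t *= cooling
        return best, best_cost
    ```

    The uphill-acceptance probability exp((cost - new_cost)/t) shrinks as the temperature cools, which is what lets the search escape local optima early and settle late.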

  3. On the problem of resonance assignments in solid state NMR of uniformly 15N, 13C-labeled proteins

    NASA Astrophysics Data System (ADS)

    Tycko, Robert

    2015-04-01

Determination of accurate resonance assignments from multidimensional chemical shift correlation spectra is one of the major problems in biomolecular solid state NMR, particularly for relatively large proteins with less-than-ideal NMR linewidths. This article investigates the difficulty of resonance assignment, using a computational Monte Carlo/simulated annealing (MCSA) algorithm to search for assignments from artificial three-dimensional spectra that are constructed from the reported isotropic 15N and 13C chemical shifts of two proteins whose structures have been determined by solution NMR methods. The results demonstrate how assignment simulations can provide new insights into factors that affect the assignment process, which can then help guide the design of experimental strategies. Specifically, simulations are performed for the catalytic domain of SrtC (147 residues, primarily β-sheet secondary structure) and the N-terminal domain of MLKL (166 residues, primarily α-helical secondary structure). Assuming unambiguous residue-type assignments and four ideal three-dimensional data sets (NCACX, NCOCX, CONCA, and CANCA), uncertainties in chemical shifts must be less than 0.4 ppm for the assignments for SrtC to be unique, and less than 0.2 ppm for MLKL. Eliminating CANCA data has no significant effect, but additionally eliminating CONCA data leads to more stringent requirements for chemical shift precision. Introducing moderate ambiguities in residue-type assignments does not have a significant effect.

  4. PROBABILISTIC CROSS-IDENTIFICATION IN CROWDED FIELDS AS AN ASSIGNMENT PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budavári, Tamás; Basu, Amitabh, E-mail: budavari@jhu.edu, E-mail: basu.amitabh@jhu.edu

    2016-10-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
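
    As an illustration of the underlying combinatorial problem (not code from the paper), the sketch below solves a tiny two-catalog assignment by exhaustive search over permutations; the score matrix is invented. The Hungarian algorithm finds the same optimum in O(n³) time, which is what makes catalog-scale matching tractable.

```python
from itertools import permutations

def best_assignment(score):
    """Solve a small assignment problem exactly by brute force.

    score[i][j] is a hypothetical log-likelihood of matching
    detection i in catalog A to detection j in catalog B; we seek
    the one-to-one matching with maximal total score. The Hungarian
    algorithm finds the same optimum without enumerating all n!
    permutations.
    """
    n = len(score)
    best_total, best_map = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(score[i][perm[i]] for i in range(n))
        if total > best_total:
            best_total, best_map = total, list(perm)
    return best_total, best_map

# Three detections per catalog; off-diagonal entries model plausible
# confusions in a crowded field (all values invented).
scores = [
    [-1.0, -5.0, -9.0],
    [-4.0, -2.0, -6.0],
    [-8.0, -7.0, -3.0],
]
total, mapping = best_assignment(scores)
```

    For real catalogs one would use a polynomial-time solver (e.g., SciPy's `linear_sum_assignment`) rather than enumerate permutations.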

  5. Shaped-Based Recognition of 3D Objects From 2D Projections

    DTIC Science & Technology

    2006-12-01

    functions for a typical minimization by the graduated assignment algorithm. (The solid line is E, which uses the Euclidean distances to the nearest...) ...the values of E and E' generally decrease during the optimization process, but they can also rise because of changes in the assignment variables M_jk... an (m+1) × (n+1) match matrix M that minimizes the objective function E = Σ_{j=1}^{m} Σ_{k=1}^{n} M_jk ( d(T(l_j), l'_k)^2 − δ^2 ). (7) M defines the

  6. Probabilistic Cross-identification in Crowded Fields as an Assignment Problem

    NASA Astrophysics Data System (ADS)

    Budavári, Tamás; Basu, Amitabh

    2016-10-01

    One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.

  7. MassSieve: Panning MS/MS peptide data for proteins

    PubMed Central

    Slotta, Douglas J.; McFarland, Melinda A.; Markey, Sanford P.

    2010-01-01

    We present MassSieve, a Java-based platform for visualization and parsimony analysis of single and comparative LC-MS/MS database search engine results. The success of mass spectrometric peptide sequence assignment algorithms has led to the need for a tool to merge and evaluate the increasing data set sizes that result from LC-MS/MS-based shotgun proteomic experiments. MassSieve supports reports from multiple search engines with differing search characteristics, which can increase peptide sequence coverage and/or identify conflicting or ambiguous spectral assignments. PMID:20564260

  8. Self-adaptive multi-objective harmony search for optimal design of water distribution networks

    NASA Astrophysics Data System (ADS)

    Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    2017-11-01

    In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.

  9. Peak picking multidimensional NMR spectra with the contour geometry based algorithm CYPICK.

    PubMed

    Würz, Julia M; Güntert, Peter

    2017-01-01

    The automated identification of signals in multidimensional NMR spectra is a challenging task, complicated by signal overlap, noise, and spectral artifacts, for which no universally accepted method is available. Here, we present a new peak picking algorithm, CYPICK, that follows, as far as possible, the manual approach taken by a spectroscopist who analyzes peak patterns in contour plots of the spectrum, but is fully automated. Human visual inspection is replaced by the evaluation of geometric criteria applied to contour lines, such as local extremality, approximate circularity (after appropriate scaling of the spectrum axes), and convexity. The performance of CYPICK was evaluated for a variety of spectra from different proteins by systematic comparison with peak lists obtained by other, manual or automated, peak picking methods, as well as by analyzing the results of automated chemical shift assignment and structure calculation based on input peak lists from CYPICK. The results show that CYPICK yielded peak lists that compare in most cases favorably to those obtained by other automated peak pickers with respect to the criteria of finding a maximal number of real signals, a minimal number of artifact peaks, and maximal correctness of the chemical shift assignments and the three-dimensional structure obtained by fully automated assignment and structure calculation.
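
    A crude stand-in for the local-extremality criterion can be sketched on a toy intensity grid (this is not CYPICK itself, and the grid is invented): a point is picked when it exceeds a noise threshold and strictly dominates all eight of its grid neighbours. CYPICK additionally tests contour circularity and convexity, which this sketch omits.

```python
def pick_peaks(grid, threshold):
    """Return (row, col) positions that exceed `threshold` and are
    strict local maxima over their 8-neighbourhood; a crude stand-in
    for contour-based extremality tests."""
    rows, cols = len(grid), len(grid[0])
    peaks = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            v = grid[r][c]
            if v <= threshold:
                continue  # below the noise floor
            neighbours = [grid[r + dr][c + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0)]
            if all(v > w for w in neighbours):
                peaks.append((r, c))
    return peaks

# A tiny synthetic "spectrum" with two genuine peaks and low-level
# clutter that the threshold rejects.
spectrum = [
    [0, 0, 0, 0, 0],
    [0, 9, 1, 0, 0],
    [0, 1, 0, 2, 0],
    [0, 0, 2, 7, 0],
    [0, 0, 0, 0, 0],
]
peaks = pick_peaks(spectrum, threshold=3)
```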

  10. Relevance feedback for CBIR: a new approach based on probabilistic feature weighting with positive and negative examples.

    PubMed

    Kherfi, Mohammed Lamine; Ziou, Djemel

    2006-04-01

    In content-based image retrieval, understanding the user's needs is a challenging task that requires integrating him in the process of retrieval. Relevance feedback (RF) has proven to be an effective tool for taking the user's judgement into account. In this paper, we present a new RF framework based on a feature selection algorithm that nicely combines the advantages of a probabilistic formulation with those of using both the positive example (PE) and the negative example (NE). Through interaction with the user, our algorithm learns the importance he assigns to image features, and then applies the results obtained to define similarity measures that correspond better to his judgement. The use of the NE allows images undesired by the user to be discarded, thereby improving retrieval accuracy. As for the probabilistic formulation of the problem, it presents a multitude of advantages and opens the door to more modeling possibilities that achieve a good feature selection. It makes it possible to cluster the query data into classes, choose the probability law that best models each class, model missing data, and support queries with multiple PE and/or NE classes. The basic principle of our algorithm is to assign more importance to features with a high likelihood and those which distinguish well between PE classes and NE classes. The proposed algorithm was validated separately and in image retrieval context, and the experiments show that it performs a good feature selection and contributes to improving retrieval effectiveness.

  11. Algorithm of OMA for large-scale orthology inference

    PubMed Central

    Roth, Alexander CJ; Gonnet, Gaston H; Dessimoz, Christophe

    2008-01-01

    Background OMA is a project that aims to identify orthologs within publicly available, complete genomes. With 657 genomes analyzed to date, OMA is one of the largest projects of its kind. Results The algorithm of OMA improves upon standard bidirectional best-hit approach in several respects: it uses evolutionary distances instead of scores, considers distance inference uncertainty, includes many-to-many orthologous relations, and accounts for differential gene losses. Herein, we describe in detail the algorithm for inference of orthology and provide the rationale for parameter selection through multiple tests. Conclusion OMA contains several novel improvement ideas for orthology inference and provides a unique dataset of large-scale orthology assignments. PMID:19055798
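
    The standard bidirectional best-hit baseline that OMA improves upon can be sketched as follows (hypothetical gene names and distances; OMA additionally models distance-inference uncertainty, many-to-many orthologs, and differential gene loss):

```python
def mutual_best_hits(dist):
    """dist[a][b] is the evolutionary distance between gene a of
    genome A and gene b of genome B (smaller = closer). A pair
    (a, b) is an ortholog candidate when each gene is the other's
    nearest gene — the bidirectional best-hit rule."""
    genes_a = list(dist)
    genes_b = list(next(iter(dist.values())))
    best_ab = {a: min(genes_b, key=lambda b: dist[a][b]) for a in genes_a}
    best_ba = {b: min(genes_a, key=lambda a: dist[a][b]) for b in genes_b}
    return sorted((a, b) for a, b in best_ab.items() if best_ba[b] == a)

# Invented pairwise distances between three genes per genome.
dist = {
    "a1": {"b1": 0.1, "b2": 0.9, "b3": 0.8},
    "a2": {"b1": 0.7, "b2": 0.2, "b3": 0.6},
    "a3": {"b1": 0.8, "b2": 0.3, "b3": 0.5},
}
pairs = mutual_best_hits(dist)
```

    Here a3's best hit b2 prefers a2 back, so only (a1, b1) and (a2, b2) survive — exactly the asymmetry that many-to-many extensions like OMA's are designed to handle.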

  12. Identification of structural domains in proteins by a graph heuristic.

    PubMed

    Wernisch, L; Hunting, M; Wodak, S J

    1999-05-15

    A novel automatic procedure for identifying domains from protein atomic coordinates is presented. The procedure, termed STRUDL (STRUctural Domain Limits), does not take into account information on secondary structures and handles any number of domains made up of contiguous or non-contiguous chain segments. The core algorithm uses the Kernighan-Lin graph heuristic to partition the protein into residue sets which display minimum interactions between them. These interactions are deduced from the weighted Voronoi diagram. The generated partitions are accepted or rejected on the basis of optimized criteria, representing basic expected physical properties of structural domains. The graph heuristic approach is shown to be very effective: it closely approximates the exact solution provided by a branch-and-bound algorithm for a number of test proteins. In addition, the overall performance of STRUDL is assessed on a set of 787 representative proteins from the Protein Data Bank by comparison to domain definitions in the CATH protein classification. The domains assigned by STRUDL agree with the CATH assignments in at least 81% of the tested proteins. This result is comparable to that obtained previously using PUU (Holm and Sander, Proteins 1994;9:256-268), the only other available algorithm designed to identify domains with any number of non-contiguous chain segments. A detailed discussion of the structures for which our assignments differ from those in CATH brings to light some clear inconsistencies between the concept of structural domains based on minimizing inter-domain interactions and that of delimiting structural motifs that represent acceptable folding topologies or architectures. Considering both concepts as complementary and combining them in a layered approach might be the way forward.
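
    The spirit of the Kernighan-Lin step — exchange vertices across the cut while the cut size drops — can be sketched on a toy graph (a simplified greedy exchange pass, not STRUDL's weighted-Voronoi formulation or the full gain-bucket KL algorithm):

```python
def cut_size(edges, part):
    """Number of edges crossing the two-way partition; `part` holds
    one side's vertices."""
    return sum((u in part) != (v in part) for u, v in edges)

def improve_partition(vertices, edges, part):
    """Repeatedly perform the single cross-cut exchange that most
    reduces the cut size, stopping when no exchange helps."""
    part = set(part)
    improved = True
    while improved:
        improved = False
        best, best_gain = None, 0
        for u in part:
            for v in set(vertices) - part:
                trial = (part - {u}) | {v}
                gain = cut_size(edges, part) - cut_size(edges, trial)
                if gain > best_gain:
                    best, best_gain = (u, v), gain
        if best:
            u, v = best
            part = (part - {u}) | {v}
            improved = True
    return part

# Two triangles joined by a single edge; start from a mixed split.
verts = [0, 1, 2, 3, 4, 5]
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
side = improve_partition(verts, edges, {0, 1, 3})
```

    Starting from the mixed split (cut size 5), one exchange recovers the natural two-triangle partition with a single crossing edge — the minimal-interaction split that STRUDL's criteria would then test against physical expectations for domains.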

  13. GASP: Gapped Ancestral Sequence Prediction for proteins

    PubMed Central

    Edwards, Richard J; Shields, Denis C

    2004-01-01

    Background The prediction of ancestral protein sequences from multiple sequence alignments is useful for many bioinformatics analyses. Predicting ancestral sequences is not a simple procedure and relies on accurate alignments and phylogenies. Several algorithms exist based on Maximum Parsimony or Maximum Likelihood methods but many current implementations are unable to process residues with gaps, which may represent insertion/deletion (indel) events or sequence fragments. Results Here we present a new algorithm, GASP (Gapped Ancestral Sequence Prediction), for predicting ancestral sequences from phylogenetic trees and the corresponding multiple sequence alignments. Alignments may be of any size and contain gaps. GASP first assigns the positions of gaps in the phylogeny before using a likelihood-based approach centred on amino acid substitution matrices to assign ancestral amino acids. Important outgroup information is used by first working down from the tips of the tree to the root, using descendant data only to assign probabilities, and then working back up from the root to the tips using descendant and outgroup data to make predictions. GASP was tested on a number of simulated datasets based on real phylogenies. Prediction accuracy for ungapped data was similar to three alternative algorithms tested, with GASP performing better in some cases and worse in others. Adding simple insertions and deletions to the simulated data did not have a detrimental effect on GASP accuracy. Conclusions GASP (Gapped Ancestral Sequence Prediction) will predict ancestral sequences from multiple protein alignments of any size. Although not as accurate in all cases as some of the more sophisticated maximum likelihood approaches, it can process a wide range of input phylogenies and will predict ancestral sequences for gapped and ungapped residues alike. PMID:15350199

  14. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    PubMed Central

    2011-01-01

    Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science. PMID:21816107
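
    The additive scoring at the heart of Tariff fits in a few lines; the causes, symptoms, and tariff values below are invented for illustration, not figures from the study:

```python
def tariff_predict(symptoms, tariffs):
    """Sum each cause's tariffs over the observed signs/symptoms and
    predict the highest-scoring cause."""
    scores = {
        cause: sum(table.get(s, 0.0) for s in symptoms)
        for cause, table in tariffs.items()
    }
    return max(scores, key=scores.get), scores

# Invented tariffs for two causes over three signs/symptoms; in the
# method these are derived from a pool of validated verbal autopsies.
tariffs = {
    "cause_A": {"fever": 3.0, "cough": 1.0, "rash": -1.0},
    "cause_B": {"fever": 0.5, "cough": 4.0, "rash": 2.0},
}
cause, scores = tariff_predict({"fever", "cough"}, tariffs)
```

    The transparency claimed in the abstract is visible here: each prediction decomposes into per-symptom contributions that a non-statistician can inspect.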

  15. Evaluation of the Jonker-Volgenant-Castanon (JVC) assignment algorithm for track association

    NASA Astrophysics Data System (ADS)

    Malkoff, Donald B.

    1997-07-01

    The Jonker-Volgenant-Castanon (JVC) assignment algorithm was used by Lockheed Martin Advanced Technology Laboratories (ATL) for track association in the Rotorcraft Pilot's Associate (RPA) program. RPA is Army Aviation's largest science and technology program, involving an integrated hardware/software system approach for a next generation helicopter containing advanced sensor equipment and applying artificial intelligence `associate' technologies. ATL is responsible for the multisensor, multitarget, onboard/offboard track fusion. McDonnell Douglas Helicopter Systems is the prime contractor and Lockheed Martin Federal Systems is responsible for developing much of the cognitive decision aiding and controls-and-displays subsystems. RPA is scheduled for flight testing beginning in 1997. RPA is unique in requiring real-time tracking and fusion for large numbers of highly-maneuverable ground (and air) targets in a target-dense environment. It uses diverse sensors and is concerned with a large area of interest. Target class and identification data are tightly integrated with spatial and kinematic data throughout the processing. Because of platform constraints, processing hardware for track fusion was quite limited. No previous experience using JVC in this type of environment had been reported. ATL performed extensive testing of the JVC, concentrating on error rates and run-times under a variety of conditions. These included wide-ranging numbers and types of targets, sensor uncertainties, target attributes, differing degrees of target maneuverability, and diverse combinations of sensors. Testing utilized Monte Carlo approaches, as well as many kinds of challenging scenarios. Comparisons were made with a nearest-neighbor algorithm and a new, proprietary algorithm (the `Competition' algorithm). The JVC proved to be an excellent choice for the RPA environment, providing a good balance between speed of operation and accuracy of results.

  16. Assignment of protein sequences to existing domain and family classification systems: Pfam and the PDB

    PubMed Central

    Dunbrack, Roland L.

    2012-01-01

    Motivation: Automating the assignment of existing domain and protein family classifications to new sets of sequences is an important task. Current methods often miss assignments because remote relationships fail to achieve statistical significance. Some assignments are not as long as the actual domain definitions because local alignment methods often cut alignments short. Long insertions in query sequences often erroneously result in two copies of the domain assigned to the query. Divergent repeat sequences in proteins are often missed. Results: We have developed a multilevel procedure to produce nearly complete assignments of protein families of an existing classification system to a large set of sequences. We apply this to the task of assigning Pfam domains to sequences and structures in the Protein Data Bank (PDB). We found that HHsearch alignments frequently scored more remotely related Pfams in Pfam clans higher than closely related Pfams, thus leading to erroneous assignment at the Pfam family level. A greedy algorithm allowing for partial overlaps was therefore applied first to sequence/HMM alignments, then HMM–HMM alignments, and then structure alignments, taking care to join partial alignments split by large insertions into single-domain assignments. Additional assignment of repeat Pfams with weaker E-values was allowed after stronger assignments of the repeat HMM. Our database of assignments, presented in a database called PDBfam, contains Pfams for 99.4% of chains >50 residues. Availability: The Pfam assignment data in PDBfam are available at http://dunbrack2.fccc.edu/ProtCid/PDBfam, which can be searched by PDB codes and Pfam identifiers. They will be updated regularly. Contact: Roland.Dunbracks@fccc.edu PMID:22942020

  17. Applications of random forest feature selection for fine-scale genetic population assignment.

    PubMed

    Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G

    2018-02-01

    Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with F_ST ranking for selection of single nucleotide polymorphisms (SNP) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than F_ST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using F_ST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.

  18. Methods Development for Spectral Simplification of Room-Temperature Rotational Spectra

    NASA Astrophysics Data System (ADS)

    Kent, Erin B.; Shipman, Steven

    2014-06-01

    Room-temperature rotational spectra are dense and difficult to assign, and so we have been working to develop methods to accelerate this process. We have tested two different methods with our waveguide-based spectrometer, which operates from 8.7 to 26.5 GHz. The first method, based on previous work by Medvedev and De Lucia, was used to estimate lower state energies of transitions by performing relative intensity measurements at a range of temperatures between -20 and +50 °C. The second method employed hundreds of microwave-microwave double resonance measurements to determine level connectivity between rotational transitions. The relative intensity measurements were not particularly successful in this frequency range (the reasons for this will be discussed), but the information gleaned from the double-resonance measurements can be incorporated into other spectral search algorithms (such as autofit or genetic algorithm approaches) via scoring or penalty functions to help with the spectral assignment process. I.R. Medvedev, F.C. De Lucia, Astrophys. J. 656, 621-628 (2007).

  19. An objective alternative to IUPAC's approach to assign oxidation states.

    PubMed

    Postils, Verònica; Delgado-Alonso, Carlos; Luis, Josep M; Salvador, Pedro

    2018-05-22

    The IUPAC has recently clarified the term Oxidation State (OS), and provided algorithms for its determination based on the ionic approximation (IA) of the bonds supported by atomic electronegativities (EN). Unfortunately, there are a number of exceptions and ambiguities in IUPAC's algorithms when it comes to practical applications. Our comprehensive study reveals the critical role of the chemical environment on establishing the OS, which cannot always be properly predicted using fixed atomic EN values. By identifying what we define here as subsystems of enhanced stability within the molecular system, OS can be safely assigned in many cases without invoking exceptions. New insights about the effect of local aromaticity upon OS are revealed. Moreover, we prove that there are intrinsic limitations of the IA that cannot be overcome. In this context, the effective oxidation state (EOS) analysis arises as a robust and general scheme to derive OS without any external guidance. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Due Date Assignment in a Dynamic Job Shop with the Orthogonal Kernel Least Squares Algorithm

    NASA Astrophysics Data System (ADS)

    Yang, D. H.; Hu, L.; Qian, Y.

    2017-06-01

    Meeting due dates is a key goal in the manufacturing industries. This paper proposes a method for due date assignment (DDA) by using the Orthogonal Kernel Least Squares Algorithm (OKLSA). A simulation model is built to imitate the production process of a highly dynamic job shop. Several factors describing job characteristics and system state are extracted as attributes to predict job flow-times. A number of experiments under conditions of varying dispatching rules and a 90% shop utilization level have been carried out to evaluate the effectiveness of OKLSA applied for DDA. The prediction performance of OKLSA is compared with those of five conventional DDA models and a back-propagation neural network (BPNN). The experimental results indicate that OKLSA is statistically superior to the other DDA models in terms of mean absolute lateness and root mean squares lateness in most cases. The only exception occurs when the shortest processing time rule is used for dispatching jobs, in which case the difference between OKLSA and BPNN is not statistically significant.

  1. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    PubMed

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.

  2. High-precision spatial localization of mouse vocalizations during social interaction.

    PubMed

    Heckman, Jesse J; Proville, Rémi; Heckman, Gert J; Azarfar, Alireza; Celikel, Tansu; Englitz, Bernhard

    2017-06-07

    Mice display a wide repertoire of vocalizations that varies with age, sex, and context. Especially during courtship, mice emit ultrasonic vocalizations (USVs) of high complexity, whose detailed structure is poorly understood. As animals of both sexes vocalize, the study of social vocalizations requires attributing single USVs to individuals. The state-of-the-art in sound localization for USVs allows spatial localization at centimeter resolution; however, animals interact at closer ranges, involving tactile, snout-snout exploration. Hence, improved algorithms are required to reliably assign USVs. We develop multiple solutions to USV localization, and derive an analytical solution for arbitrary vertical microphone positions. The algorithms are compared on wideband acoustic noise and single mouse vocalizations, and applied to social interactions with optically tracked mouse positions. A novel, (frequency) envelope weighted generalised cross-correlation outperforms classical cross-correlation techniques. It achieves a median error of ~1.4 mm for noise and ~4-8.5 mm for vocalizations. Using this algorithm in combination with a level criterion, we can improve the assignment for interacting mice. We report significant differences in mean USV properties between CBA mice of different sexes during social interaction. Hence, the improved USV attribution to individuals lays the basis for a deeper understanding of social vocalizations, in particular sequences of USVs.
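
    The core time-difference-of-arrival step behind microphone-array localization can be sketched with a plain time-domain cross-correlation (illustrative only — the paper's method is an envelope-weighted generalised cross-correlation, and the signals here are synthetic):

```python
def xcorr_lag(a, b, max_lag):
    """Return the integer lag (in samples) that maximizes the
    cross-correlation of b against a. If b[t] = a[t - d], the
    maximum falls at lag = d, i.e. a positive lag means b is a
    delayed copy of a."""
    best_lag, best_val = 0, float("-inf")
    n = len(a)
    for lag in range(-max_lag, max_lag + 1):
        val = sum(a[i] * b[i + lag]
                  for i in range(n)
                  if 0 <= i + lag < n)
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Synthetic transient and a copy delayed by 3 samples, as if the
# same call arrived later at a second microphone.
a = [0, 0, 1.0, 0.5, 0, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1.0, 0.5, 0, 0, 0]
lag = xcorr_lag(a, b, 5)
```

    Dividing the lag by the sampling rate gives the inter-microphone time delay; each microphone pair constrains the source position, and combining several pairs yields the spatial fix.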

  3. Exploring the Pareto frontier using multisexual evolutionary algorithms: an application to a flexible manufacturing problem

    NASA Astrophysics Data System (ADS)

    Bonissone, Stefano R.; Subbu, Raj

    2002-12-01

    In multi-objective optimization (MOO) problems we need to optimize many possibly conflicting objectives. For instance, in manufacturing planning we might want to minimize the cost and production time while maximizing the product's quality. We propose the use of evolutionary algorithms (EAs) to solve these problems. Solutions are represented as individuals in a population and are assigned scores according to a fitness function that determines their relative quality. Strong solutions are selected for reproduction, and pass their genetic material to the next generation. Weak solutions are removed from the population. The fitness function evaluates each solution and returns a related score. In MOO problems, this fitness function is vector-valued, i.e. it returns a value for each objective. Therefore, instead of a global optimum, we try to find the Pareto-optimal or non-dominated frontier. We use multi-sexual EAs with as many genders as optimization criteria. We have created new crossover and gender assignment functions, and experimented with various parameters to determine the best setting (yielding the highest number of non-dominated solutions). These experiments are conducted using a variety of fitness functions, and the algorithms are later evaluated on a flexible manufacturing problem with total cost and time minimization objectives.
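
    Extracting the non-dominated frontier that these vector-valued fitness scores define is straightforward to sketch (the (cost, time) pairs below are hypothetical, both minimized):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (all
    minimized here) and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the non-dominated points (assumes no duplicate points)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (cost, production_time) pairs for candidate plans.
plans = [(4, 9), (5, 5), (6, 6), (9, 2), (7, 8)]
front = pareto_front(plans)
```

    Plans (6, 6) and (7, 8) are both dominated by (5, 5), while the three survivors trade cost against time — exactly the frontier a multi-objective EA tries to populate.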

  4. Optimizing Approximate Weighted Matching on Nvidia Kepler K40

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naim, Md; Manne, Fredrik; Halappanavar, Mahantesh

    Matching is a fundamental graph problem with numerous applications in science and engineering. While algorithms for computing optimal matchings are difficult to parallelize, approximation algorithms generally compute high-quality solutions and are amenable to parallelization. In this paper, we present efficient implementations of the current best algorithm for half-approximate weighted matching, the Suitor algorithm, on the Nvidia Kepler K-40 platform. We develop four variants of the algorithm that exploit hardware features to address key challenges for a GPU implementation. We also experiment with different combinations of work assigned to a warp. Using an exhaustive set of 269 inputs, we demonstrate that the new implementation outperforms the previous best GPU algorithm by 10 to 100× for over 100 instances, and from 100 to 1000× for 15 instances. We also demonstrate up to 20× speedup relative to 2 threads, and up to 5× relative to 16 threads, on an Intel Xeon platform with 16 cores for the same algorithm. The new algorithms and implementations provided in this paper will have a direct impact on several applications that repeatedly use matching as a key compute kernel. Further, the algorithm designs and insights provided in this paper will benefit other researchers implementing graph algorithms on modern GPU architectures.

  5. Fluidity models in ancient Greece and current practices of sex assignment

    PubMed Central

    Chen, Min-Jye; McCann-Crosby, Bonnie; Gunn, Sheila; Georgiadis, Paraskevi; Placencia, Frank; Mann, David; Axelrad, Marni; Karaviti, L.P; McCullough, Laurence B.

    2018-01-01

    Disorders of sexual differentiation such as androgen insensitivity and gonadal dysgenesis can involve an intrinsic fluidity at different levels, from the anatomical and biological to the social (gender) that must be considered in the context of social constraints. Sex assignment models based on George Engel’s biopsychosocial aspects model of biology accept fluidity of gender as a central concept and therefore help establish expectations within the uncertainty of sex assignment and anticipate potential changes. The biology underlying the fluidity inherent to these disorders should be presented to parents at diagnosis, an approach that the gender medicine field should embrace as good practice. Greek mythology provides many accepted archetypes of change, and the ancient Greek appreciation of metamorphosis can be used as context with these patients. Our goal is to inform expertise and optimal approaches, knowing that this fluidity may eventually necessitate sex reassignment. Physicians should provide sex assignment education based on different components of sexual differentiation, prepare parents for future hormone-triggered changes in their children, and establish a sex-assignment algorithm. PMID:28478088

  6. Fluidity models in ancient Greece and current practices of sex assignment.

    PubMed

    Chen, Min-Jye; McCann-Crosby, Bonnie; Gunn, Sheila; Georgiadis, Paraskevi; Placencia, Frank; Mann, David; Axelrad, Marni; Karaviti, L P; McCullough, Laurence B

    2017-06-01

    Disorders of sexual differentiation such as androgen insensitivity and gonadal dysgenesis can involve an intrinsic fluidity at different levels, from the anatomical and biological to the social (gender), that must be considered in the context of social constraints. Sex assignment models based on George Engel's biopsychosocial model accept fluidity of gender as a central concept and therefore help establish expectations within the uncertainty of sex assignment and anticipate potential changes. The biology underlying the fluidity inherent to these disorders should be presented to parents at diagnosis, an approach that the gender medicine field should embrace as good practice. Greek mythology provides many accepted archetypes of change, and the ancient Greek appreciation of metamorphosis can be used as context with these patients. Our goal is to inform expertise and optimal approaches, knowing that this fluidity may eventually necessitate sex reassignment. Physicians should provide sex assignment education based on different components of sexual differentiation, prepare parents for future hormone-triggered changes in their children, and establish a sex-assignment algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Topological numbering of features on a mesh

    NASA Technical Reports Server (NTRS)

    Atallah, Mikhail J.; Hambrusch, Susanne E.; Tewinkel, Lynn E.

    1988-01-01

    Assume an n×n binary image is given containing horizontally convex features; i.e., for each feature, the pixels in each of its rows form an interval on that row. The problem of assigning topological numbers to such features is considered; i.e., assign a number to every feature f so that all features to the left of f have a smaller number assigned to them. This problem arises in solutions to the stereo matching problem. A parallel algorithm that solves the topological numbering problem in O(n) time on an n×n mesh of processors is presented. The key idea of the solution is to create a tree from which the topological numbers can be obtained, even though the tree does not uniquely represent the "to the left of" relationship of the features.

  8. VizieR Online Data Catalog: Proper motions of PM2000 open clusters (Krone-Martins+, 2010)

    NASA Astrophysics Data System (ADS)

    Krone-Martins, A.; Soubiran, C.; Ducourant, C.; Teixeira, R.; Le Campion, J. F.

    2010-04-01

    We present lists of proper motions and kinematic membership probabilities in the region of 49 open clusters or possible open clusters. The stellar proper motions were taken from the Bordeaux PM2000 catalogue. The segregation between cluster and field stars and the assignment of membership probabilities were accomplished by applying a fully automated method based on parametrisations for the probability distribution functions and genetic algorithm optimisation heuristics associated with a derivative-based hill-climbing algorithm for the likelihood optimization. (3 data files).

  9. Probing the space of toric quiver theories

    NASA Astrophysics Data System (ADS)

    Hewlett, Joseph; He, Yang-Hui

    2010-03-01

    We demonstrate a practical and efficient method for generating toric Calabi-Yau quiver theories, applicable to both D3 and M2 brane world-volume physics. A new analytic method is presented for low-order parameters, and an algorithm for the general case is developed which has polynomial complexity in the number of edges in the quiver. Using this algorithm, carefully implemented, we classify the quiver diagrams and assign possible superpotentials for various small values of the numbers of edges and nodes. We examine some preliminary statistics on this space of toric quiver theories.

  10. A decoupled recursive approach for constrained flexible multibody system dynamics

    NASA Technical Reports Server (NTRS)

    Lai, Hao-Jan; Kim, Sung-Soo; Haug, Edward J.; Bae, Dae-Sung

    1989-01-01

    A variational vector-calculus approach is employed to derive a recursive formulation for dynamic analysis of flexible multibody systems. Kinematic relationships for adjacent flexible bodies are derived in a companion paper, using a state vector notation that represents translational and rotational components simultaneously. Cartesian generalized coordinates are assigned for all body and joint reference frames to explicitly formulate the deformation kinematics under the small-deformation assumption, and an efficient recursive algorithm for flexible dynamics is developed. Dynamic analysis of a closed-loop robot is performed to illustrate the efficiency of the algorithm.

  11. Design principles and algorithms for automated air traffic management

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arriving aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.
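    The runway-assignment idea can be illustrated with a toy first-come-first-served scheduler; this is a hedged sketch under assumed inputs (a fixed same-runway separation and a flat list of estimated arrival times), not the paper's method:

```python
def schedule_arrivals(etas, num_runways=2, sep=90):
    """Assign each arrival (in ETA order) to the runway giving the earliest
    feasible landing time, with `sep` seconds between consecutive landings
    on the same runway. etas: [(flight_id, eta_seconds), ...]."""
    free_at = [0] * num_runways           # earliest next slot per runway
    plan = []
    for flight, eta in sorted(etas, key=lambda x: x[1]):
        # runway whose next feasible slot for this flight is earliest
        r = min(range(num_runways), key=lambda i: max(free_at[i], eta))
        slot = max(free_at[r], eta)
        free_at[r] = slot + sep
        plan.append((flight, r, slot, slot - eta))   # last field is delay
    return plan
```

A real arrival scheduler would also trade off delay between airspace regions, as the abstract describes; the sketch only shows the greedy runway-choice step.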

  12. Using ant colony optimization on the quadratic assignment problem to achieve low energy cost in geo-distributed data centers

    NASA Astrophysics Data System (ADS)

    Osei, Richard

    There are many problems associated with operating a data center. Some of these problems include data security, system performance, increasing infrastructure complexity, increasing storage utilization, keeping up with data growth, and increasing energy costs. Energy cost differs by location and at most locations fluctuates over time. The rising cost of energy makes it harder for data centers to function properly and provide a good quality of service. With reduced energy cost, data centers will have longer-lasting servers and equipment, higher availability of resources, better quality of service, a greener environment, and reduced service and software costs for consumers. Some of the ways that data centers have tried to reduce energy costs include dynamically switching servers on and off based on the number of users and some predefined conditions, the use of environmental monitoring sensors, and the use of dynamic voltage and frequency scaling (DVFS), which enables processors to run at different combinations of frequencies and voltages to reduce energy cost. This thesis presents another method by which energy cost at data centers could be reduced: the use of Ant Colony Optimization (ACO) on a Quadratic Assignment Problem (QAP) in assigning user requests to servers in geo-distributed data centers. In this work, front portals, which handle users' requests, act as ants that find cost-effective assignments of user requests to servers in heterogeneous geo-distributed data centers. The simulation results indicate that the ACO for Optimal Server Activation and Task Placement algorithm reduces energy cost for both small and large numbers of user requests in a geo-distributed data center, and its performance improves as the input data grows.
In a simulation with 3 geo-distributed data centers and user resource requests ranging from 25,000 to 25,000,000, the ACO algorithm reduced energy cost by an average of $0.70 per second. The ACO for Optimal Server Activation and Task Placement algorithm has proven to work as an alternative or improvement for reducing energy cost in geo-distributed data centers.
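    As a rough illustration of ACO applied to request-to-server assignment, the following is a generic textbook-style sketch under assumed parameter names and a simple per-request cost matrix; it is not the thesis implementation:

```python
import random

def aco_assign(costs, n_ants=20, n_iters=50, evap=0.5, alpha=1.0, beta=2.0, seed=1):
    """Ant colony optimization for assigning each request i to a data
    center j, minimizing total energy cost. costs[i][j] > 0 is the energy
    cost of serving request i at center j."""
    rng = random.Random(seed)
    n, m = len(costs), len(costs[0])
    tau = [[1.0] * m for _ in range(n)]          # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        solutions = []
        for _ in range(n_ants):
            assign, total = [], 0.0
            for i in range(n):
                # desirability: pheromone^alpha * (1/cost)^beta
                weights = [tau[i][j] ** alpha * (1.0 / costs[i][j]) ** beta
                           for j in range(m)]
                j = rng.choices(range(m), weights=weights)[0]
                assign.append(j)
                total += costs[i][j]
            solutions.append((total, assign))
            if total < best_cost:
                best, best_cost = assign, total
        # evaporate, then reinforce the iteration's best solution
        tau = [[(1 - evap) * t for t in row] for row in tau]
        it_cost, it_assign = min(solutions)
        for i, j in enumerate(it_assign):
            tau[i][j] += 1.0 / it_cost
    return best, best_cost
```

The actual QAP formulation in the thesis also accounts for server activation; the sketch keeps only the pheromone-guided assignment loop.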

  13. Development of multi-class, multi-criteria bicycle traffic assignment models and solution algorithms

    DOT National Transportation Integrated Search

    2015-08-31

    Cycling is gaining popularity both as a mode of travel in urban communities and as an alternative mode to private motorized vehicles due to its wide range of benefits (health, environmental, and economical). However, this change in modal share is not...

  14. A Preserved Context Indexing System for Microcomputers: PERMDEX.

    ERIC Educational Resources Information Center

    Yerkey, A. Neil

    1983-01-01

    Following a discussion of derivative versus assignment indexing, the use of roles, and the concept behind the Preserved Context Indexing System, the features of PERMDEX (a microcomputer program to assist in the creation of a permuted printed index) are described, including indexer input and prompts, the shunting algorithm, and sorting and printing routines. Fourteen…

  15. Techniques for video compression

    NASA Technical Reports Server (NTRS)

    Wu, Chwan-Hwa

    1995-01-01

    In this report, we present our study on a multiprocessor implementation of an MPEG2 encoding algorithm. First, we compare two approaches to implementing video standards, VLSI technology and multiprocessor processing, in terms of design complexity, applications, and cost. Then we evaluate the functional modules of the MPEG2 encoding process in terms of their computation time. Two crucial modules are identified based on this evaluation, and we present our experimental study on the multiprocessor implementation of these two modules. Data partitioning is used for job assignment. Experimental results show that a high speedup ratio and good scalability can be achieved by using this job assignment strategy.

  16. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    NASA Astrophysics Data System (ADS)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real -time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. 
Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis a well known optimization method (the Broyden-Fletcher-Goldfarb-Shanno memoryless quasi -Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm for numerical stability, automatic selection of parameters, and convergence properties.

  17. Feature Based Retention Time Alignment for Improved HDX MS Analysis

    NASA Astrophysics Data System (ADS)

    Venable, John D.; Scuba, William; Brock, Ansgar

    2013-04-01

    An algorithm for retention time alignment of mass shifted hydrogen-deuterium exchange (HDX) data based on an iterative distance minimization procedure is described. The algorithm performs pairwise comparisons in an iterative fashion between a list of features from a reference file and a file to be time aligned to calculate a retention time mapping function. Features are characterized by their charge, retention time and mass of the monoisotopic peak. The algorithm is able to align datasets with mass shifted features, which is a prerequisite for aligning hydrogen-deuterium exchange mass spectrometry datasets. Confidence assignments from the fully automated processing of a commercial HDX software package are shown to benefit significantly from retention time alignment prior to extraction of deuterium incorporation values.
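    The iterative distance-minimization idea can be sketched as follows; the data layout is hypothetical, and pairing here uses charge and retention time only, a simplification of the feature distance described in the abstract:

```python
def align_retention_times(ref, tgt, n_iters=5, tol=60.0):
    """Iteratively pair each target feature with the nearest same-charge
    reference feature, fit a linear retention-time mapping, and repeat
    with the mapped times. Features are (charge, rt, mono_mass) tuples;
    masses may be deuterium-shifted, so they are ignored in pairing."""
    a, b = 1.0, 0.0                       # rt_ref ~ a * rt_tgt + b
    for _ in range(n_iters):
        pairs = []
        for zc, rt, _m in tgt:
            cands = [r for r in ref if r[0] == zc]
            if not cands:
                continue
            best = min(cands, key=lambda r: abs(r[1] - (a * rt + b)))
            if abs(best[1] - (a * rt + b)) <= tol:
                pairs.append((rt, best[1]))
        if len(pairs) < 2:
            break
        # least-squares fit of the mapping from the paired times
        n = len(pairs)
        sx = sum(x for x, _ in pairs); sy = sum(y for _, y in pairs)
        sxx = sum(x * x for x, _ in pairs); sxy = sum(x * y for x, y in pairs)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
    return a, b
```

A production aligner would use a more robust pairing and possibly a nonlinear mapping; the sketch only shows the pair-then-refit iteration.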

  18. PDB_TM: selection and membrane localization of transmembrane proteins in the protein data bank.

    PubMed

    Tusnády, Gábor E; Dosztányi, Zsuzsanna; Simon, István

    2005-01-01

    PDB_TM is a database for transmembrane proteins with known structures. It aims to collect all transmembrane proteins that are deposited in the protein structure database (PDB) and to determine their membrane-spanning regions. These assignments are based on the TMDET algorithm, which uses only structural information to locate the most likely position of the lipid bilayer and to distinguish between transmembrane and globular proteins. This algorithm was applied to all PDB entries and the results were collected in the PDB_TM database. Because it uses the TMDET algorithm, the PDB_TM database can be automatically updated every week, keeping it synchronized with the latest PDB updates. The PDB_TM database is available at http://www.enzim.hu/PDB_TM.

  19. Simulation of empty container logistic management at depot

    NASA Astrophysics Data System (ADS)

    Sze, San-Nah; Sek, Siaw-Ying Doreen; Chiew, Kang-Leng; Tiong, Wei-King

    2017-07-01

    This study focuses on the empty container management problem in a deficit regional area. A deficit area has more export than import activity and therefore a chronic shortage of empty containers. This environment challenges trading companies' decision making in distributing empty containers. A simulation model that fits this environment is developed. In addition, a simple heuristic algorithm that considers both hard and soft constraints is proposed to plan the logistics of empty container supply. The feasible route with the minimum cost is then determined by applying the proposed heuristic algorithm. The heuristic algorithm can be divided into three main phases: data sorting, data assigning, and time-window updating.
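    A toy version of such a three-phase heuristic (sort, assign, update time windows) might look like this; the data layout, cost model, and one-hour loading time are all hypothetical:

```python
def plan_empty_containers(demands, depots):
    """Phase 1: sort demands by deadline. Phase 2: assign each to the
    cheapest depot with enough stock that is ready in time. Phase 3:
    update that depot's stock and availability window.
    demands: [(customer, qty, deadline_hour)]
    depots: {name: {"stock": int, "ready": hour, "cost": {customer: float}}}"""
    plan = []
    for customer, qty, deadline in sorted(demands, key=lambda d: d[2]):
        feasible = [(d["cost"][customer], name, d) for name, d in depots.items()
                    if d["stock"] >= qty and d["ready"] <= deadline]
        if not feasible:
            plan.append((customer, None, 0.0))   # unmet demand
            continue
        unit_cost, name, d = min(feasible)
        d["stock"] -= qty
        d["ready"] += 1                          # loading consumes one hour
        plan.append((customer, name, unit_cost * qty))
    return plan
```

The simulation in the study would evaluate such plans against stochastic import/export flows; the sketch covers only the deterministic assignment pass.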

  20. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task-priority scheduling list was built, and the processor with the minimum cumulative earliest finish time (EFT) was selected for the first task assignment. The task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm is simple and feasible, has strong optimization ability and fast convergence, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
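    The EFT-based assignment step can be illustrated with a plain list scheduler; this is a sketch of the general idea, not the CQPSO algorithm, and the task order is assumed to be precedence-feasible:

```python
def eft_schedule(order, deps, cost):
    """Walk a precedence-feasible task order and place each task on the
    processor giving the earliest finish time (EFT).
    cost[task][p]: run time of task on processor p (heterogeneous).
    deps[task]: set of predecessor tasks."""
    n_proc = len(next(iter(cost.values())))
    proc_free = [0] * n_proc            # when each processor becomes free
    finish, where = {}, {}
    for t in order:
        ready = max((finish[d] for d in deps[t]), default=0)
        # processor minimizing this task's finish time
        p = min(range(n_proc), key=lambda q: max(proc_free[q], ready) + cost[t][q])
        start = max(proc_free[p], ready)
        finish[t] = start + cost[t][p]
        proc_free[p] = finish[t]
        where[t] = p
    return where, max(finish.values())  # assignment and makespan
```

In the paper, the task ordering itself is what CQPSO optimizes; the sketch fixes the order and shows only the greedy EFT placement.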

  1. Automatically Generated Algorithms for the Vertex Coloring Problem

    PubMed Central

    Contreras Bolton, Carlos; Gatica, Gustavo; Parada, Víctor

    2013-01-01

    The vertex coloring problem is a classical problem in combinatorial optimization that consists of assigning a color to each vertex of a graph such that no adjacent vertices share the same color, minimizing the number of colors used. Despite the various practical applications that exist for this problem, its NP-hardness still represents a computational challenge. Some of the best computational results obtained for this problem are consequences of hybridizing the various known heuristics. Automatically searching the space constituted by combinations of these techniques to find the most adequate combination has received less attention. In this paper, we propose exploring the heuristics space for the vertex coloring problem using evolutionary algorithms. We automatically generate three new algorithms by combining elementary heuristics. To evaluate the new algorithms, a computational experiment was performed that allowed comparing them numerically with existing heuristics. The obtained algorithms present an average 29.97% relative error, while four other heuristics selected from the literature present a 59.73% error, considering 29 of the more difficult instances in the DIMACS benchmark. PMID:23516506
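    The elementary heuristics that such generators combine can be as simple as greedy coloring in largest-degree-first order; a minimal sketch (adjacency layout assumed):

```python
def greedy_coloring(adj):
    """Greedy vertex coloring, visiting vertices in decreasing degree
    order. adj: {v: set of neighbours}. Returns {v: color_index}."""
    color = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:      # smallest color not used by a neighbour
            c += 1
        color[v] = c
    return color
```

Greedy coloring guarantees validity but not minimality; the evolutionary approach in the abstract searches over compositions of such building blocks.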

  2. Definition of an Enhanced Map-Matching Algorithm for Urban Environments with Poor GNSS Signal Quality.

    PubMed

    Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio

    2016-02-04

    Vehicle positioning is a key factor for numerous information and assistance applications that are included in vehicles and for which satellite positioning is mainly used. However, this positioning process can result in errors and lead to measurement uncertainties. These errors come mainly from two sources: errors and simplifications of digital maps and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle's location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need and attempt to amend these errors to offer the user a suitable positioning. In this research, an algorithm is developed that attempts to solve the errors in positioning when the Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings where errors and signal reception losses of the GPS receiver are frequent.

  3. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using CPLEX solver, small-size problems are solved for optimality. Two metaheuristics, the restarted simulated annealing algorithm and co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.

  4. Configuring Airspace Sectors with Approximate Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
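    The exact dynamic program over configurations can be sketched as follows; the cost names, the scalar reconfiguration cost, and the data layout are assumptions for illustration:

```python
def configure_sectors(configs, horizon, workload, recon_cost, max_pos):
    """Finite-horizon DP: pick a configuration per time step minimizing
    workload cost plus a reconfiguration cost on every change, subject
    to a per-step limit on control positions.
    configs: {name: number_of_positions}; workload(c, t) -> float;
    max_pos[t]: position limit at step t."""
    INF = float("inf")
    # best[c]: minimum cost of a plan for steps 0..t ending in config c
    best = {c: (workload(c, 0) if configs[c] <= max_pos[0] else INF)
            for c in configs}
    choice = {c: [c] for c in configs}
    for t in range(1, horizon):
        new_best, new_choice = {}, {}
        for c in configs:
            if configs[c] > max_pos[t]:
                new_best[c], new_choice[c] = INF, []
                continue
            # cheapest predecessor, paying recon_cost on a change
            p = min(best, key=lambda q: best[q] + (recon_cost if q != c else 0.0))
            new_best[c] = best[p] + (recon_cost if p != c else 0.0) + workload(c, t)
            new_choice[c] = choice[p] + [c]
        best, choice = new_best, new_choice
    end = min(best, key=best.get)
    return choice[end], best[end]
```

With thousands of configurations this exact recursion becomes too slow, which is exactly why the abstract turns to rollouts for the larger instances.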

  5. Definition of an Enhanced Map-Matching Algorithm for Urban Environments with Poor GNSS Signal Quality

    PubMed Central

    Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio

    2016-01-01

    Vehicle positioning is a key factor for numerous information and assistance applications that are included in vehicles and for which satellite positioning is mainly used. However, this positioning process can result in errors and lead to measurement uncertainties. These errors come mainly from two sources: errors and simplifications of digital maps and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle’s location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need and attempt to amend these errors to offer the user a suitable positioning. In this research, an algorithm is developed that attempts to solve the errors in positioning when the Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings where errors and signal reception losses of the GPS receiver are frequent. PMID:26861320

  6. Combining automated peak tracking in SAR by NMR with structure-based backbone assignment from 15N-NOESY

    PubMed Central

    2012-01-01

    Background Chemical shift mapping is an important technique in NMR-based drug screening for identifying the atoms of a target protein that potentially bind to a drug molecule upon the molecule's introduction in increasing concentrations. The goal is to obtain a mapping of peaks with known residue assignment from the reference spectrum of the unbound protein to peaks with unknown assignment in the target spectrum of the bound protein. Although a series of perturbed spectra help to trace a path from reference peaks to target peaks, a one-to-one mapping generally is not possible, especially for large proteins, due to errors such as noise peaks, missing peaks, peaks that disappear and later reappear, overlapped peaks, and new peaks not associated with any peak in the reference. Due to these difficulties, the mapping is typically done manually or semi-automatically, which is not efficient for high-throughput drug screening. Results We present PeakWalker, a novel peak walking algorithm for fast-exchange systems that models the errors explicitly and performs many-to-one mapping. On the proteins hBclXL, UbcH5B, and histone H1, it achieves an average accuracy of over 95% with less than 1.5 residues predicted per target peak. Given these mappings as input, we present PeakAssigner, a novel combined structure-based backbone resonance and NOE assignment algorithm that uses just 15N-NOESY, while avoiding TOCSY experiments and 13C-labeling, to resolve the ambiguities for a one-to-one mapping. On the three proteins, it achieves an average accuracy of 94% or better. Conclusions Our mathematical programming approach for modeling chemical shift mapping as a graph problem, while modeling the errors directly, is potentially a time- and cost-effective first step for high-throughput drug screening based on limited NMR data and homologous 3D structures. PMID:22536902

  7. Medicaid program choice, inertia and adverse selection.

    PubMed

    Marton, James; Yelowitz, Aaron; Talbert, Jeffery C

    2017-12-01

    In 2012, Kentucky implemented Medicaid managed care statewide, auto-assigned enrollees to three plans, and allowed switching. Using administrative data, we find that the state's auto-assignment algorithm most heavily weighted cost-minimization and plan balancing, and placed little weight on the quality of the enrollee-plan match. Immobility - apparently driven by health plan inertia - contributed to the success of the cost-minimization strategy, as more than half of enrollees auto-assigned to even the lowest quality plans did not opt-out. High-cost enrollees were more likely to opt-out of their auto-assigned plan, creating adverse selection. The plan with arguably the highest quality incurred the largest initial profit margin reduction due to adverse selection prior to risk adjustment, as it attracted a disproportionate share of high-cost enrollees. The presence of such selection, caused by differential degrees of mobility, raises concerns about the long run viability of the Medicaid managed care market without such risk adjustment. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Empirical Equation Based Chirality (n, m) Assignment of Semiconducting Single Wall Carbon Nanotubes from Resonant Raman Scattering Data

    PubMed Central

    Arefin, Md Shamsul

    2012-01-01

    This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n − m) with the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower and higher diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike the existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plots. PMID:28348319

  9. A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Jolai, Fariborz; Assadipour, Ghazal

    Crew scheduling is one of the important problems of the airline industry. The problem aims to assign crew members to a set of flights such that all the flights are covered. In a robust schedule, the assignment should be such that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives are in conflict with each other, a multi-objective meta-heuristic called CellDE, which is a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and the convergence of the achieved Pareto front are appraised. Finally, a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.

  10. Research on schedulers for astronomical observatories

    NASA Astrophysics Data System (ADS)

    Colome, Josep; Colomer, Pau; Guàrdia, Josep; Ribas, Ignasi; Campreciós, Jordi; Coiffard, Thierry; Gesa, Lluis; Martínez, Francesc; Rodler, Florian

    2012-09-01

    The main task of a scheduler applied to astronomical observatories is the time optimization of the facility and the maximization of the scientific return. Scheduling of astronomical observations is an example of the classical task allocation problem known as the job-shop problem (JSP), where N ideal tasks are assigned to M identical resources, while minimizing the total execution time. A problem of higher complexity, called the Flexible-JSP (FJSP), arises when the tasks can be executed by different resources, i.e. by different telescopes, and it focuses on determining a routing policy (i.e., which machine to assign for each operation) in addition to the traditional scheduling decisions (i.e., determining the starting time of each operation). In most cases there is no single best approach to solve the planning system and, therefore, various mathematical algorithms (Genetic Algorithms, Ant Colony Optimization algorithms, Multi-Objective Evolutionary algorithms, etc.) are usually considered to adapt the application to the system configuration and task execution constraints. The scheduling time-cycle is also an important ingredient in determining the best approach. A short-term scheduler, for instance, has to find a good solution with the minimum computation time, providing the system with the capability to adapt the selected task to varying execution constraints (i.e., environment conditions). We present in this contribution an analysis of the task allocation problem and the solutions currently in use at different astronomical facilities. We also describe the schedulers for three different projects (CTA, CARMENES and TJO) where the conclusions of this analysis are applied to develop a suitable scheduling routine.

  11. Minimizing the Workup of Blood Culture Contaminants: Implementation and Evaluation of a Laboratory-Based Algorithm

    PubMed Central

    Richter, S. S.; Beekmann, S. E.; Croco, J. L.; Diekema, D. J.; Koontz, F. P.; Pfaller, M. A.; Doern, G. V.

    2002-01-01

    An algorithm was implemented in the clinical microbiology laboratory to assess the clinical significance of organisms that are often considered contaminants (coagulase-negative staphylococci, aerobic and anaerobic diphtheroids, Micrococcus spp., Bacillus spp., and viridans group streptococci) when isolated from blood cultures. From 25 August 1999 through 30 April 2000, 12,374 blood cultures were submitted to the University of Iowa Clinical Microbiology Laboratory. Potential contaminants were recovered from 495 of 1,040 positive blood cultures. If one or more additional blood cultures were obtained within ±48 h and all were negative, the isolate was considered a contaminant. Antimicrobial susceptibility testing (AST) of these probable contaminants was not performed unless requested. If no additional blood cultures were submitted or there were additional positive blood cultures (within ±48 h), a pathology resident gathered patient clinical information and made a judgment regarding the isolate's significance. To evaluate the accuracy of these algorithm-based assignments, a nurse epidemiologist performed a retrospective chart review in approximately 60% of the cases. Agreement between the findings of the retrospective chart review and the automatic classification of the isolates with additional negative blood cultures as probable contaminants occurred among 85.8% of 225 isolates. In response to physician requests, AST had been performed on 15 of the 32 isolates with additional negative cultures considered significant by retrospective chart review. Agreement of pathology resident assignment with the retrospective chart review occurred among 74.6% of 71 isolates. The laboratory-based algorithm provided an acceptably accurate means for assessing the clinical significance of potential contaminants recovered from blood cultures. PMID:12089259

  12. Eigensolution of finite element problems in a completely connected parallel architecture

    NASA Technical Reports Server (NTRS)

    Akl, F.; Morel, M.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm is successfully implemented on a tightly coupled MIMD parallel processor. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm is investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18, and 3.61 are achieved on two, four, six, and eight processors, respectively.
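
    The reported speed-ups imply the following parallel efficiencies (efficiency = speed-up divided by processor count is the standard measure, not a figure from the paper):

```python
# Speed-ups reported for the 64-element rectangular plate.
speedups = {2: 1.86, 4: 3.13, 6: 3.18, 8: 3.61}
efficiency = {p: s / p for p, s in speedups.items()}
for p in sorted(speedups):
    print(f"{p} processors: speed-up {speedups[p]:.2f}, "
          f"efficiency {efficiency[p]:.0%}")
```

    The declining efficiency reflects the growing share of communication among the completely connected processors as domains multiply.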

  13. Decision-level fusion of SAR and IR sensor information for automatic target detection

    NASA Astrophysics Data System (ADS)

    Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon

    2017-05-01

    We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called target-silhouette, to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean Map Visual Theory is used to combine a pair of SAR and IR images to generate the target-enhanced map. Then basic belief assignment is used to transform this map into a belief map. The detection results of the sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map on the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by the SE-WORKBENCH simulator and compared with conventional algorithms. The proposed fusion scheme achieves a higher detection rate and a lower false alarm rate than the conventional algorithms.

  14. Dynamic bandwidth allocation based on multiservice in software-defined wavelength-division multiplexing time-division multiplexing passive optical network

    NASA Astrophysics Data System (ADS)

    Wang, Fu; Liu, Bo; Zhang, Lijia; Jin, Feifei; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun

    2017-03-01

    The wavelength-division multiplexing passive optical network (WDM-PON) is a potential technology to carry multiple services in an optical access network. However, it has the disadvantages of high cost and an immature technique for users. A software-defined WDM/time-division multiplexing PON was proposed to meet the requirements of high bandwidth, high performance, and multiple services. A reasonable and effective uplink dynamic bandwidth allocation algorithm was proposed. A controller with dynamic wavelength and slot assignment was introduced, and a different optical dynamic bandwidth management strategy was formulated flexibly for services of different priorities according to the network loading. The simulation compares the proposed algorithm with the interleaved polling with adaptive cycle time algorithm. The algorithm shows better performance in average delay, throughput, and bandwidth utilization. The results show that the delay is reduced to 62% and the throughput is improved by 35%.
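
    The priority-aware grant step might look like the toy sketch below. This is an illustrative weighted-share rule of our own, not the paper's algorithm (which also adapts to network loading and assigns wavelengths and slots dynamically).

```python
def allocate(requests, weights, cycle_capacity):
    # Grant each ONU its requested bandwidth, capped at a share of the
    # cycle proportional to its service-priority weight.
    total_w = sum(weights[onu] for onu in requests)
    return {onu: min(req, cycle_capacity * weights[onu] / total_w)
            for onu, req in requests.items()}

grants = allocate({"onu_a": 30, "onu_b": 100},
                  {"onu_a": 1, "onu_b": 3}, cycle_capacity=100)
# onu_b, carrying the higher-priority service, is capped at a larger share
```

    A real DBA cycle would additionally redistribute the capacity left unused by lightly loaded ONUs.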

  15. Treatment Algorithms Based on Tumor Molecular Profiling: The Essence of Precision Medicine Trials.

    PubMed

    Le Tourneau, Christophe; Kamal, Maud; Tsimberidou, Apostolia-Maria; Bedard, Philippe; Pierron, Gaëlle; Callens, Céline; Rouleau, Etienne; Vincent-Salomon, Anne; Servant, Nicolas; Alt, Marie; Rouzier, Roman; Paoletti, Xavier; Delattre, Olivier; Bièche, Ivan

    2016-04-01

    With the advent of high-throughput molecular technologies, several precision medicine (PM) studies are currently ongoing that include molecular screening programs and PM clinical trials. Molecular profiling programs establish the molecular profile of patients' tumors with the aim to guide therapy based on identified molecular alterations. The aim of prospective PM clinical trials is to assess the clinical utility of tumor molecular profiling and to determine whether treatment selection based on molecular alterations produces superior outcomes compared with unselected treatment. These trials use treatment algorithms to assign patients to specific targeted therapies based on tumor molecular alterations. These algorithms should be governed by fixed rules to ensure standardization and reproducibility. Here, we summarize key molecular, biological, and technical criteria that, in our view, should be addressed when establishing treatment algorithms based on tumor molecular profiling for PM trials. © The Author 2015. Published by Oxford University Press.

  16. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP.

    PubMed

    Mohsen, Abdulqader M

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of trapping into local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator provide jumping ability and global convergence, while local search can speed up convergence. Therefore, this paper proposes a hybrid ACO algorithm that integrates the advantages of ACO, SA, the mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator were used to increase the diversity of the ant population from time to time, and the local search was used to exploit the current search area efficiently. Comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality.
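
    A minimal sketch of the hybrid idea follows. It is our own toy illustration, not the paper's implementation: ACO tour construction, a swap mutation accepted under a simulated-annealing rule, and pheromone reinforcement along the best tour. All parameter values are illustrative assumptions.

```python
import math
import random

def tour_len(tour, d):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_tour(d, tau, alpha=1.0, beta=2.0):
    # Probabilistic construction weighted by pheromone and inverse distance.
    tour, unvisited = [0], set(range(1, len(d)))
    while unvisited:
        i = tour[-1]
        weights = [(j, (tau[i][j] ** alpha) * ((1.0 / d[i][j]) ** beta))
                   for j in unvisited]
        r, acc = random.random() * sum(w for _, w in weights), 0.0
        for j, w in weights:
            acc += w
            if acc >= r:
                tour.append(j)
                unvisited.remove(j)
                break
    return tour

def hybrid_aco(d, n_ants=10, iters=50, rho=0.1, temp=1.0, cool=0.95):
    n = len(d)
    tau = [[1.0] * n for _ in range(n)]
    best, best_len = list(range(n)), tour_len(list(range(n)), d)
    for _ in range(iters):
        for _ in range(n_ants):
            t = ant_tour(d, tau)
            # SA-flavoured mutation: swap two cities; accept a worse tour
            # with probability exp(-delta / temp) to escape local minima.
            a, b = random.sample(range(n), 2)
            t2 = t[:]
            t2[a], t2[b] = t2[b], t2[a]
            delta = tour_len(t2, d) - tour_len(t, d)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                t = t2
            if tour_len(t, d) < best_len:
                best, best_len = t, tour_len(t, d)
        # Evaporate pheromone everywhere, then reinforce the best tour.
        tau = [[x * (1 - rho) for x in row] for row in tau]
        for i in range(n):
            tau[best[i]][best[(i + 1) % n]] += 1.0 / best_len
        temp *= cool
    return best, best_len

random.seed(1)
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dmat = [[math.dist(a, b) for b in pts] for a in pts]
tour, length = hybrid_aco(dmat)  # optimal square tour has length 4.0
```

    On this 4-city unit square the optimum (the perimeter) is found almost immediately; `temp` and `cool` control how long worse mutations remain acceptable.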

  17. High-precision isotopic characterization of USGS reference materials by TIMS and MC-ICP-MS

    NASA Astrophysics Data System (ADS)

    Weis, Dominique; Kieffer, Bruno; Maerschalk, Claude; Barling, Jane; de Jong, Jeroen; Williams, Gwen A.; Hanano, Diane; Pretorius, Wilma; Mattielli, Nadine; Scoates, James S.; Goolaerts, Arnaud; Friedman, Richard M.; Mahoney, J. Brian

    2006-08-01

    The Pacific Centre for Isotopic and Geochemical Research (PCIGR) at the University of British Columbia has undertaken a systematic analysis of the isotopic (Sr, Nd, and Pb) compositions and concentrations of a broad compositional range of U.S. Geological Survey (USGS) reference materials, including basalt (BCR-1, 2; BHVO-1, 2), andesite (AGV-1, 2), rhyolite (RGM-1, 2), syenite (STM-1, 2), granodiorite (GSP-2), and granite (G-2, 3). USGS rock reference materials are geochemically well characterized, but there is neither a systematic methodology nor a database for radiogenic isotopic compositions, even for the widely used BCR-1. This investigation represents the first comprehensive, systematic analysis of the isotopic composition and concentration of USGS reference materials and provides an important database for the isotopic community. In addition, the range of equipment at the PCIGR, including a Nu Instruments Plasma MC-ICP-MS, a Thermo Finnigan Triton TIMS, and a Thermo Finnigan Element2 HR-ICP-MS, permits an assessment and comparison of the precision and accuracy of isotopic analyses determined by both the TIMS and MC-ICP-MS methods (e.g., Nd isotopic compositions). For each of the reference materials, 5 to 10 complete replicate analyses provide coherent isotopic results, all with external precision below 30 ppm (2 SD) for Sr and Nd isotopic compositions (27 and 24 ppm for TIMS and MC-ICP-MS, respectively). Our results also show that the first- and second-generation USGS reference materials have homogeneous Sr and Nd isotopic compositions. Nd isotopic compositions by MC-ICP-MS and TIMS agree to within 15 ppm for all reference materials. Interlaboratory MC-ICP-MS comparisons show excellent agreement for Pb isotopic compositions; however, the reproducibility is not as good as for Sr and Nd. 
A careful, sequential leaching experiment of three first- and second-generation reference materials (BCR, BHVO, AGV) indicates that the heterogeneity in Pb isotopic compositions, and concentrations, could be directly related to contamination by the steel (mortar/pestle) used to process the materials. Contamination also accounts for the high concentrations of certain other trace elements (e.g., Li, Mo, Cd, Sn, Sb, W) in various USGS reference materials.

  18. Petrogenesis of early Jurassic basalts in southern Jiangxi Province, South China: Implications for the thermal state of the Mesozoic mantle beneath South China

    NASA Astrophysics Data System (ADS)

    Cen, Tao; Li, Wu-xian; Wang, Xuan-ce; Pang, Chong-jin; Li, Zheng-xiang; Xing, Guang-fu; Zhao, Xi-lin; Tao, Jihua

    2016-07-01

    Early Jurassic bimodal volcanic and intrusive rocks in southern South China show distinct associations and distribution patterns in comparison with those of the Middle Jurassic and Cretaceous rocks in the area. It is widely accepted that these rocks formed in an extensional setting, although the timing of the onset and the tectonic driver for extension are debated. Here, we present systematic LA-ICP-MS zircon U-Pb ages, whole-rock geochemistry and Sr-Nd isotope data for bimodal volcanic rocks from the Changpu Formation in the Changpu-Baimianshi and Dongkeng-Linjiang basins in southern Jiangxi Province, South China. Zircon U-Pb ages indicate that the bimodal volcanic rocks erupted at ca. 190 Ma, contemporaneous with the Fankeng basalts (ca. 183 Ma). A compilation of geochronological results demonstrates that basin-scale basaltic eruptions occurred during the Early Jurassic within a relatively short interval (<5 Ma). These Early Jurassic basalts have tholeiitic compositions and OIB-like trace element distribution patterns. Geochemical analyses show that the basalts were derived from depleted asthenospheric mantle, dominated by a volatile-free peridotite source. The calculated primary melt compositions suggest that the basalts formed at 1.9-2.1 GPa, with melting temperatures of 1378 °C-1405 °C and a mantle potential temperature (TP) ranging from 1383 °C to 1407 °C. This temperature range is somewhat hotter than that of a normal mid-ocean-ridge basalt (MORB) mantle but similar to that of an intra-plate continental mantle setting, such as the Basin and Range Province in western North America. This study provides an important constraint on the Early Jurassic mantle thermal state beneath South China. Reference: Raczek, I., Stoll, B., Hofmann, A.W., Jochum, K.P. 2001. High-precision trace element data for the USGS reference materials BCR-1, BCR-2, BHVO-1, BHVO-2, AGV-1, AGV-2, DTS-1, DTS-2, GSP-1 and GSP-2 by ID-TIMS and MIC-SSMS. Geostandards Newsletter 25(1), 77-86.

  19. Parent-mediated communication-focused treatment in children with autism (PACT): a randomised controlled trial.

    PubMed

    Green, Jonathan; Charman, Tony; McConachie, Helen; Aldred, Catherine; Slonims, Vicky; Howlin, Pat; Le Couteur, Ann; Leadbitter, Kathy; Hudry, Kristelle; Byford, Sarah; Barrett, Barbara; Temple, Kathryn; Macdonald, Wendy; Pickles, Andrew

    2010-06-19

    Results of small trials suggest that early interventions for social communication are effective for the treatment of autism in children. We therefore investigated the efficacy of such an intervention in a larger trial. Children with core autism (aged 2 years to 4 years and 11 months) were randomly assigned in a one-to-one ratio to a parent-mediated communication-focused (Preschool Autism Communication Trial [PACT]) intervention or treatment as usual at three specialist centres in the UK. Those assigned to PACT were also given treatment as usual. Randomisation was by use of minimisation of probability in the marginal distribution of treatment centre, age (42 months), and autism severity (Autism Diagnostic Observation Schedule-Generic [ADOS-G] algorithm score 12-17 or 18-24). Primary outcome was severity of autism symptoms (a total score of social communication algorithm items from ADOS-G, higher score indicating greater severity) at 13 months. Complementary secondary outcomes were measures of parent-child interaction, child language, and adaptive functioning in school. Analysis was by intention to treat. This study is registered as an International Standard Randomised Controlled Trial, number ISRCTN58133827. 152 children were recruited. 77 were assigned to PACT (London [n=26], Manchester [n=26], and Newcastle [n=25]); and 75 to treatment as usual (London [n=26], Manchester [n=26], and Newcastle [n=23]). At the 13-month endpoint, the severity of symptoms was reduced by 3.9 points (SD 4.7) on the ADOS-G algorithm in the group assigned to PACT, and 2.9 (3.9) in the group assigned to treatment as usual, representing a between-group effect size of -0.24 (95% CI -0.59 to 0.11), after adjustment for centre, sex, socioeconomic status, age, and verbal and non-verbal abilities. 
Treatment effect was positive for parental synchronous response to child (1.22, 0.85 to 1.59), child initiations with parent (0.41, 0.08 to 0.74), and for parent-child shared attention (0.33, -0.02 to 0.68). Effects on directly assessed language and adaptive functioning in school were small. On the basis of our findings, we cannot recommend the addition of the PACT intervention to treatment as usual for the reduction of autism symptoms; however, a clear benefit was noted for parent-child dyadic social communication. UK Medical Research Council, and UK Department for Children, Schools and Families. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. Software for peak finding and elemental composition assignment for glycosaminoglycan tandem mass spectra.

    PubMed

    Hogan, John D; Klein, Joshua A; Wu, Jiandong; Chopra, Pradeep; Boons, Geert-Jan; Carvalho, Luis; Lin, Cheng; Zaia, Joseph

    2018-04-03

    Glycosaminoglycans (GAGs) covalently linked to proteoglycans (PGs) are characterized by repeating disaccharide units and variable sulfation patterns along the chain. GAG length and sulfation patterns impact disease etiology, cellular signaling, and structural support for cells. We and others have demonstrated the usefulness of tandem mass spectrometry (MS2) for assigning the structures of GAG saccharides; however, manual interpretation of tandem mass spectra is time-consuming, so computational methods must be employed. In the proteomics domain, the identification of monoisotopic peaks and charge states relies on algorithms that use averagine, or the average building block of the compound class being analyzed. While these methods perform well for protein and peptide spectra, they perform poorly on GAG tandem mass spectra, due to the fact that a single average building block does not characterize the variable sulfation of GAG disaccharide units. In addition, it is necessary to assign product ion isotope patterns in order to interpret the tandem mass spectra of GAG saccharides. To address these problems, we developed GAGfinder, the first tandem mass spectrum peak finding algorithm developed specifically for GAGs. We define peak finding as assigning experimental isotopic peaks directly to a given product ion composition, as opposed to deconvolution or peak picking, which are terms more accurately describing the existing methods previously mentioned. GAGfinder is a targeted, brute force approach to spectrum analysis that utilizes precursor composition information to generate all theoretical fragments. GAGfinder also performs peak isotope composition annotation, which is typically a subsequent step for averagine-based methods. Data are available via ProteomeXchange with identifier PXD009101. Published under license by The American Society for Biochemistry and Molecular Biology, Inc.

  1. Surface acoustic wave coding for orthogonal frequency coded devices

    NASA Technical Reports Server (NTRS)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices, each producing a different OFC signal having the same number of chips and including a chip offset time delay; an algorithm for assigning OFCs to each device; and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.
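
    As a hypothetical illustration of the matrix step, the sketch below builds an n_devices × chips_per_code matrix whose rows are distinct orderings of the chip frequencies and hands one row (one OFC) to each device. The cyclic-shift construction is our own assumption, not the patented scheme.

```python
def assign_ofc_codes(n_devices, chips_per_code):
    # Chip indices stand in for the stepped frequencies of the OFC signal.
    base = list(range(chips_per_code))
    matrix = [base[i % chips_per_code:] + base[:i % chips_per_code]
              for i in range(n_devices)]
    # Each device (tag) receives one row of the matrix as its code.
    return {f"tag{i}": row for i, row in enumerate(matrix)}

codes = assign_ofc_codes(3, 4)
# codes["tag0"] == [0, 1, 2, 3], codes["tag1"] == [1, 2, 3, 0], ...
```

    Any construction that keeps the rows distinct serves the same purpose: low cross-correlation between the codes handed to different tags.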

  2. A Survey on Next-Generation Mixed Line Rate (MLR) and Energy-Driven Wavelength-Division Multiplexed (WDM) Optical Networks

    NASA Astrophysics Data System (ADS)

    Iyer, Sridhar

    2015-06-01

    With the ever-increasing traffic demands, the infrastructure of the current 10 Gbps optical network needs to be enhanced. Further, since the energy crisis is a growing concern, new research topics need to be devised and technological solutions for energy conservation need to be investigated. In an all-optical mixed line rate (MLR) network, the feasibility of a lightpath is determined by physical layer impairment (PLI) accumulation. Contrary to the PLI-aware routing and wavelength assignment (PLIA-RWA) algorithms applicable to a 10 Gbps wavelength-division multiplexed (WDM) network, a new Routing, Wavelength, and Modulation Format Assignment (RWMFA) algorithm is required for the MLR optical network. With the rapid growth of energy consumption in Information and Communication Technologies (ICT), a lot of attention has recently been devoted to "green" ICT solutions. This article presents a review of different RWMFA (PLIA-RWA) algorithms for MLR networks and surveys the most relevant research activities aimed at minimizing energy consumption in optical networks. In essence, this article presents a comprehensive and timely survey of a growing field of research, as it covers most aspects of MLR and energy-driven optical networks. The author thus aims to provide a comprehensive reference for the growing base of researchers who will work on MLR and energy-driven optical networks in the upcoming years. Finally, the article also identifies several open problems for future research.

  3. A New Algorithm Using Cross-Assignment for Label-Free Quantitation with LC/LTQ-FT MS

    PubMed Central

    Andreev, Victor P.; Li, Lingyun; Cao, Lei; Gu, Ye; Rejtar, Tomas; Wu, Shiaw-Lin; Karger, Barry L.

    2008-01-01

    A new algorithm is described for label-free quantitation of relative protein abundances across multiple complex proteomic samples. Q-MEND is based on the denoising and peak picking algorithm, MEND, previously developed in our laboratory. Q-MEND takes advantage of the high resolution and mass accuracy of the hybrid LTQ-FT mass spectrometer (or other high resolution mass spectrometers, such as a Q-TOF MS). The strategy, termed “cross-assignment”, is introduced to increase substantially the number of quantitated proteins. In this approach, all MS/MS identifications for the set of analyzed samples are combined into a master ID list, and then each LC/MS run is searched for the features that can be assigned to a specific identification from that master list. The reliability of quantitation is enhanced by quantitating all peptide charge states separately, along with a scoring procedure to filter out less reliable peptide abundance measurements. The effectiveness of Q-MEND is illustrated in the relative quantitative analysis of E. coli samples spiked with known amounts of non-E. coli protein digests. A mean quantitation accuracy of 7% and a mean precision of 15% are demonstrated. Q-MEND can perform relative quantitation of a set of LC/MS datasets without manual intervention and can generate files compatible with the Guidelines for Proteomic Data Publication. PMID:17441747
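
    The cross-assignment strategy can be sketched roughly as follows. This is our simplified reading of the approach, with made-up data structures (features as (abundance, m/z, charge) tuples) and an assumed m/z tolerance.

```python
def cross_assign(runs_ids, runs_features, tol_ppm=10.0):
    # Pool every run's MS/MS identifications into one master ID list.
    master = {pid for ids in runs_ids.values() for pid in ids}  # (peptide, mz, z)
    table = {}
    # Search each LC/MS run for a feature matching each master identification.
    for run, feats in runs_features.items():
        for pep, mz, z in master:
            hit = next((f for f in feats
                        if f[2] == z and abs(f[1] - mz) / mz * 1e6 <= tol_ppm),
                       None)
            table[(run, pep)] = hit[0] if hit else None  # abundance, or missing
    return table

ids = {"run1": [("PEPTIDE", 500.25, 2)], "run2": []}
feats = {"run1": [(1.0e6, 500.25, 2)], "run2": [(8.0e5, 500.2501, 2)]}
table = cross_assign(ids, feats)
# run2 produced no MS/MS ID of its own, yet its matching feature is quantitated
```

    This is what lets the method quantitate many more proteins than per-run identification alone.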

  4. The CCSDS Lossless Data Compression Algorithm for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Day, John H. (Technical Monitor)

    2001-01-01

    In the late 80's, when the author started working at the Goddard Space Flight Center (GSFC) for the National Aeronautics and Space Administration (NASA), several scientists there were in the process of formulating the next generation of Earth viewing science instruments, the Moderate Resolution Imaging Spectroradiometer (MODIS). The instrument would have over thirty spectral bands and would transmit an enormous amount of data through the communications channel. This was when the author was assigned the task of investigating lossless compression algorithms suitable for space implementation, to compress science data in order to reduce the requirements on bandwidth and storage.

  5. Investigation of cloud/water vapor motion winds from geostationary satellite

    NASA Technical Reports Server (NTRS)

    Nieman, Steve; Velden, Chris; Hayden, Kit; Menzel, Paul

    1993-01-01

    Work has been primarily focused on three tasks: (1) comparison of wind fields produced at MSFC with the CO2 autowind/autoeditor system newly installed in NESDIS operations; (2) evaluation of techniques for improved tracer selection through the use of cloud classification predictors; and (3) development of a height assignment algorithm using water vapor channel radiances. The contract goal is to improve the CIMSS wind system by developing new techniques and assimilating better existing techniques. The work reported here was done in collaboration with the NESDIS scientists working on the operational winds software, so that NASA-funded research can benefit NESDIS operational algorithms.

  6. Algorithmic tools for interpreting vital signs.

    PubMed

    Rathbun, Melina C; Ruth-Sahd, Lisa A

    2009-07-01

    Today's complex world of nursing practice challenges nurse educators to develop teaching methods that promote critical thinking skills and foster quick problem solving in the novice nurse. Traditional pedagogies previously used in the classroom and clinical setting are no longer adequate to prepare nursing students for entry into practice. In addition, educators have expressed frustration when encouraging students to apply newly learned theoretical content to direct the care of assigned patients in the clinical setting. This article presents algorithms as an innovative teaching strategy to guide novice student nurses in the interpretation and decision making related to vital sign assessment in an acute care setting.

  7. Distributed-Memory Computing With the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Cheatwood, F. McNeil

    1997-01-01

    The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA), a Navier-Stokes solver, has been modified for use in a parallel, distributed-memory environment using the Message-Passing Interface (MPI) standard. A standard domain decomposition strategy is used in which the computational domain is divided into subdomains with each subdomain assigned to a processor. Performance is examined on dedicated parallel machines and a network of desktop workstations. The effect of domain decomposition and frequency of boundary updates on performance and convergence is also examined for several realistic configurations and conditions typical of large-scale computational fluid dynamic analysis.

  8. Automated Guided Vehicle For Physically Handicapped People - A Cost Effective Approach

    NASA Astrophysics Data System (ADS)

    Kumar, G. Arun, Dr.; Sivasubramaniam, Mr. A.

    2017-12-01

    An automated guided vehicle (AGV) is a robot that delivers materials from the supply area to the technician automatically, which is faster and more efficient. The robot can be accessed wirelessly: a technician can control it directly to deliver components, rather than working through a human operator (over phone, computer, etc.) who has to program the robot or ask a delivery person to make the delivery. The vehicle is guided automatically along its path. To avoid collisions, a proximity sensor is attached to the system; it detects obstacles and stops the vehicle when one is present. The vehicle can therefore avoid accidents, which is valuable in the present industrial context, and material and equipment handling become automated, simple, and time-saving.
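
    The collision-avoidance rule amounts to a simple threshold check. The sketch below is a toy illustration with an assumed threshold value, not the vehicle's firmware.

```python
def drive_command(obstacle_distance_cm, stop_threshold_cm=30):
    # Stop whenever the proximity sensor reports an obstacle inside the
    # threshold; otherwise keep following the guide path.
    return "STOP" if obstacle_distance_cm < stop_threshold_cm else "FORWARD"

print(drive_command(12))   # obstacle close by
print(drive_command(150))  # path clear
```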

  9. Determining the Number of Clusters in a Data Set Without Graphical Interpretation

    NASA Technical Reports Server (NTRS)

    Aguirre, Nathan S.; Davies, Misty D.

    2011-01-01

    Cluster analysis is a data mining technique that is meant to simplify the process of classifying data points. The basic clustering process requires an input of data points and the number of clusters wanted. The clustering algorithm then picks C starting points for the clusters, which can be either random spatial points or random data points. It then assigns each data point to the nearest C point, where "nearest" usually means Euclidean distance, but some algorithms use another criterion. The next step is determining whether the clustering arrangement thus found is within a certain tolerance. If it falls within this tolerance, the process ends. Otherwise the C points are adjusted based on how many data points are in each cluster, and the steps repeat until the algorithm converges.
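
    The procedure described is essentially the k-means algorithm. Below is a minimal sketch with Euclidean distance, random data-point initialization, and a tolerance on centroid movement (all of these are our own illustrative choices):

```python
import math
import random

def kmeans(points, k, tol=1e-6, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # start from k random data points
    while True:
        # Assign each data point to the nearest C point (Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        # Move each C point to the mean of the points assigned to it.
        new = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
               for i, cl in enumerate(clusters)]
        # Stop once the arrangement is within tolerance; otherwise repeat.
        if max(math.dist(a, b) for a, b in zip(centers, new)) < tol:
            return new, clusters
        centers = new

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, 2)
# centers converge to roughly (0.33, 0.33) and (10.33, 10.33)
```

    The "another criterion" mentioned above would replace `math.dist` here, e.g. with Manhattan distance.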

  10. Community Detection Algorithm Combining Stochastic Block Model and Attribute Data Clustering

    NASA Astrophysics Data System (ADS)

    Kataoka, Shun; Kobayashi, Takuto; Yasuda, Muneki; Tanaka, Kazuyuki

    2016-11-01

    We propose a new algorithm to detect the community structure in a network that utilizes both the network structure and vertex attribute data. Suppose we have the network structure together with the vertex attribute data, that is, information assigned to each vertex associated with the community to which it belongs. The problem addressed in this paper is the detection of the community structure from both the network structure and the vertex attribute data. Our method is based on a Bayesian approach that models the posterior probability distribution of the community labels. The detection of the community structure in our method is achieved by using belief propagation and an EM algorithm. We numerically verified the performance of our method using computer-generated networks and real-world networks.

  11. Network control processor for a TDMA system

    NASA Astrophysics Data System (ADS)

    Suryadevara, Omkarmurthy; Debettencourt, Thomas J.; Shulman, R. B.

    Two unique aspects of designing a network control processor (NCP) to monitor and control a demand-assigned, time-division multiple-access (TDMA) network are described. The first involves the implementation of redundancy by synchronizing the databases of two geographically remote NCPs. The two sets of databases are kept in synchronization by collecting data on both systems, transferring databases, sending incremental updates, and the parallel updating of databases. A periodic audit compares the checksums of the databases to ensure synchronization. The second aspect involves the use of a tracking algorithm to dynamically reallocate TDMA frame space. This algorithm detects and tracks current and long-term load changes in the network. When some portions of the network are overloaded while others have excess capacity, the algorithm automatically calculates and implements a new burst time plan.
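
    The periodic audit can be pictured as below: a toy sketch with invented record structures, using a SHA-256 digest where the real system compares database checksums.

```python
import hashlib
import json

def db_checksum(db):
    # Canonical serialization so both NCPs hash identical content identically.
    blob = json.dumps(db, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

primary = {"carrier1": {"slots": 4}, "carrier2": {"slots": 2}}
backup = {"carrier1": {"slots": 4}, "carrier2": {"slots": 2}}
in_sync = db_checksum(primary) == db_checksum(backup)  # True: databases match
# On a mismatch, a full database transfer (rather than an incremental
# update) would be triggered to restore synchronization.
```

    Comparing short digests keeps the audit cheap even when the databases at the two geographically remote sites are large.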

  12. TREEGRAD: a grading program for eastern hardwoods

    Treesearch

    J.W. Stringer; D.W. Cremeans

    1991-01-01

    Assigning tree grades to eastern hardwoods is often a difficult task for neophyte graders. Recently several "dichotomous keys" have been developed for training graders in the USFS hardwood tree grading system. TREEGRAD uses the Tree Grading Algorithm (TGA) for determining grades from defect location data and is designed to be used as a teaching aid.

  13. Topic Transition in Educational Videos Using Visually Salient Words

    ERIC Educational Resources Information Center

    Gandhi, Ankit; Biswas, Arijit; Deshmukh, Om

    2015-01-01

    In this paper, we propose a visual saliency algorithm for automatically finding the topic transition points in an educational video. First, we propose a method for assigning a saliency score to each word extracted from an educational video. We design several mid-level features that are indicative of visual saliency. The optimal feature combination…

  14. Information needs for increasing log transport efficiency

    Treesearch

    Timothy P. McDonald; Steven E. Taylor; Robert B. Rummer; Jorge Valenzuela

    2001-01-01

    Three methods of dispatching trucks to loggers were tested using a log transport simulation model: random allocation, fixed assignment of trucks to loggers, and dispatch based on knowledge of the current status of trucks and loggers within the system. This 'informed' dispatch algorithm attempted to minimize the difference in time between when a logger would...

  15. A New Algorithm to Create Balanced Teams Promoting More Diversity

    ERIC Educational Resources Information Center

    Dias, Teresa Galvão; Borges, José

    2017-01-01

    The problem of assigning students to teams can be described as maximising their profiles diversity within teams while minimising the differences among teams. This problem is commonly known as the maximally diverse grouping problem and it is usually formulated as maximising the sum of the pairwise distances among students within teams. We propose…

  16. An Algorithm for Converting Contours to Elevation Grids.

    ERIC Educational Resources Information Center

    Reid-Green, Keith S.

    Some of the test questions for the National Council of Architectural Registration Boards deal with the site, including drainage, regrading, and the like. Some questions are most easily scored by examining contours, but others, such as water flow questions, are best scored from a grid in which each element is assigned its average elevation. This…

  17. About approximation of integer factorization problem by the combination fixed-point iteration method and Bayesian rounding for quantum cryptography

    NASA Astrophysics Data System (ADS)

    Ogorodnikov, Yuri; Khachay, Michael; Pljonkin, Anton

    2018-04-01

    We describe the possibility of employing a special case of the 3-SAT problem, stemming from the well-known integer factorization problem, for quantum cryptography. It is known that for every instance of our 3-SAT setting the given 3-CNF is satisfiable by a unique truth assignment, and the goal is to find this assignment. Since the complexity status of the factorization problem is still undefined, the development of approximation algorithms and heuristics attracts the interest of numerous researchers. One promising approach to constructing approximation techniques is based on a real-valued relaxation of the given 3-CNF, followed by minimization of an appropriate differentiable loss function and subsequent rounding of the fractional minimizer obtained. Algorithms developed this way differ in the rounding scheme applied at their final stage. We propose a new rounding scheme based on Bayesian learning. The article shows that the proposed method can be used to assess security in quantum key distribution systems: in quantum key distribution, Shannon's rules are applied, and the factorization problem is paramount when decrypting secret keys.
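
    The relax-then-round pipeline ends with a rounding step. The sketch below shows plain threshold rounding and a satisfiability check on the rounded assignment; the paper's contribution replaces the threshold with a Bayesian-learning rule, which we do not attempt to reproduce here.

```python
def threshold_round(x, thr=0.5):
    # Round a fractional assignment in [0, 1]^n to a Boolean one.
    return [1 if xi >= thr else 0 for xi in x]

def satisfies(clauses, assignment):
    # A clause is a tuple of signed 1-indexed literals: (1, -3) means x1 OR NOT x3.
    return all(any((lit > 0) == bool(assignment[abs(lit) - 1]) for lit in clause)
               for clause in clauses)

frac = [0.91, 0.12, 0.77]        # fractional minimizer of the relaxed loss (made up)
rounded = threshold_round(frac)  # -> [1, 0, 1]
ok = satisfies([(1, -2), (3,)], rounded)
```

    Since each instance has a unique satisfying assignment, a smarter rounding rule directly raises the chance of landing on it.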

  18. Congestion patterns of electric vehicles with limited battery capacity.

    PubMed

    Jing, Wentao; Ramezani, Mohsen; An, Kun; Kim, Inhi

    2018-01-01

    The path choice behavior of battery electric vehicle (BEV) drivers is influenced by the lack of public charging stations, limited battery capacity, range anxiety and long battery charging time. This paper investigates the congestion/flow pattern captured by the stochastic user equilibrium (SUE) traffic assignment problem in transportation networks with BEVs, where the BEV paths are restricted by their battery capacities. The BEV energy consumption is assumed to be a linear function of path length and path travel time, which addresses both the path distance limit problem and the road congestion effect. A mathematical programming model is proposed for the path-based SUE traffic assignment where the path cost is the sum of the corresponding link costs and a path-specific out-of-energy penalty. We then apply the convergent Lagrangian dual method to transform the original problem into a concave maximization problem and develop a customized gradient projection algorithm to solve it. A column generation procedure is incorporated to generate the path set. Finally, two numerical examples are presented to demonstrate the applicability of the proposed model and the solution algorithm.
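A minimal sketch of the path cost described above, with invented coefficients for the linear energy model and an arbitrary penalty value (not the paper's calibration):

```python
def path_cost(link_times, link_lengths, alpha=0.1, beta=0.05,
              battery_kwh=40.0, penalty=1e3):
    """Path cost = sum of link travel times plus a path-specific
    out-of-energy penalty. Energy use is linear in path length and
    travel time; alpha, beta and the penalty are illustrative only."""
    time = sum(link_times)
    length = sum(link_lengths)
    energy = alpha * length + beta * time
    return time + (penalty if energy > battery_kwh else 0.0)

feasible = path_cost([10, 15], [50, 60])      # energy = 12.25 <= 40
infeasible = path_cost([30, 40], [200, 250])  # energy = 48.5 > 40
print(feasible, infeasible)  # 25.0 1070.0
```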

  19. Localizing text in scene images by boundary clustering, stroke segmentation, and string fragment classification.

    PubMed

    Yi, Chucai; Tian, Yingli

    2012-09-01

    In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surface, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text stroke with color assignment and filter out background interferences. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed framework of text localization is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.

  20. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless channel; the method rests on the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for an error-free channel.
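The bit-assignment idea can be illustrated with a greedy marginal-analysis allocation, a common simplification of the steepest-descent allocation described above; the distortion model var·2^(−2b) and the coefficient variances are assumptions for illustration:

```python
def allocate_bits(variances, total_bits):
    """Greedy marginal-analysis bit allocation: repeatedly give one bit
    to the coefficient whose distortion (var * 2**(-2b)) drops the most.
    A simple stand-in for the steepest-descent allocation in the paper."""
    bits = [0] * len(variances)
    for _ in range(total_bits):
        # Distortion reduction from adding one bit to coefficient i.
        gains = [v * (2.0 ** (-2 * b) - 2.0 ** (-2 * (b + 1)))
                 for v, b in zip(variances, bits)]
        bits[gains.index(max(gains))] += 1
    return bits

# Hypothetical DCT coefficient variances (DC term largest).
print(allocate_bits([16.0, 4.0, 1.0, 0.25], 8))  # [4, 3, 1, 0]
```

High-variance coefficients receive more bits, matching the intuition that they dominate the reconstruction error.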

  1. Sensor selection cost optimisation for tracking structurally cyclic systems: a P-order solution

    NASA Astrophysics Data System (ADS)

    Doostmohammadian, M.; Zarrabi, H.; Rabiee, H. R.

    2017-08-01

    Measurements and sensing implementations impose costs in sensor networks. Sensor selection cost optimisation is the problem of minimising the sensing cost of monitoring a physical (or cyber-physical) system. Consider a given set of sensors tracking the states of a dynamical system for estimation purposes, where each sensor incurs a different cost to measure each (realisable) state. The idea is to assign sensors to measure states such that the global cost is minimised. The number and selection of sensor measurements need to ensure observability, so that the dynamic state of the system can be tracked with bounded estimation error. The main question we address is how to select the state measurements to minimise the cost while satisfying the observability conditions. Relaxing the observability condition for structurally cyclic systems, the main contribution is a graph-theoretic approach that solves the problem in polynomial time. Note that polynomial-time algorithms are suitable for large-scale systems, as their running time is upper-bounded by a polynomial in the input size. We frame the problem as a linear sum assignment problem, which is solvable in polynomial time.
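As a sketch of the linear sum assignment formulation, the toy solver below enumerates permutations to make the objective explicit (a real implementation would use the Hungarian method, which runs in O(n^3)); the sensor-to-state cost matrix is hypothetical:

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Brute-force linear sum assignment for a small cost matrix.
    Shown only to make the objective explicit; the Hungarian method
    solves the same problem in polynomial time."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

# cost[i][j] = hypothetical cost for sensor i to measure state j.
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
print(min_cost_assignment(cost))  # ((1, 0, 2), 5)
```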

  2. Congestion patterns of electric vehicles with limited battery capacity

    PubMed Central

    2018-01-01

    The path choice behavior of battery electric vehicle (BEV) drivers is influenced by the lack of public charging stations, limited battery capacity, range anxiety and long battery charging time. This paper investigates the congestion/flow pattern captured by the stochastic user equilibrium (SUE) traffic assignment problem in transportation networks with BEVs, where the BEV paths are restricted by their battery capacities. The BEV energy consumption is assumed to be a linear function of path length and path travel time, which addresses both the path distance limit problem and the road congestion effect. A mathematical programming model is proposed for the path-based SUE traffic assignment where the path cost is the sum of the corresponding link costs and a path-specific out-of-energy penalty. We then apply the convergent Lagrangian dual method to transform the original problem into a concave maximization problem and develop a customized gradient projection algorithm to solve it. A column generation procedure is incorporated to generate the path set. Finally, two numerical examples are presented to demonstrate the applicability of the proposed model and the solution algorithm. PMID:29543875

  3. Some insights on hard quadratic assignment problem instances

    NASA Astrophysics Data System (ADS)

    Hussin, Mohamed Saifullah

    2017-11-01

    Since the formal introduction of metaheuristics, a huge number of Quadratic Assignment Problem (QAP) instances have been introduced. Those instances, however, are loosely structured, which makes it difficult to perform any systematic analysis. The QAPLIB, for example, is a library that contains a large number of QAP benchmark instances of different sizes and structures, but with very limited availability of each instance type. This prevents researchers from performing organized studies on those instances, such as parameter tuning and testing. In this paper, we discuss several hard instances that have been introduced over the years, and the algorithms that have been used for solving them.
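For reference, the QAP objective that such instances are built around can be evaluated in a few lines; the flow and distance matrices below are made up for illustration:

```python
def qap_cost(perm, flow, dist):
    """QAP objective: total flow-weighted distance when facility i is
    placed at location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

# Made-up flow and distance matrices for three facilities/locations.
flow = [[0, 2, 0],
        [2, 0, 1],
        [0, 1, 0]]
dist = [[0, 1, 2],
        [1, 0, 1],
        [2, 1, 0]]

print(qap_cost((0, 1, 2), flow, dist))  # 6
print(qap_cost((1, 0, 2), flow, dist))  # 8
```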

  4. Fault Identification by Unsupervised Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Mannu, U.

    2012-12-01

    Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover such as cities, deserts, and vegetation, and it misses the changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture; many disastrous events have been attributed to these blind faults. Faults and earthquakes are closely related, as earthquakes occur on faults and faults grow by the accumulation of coseismic rupture. For better seismic risk evaluation it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution using unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between the faults reconstructed by deterministic assignment in K-means and by probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering, and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions: while Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and with previously mapped faults.
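A toy version of the deterministic (K-means-style) assignment step might look like the following; the synthetic events and deterministic seeding rule are illustrative only, not the authors' implementation:

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic seeding (first k points as
    centroids); a toy stand-in for clustering hypocentres before
    fitting fault planes."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each event goes to its nearest centroid.
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(x) / len(members) for x in zip(*members)]
    return labels, centroids

# Two synthetic event clouds (x, y, depth), loosely mimicking two faults.
events = [(0, 0, 5), (0.2, 0.1, 5.1), (0.1, -0.1, 4.9),
          (5, 5, 10), (5.1, 4.9, 10.2), (4.9, 5.2, 9.8)]
labels, _ = kmeans(events, 2)
print(labels)  # first three events share one label, last three the other
```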

  5. Noninvasive scoring algorithm to identify significant liver fibrosis among treatment-naive chronic hepatitis C patients.

    PubMed

    Koller, Tomas; Kollerova, Jana; Huorka, Martin; Meciarova, Iveta; Payer, Juraj

    2014-10-01

    Staging for liver fibrosis is recommended in the management of hepatitis C as an argument for treatment priority. Our aim was to construct a noninvasive algorithm to predict the significant liver fibrosis (SLF) using common biochemical markers and compare it with some existing models. The study group included 104 consecutive cases; SLF was defined as Ishak fibrosis stage greater than 2. The patient population was assigned randomly to the training and the validation groups of 52 cases each. The training group was used to construct the algorithm from parameters with the best predictive value. Each parameter was assigned a score that was added to the noninvasive fibrosis score (NFS). The accuracy of NFS in predicting SLF was tested in the validation group and compared with APRI, FIB4, and Forns models. Our algorithm used age, alkaline phosphatase, ferritin, APRI, α2 macroglobulin, and insulin and the NFS ranged from -4 to 5. The probability of SLF was 2.6 versus 77.1% in NFS<0 and NFS>0, leaving NFS=0 in a gray zone (29.8% of cases). The area under the receiver operating curve was 0.895 and 0.886, with a specificity, sensitivity, and diagnostic accuracy of 85.1, 92.3, and 87.5% versus 77.8, 100, and 87.9% for the training and the validation group. In comparison, the area under the receiver operating curve for APRI=0.810, FIB4=0.781, and Forns=0.703 with a diagnostic accuracy of 83.9, 72.3, and 62% and gray zone cases in 46.15, 37.5, and 44.2%. We devised an algorithm to calculate the NFS to predict SLF with good accuracy, fewer cases in the gray zone, and a straightforward clinical interpretation. NFS could be used for the initial evaluation of the treatment priority.
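An additive score of this kind reduces to summing per-parameter contributions; the sketch below uses invented cutoffs and scores, not the published NFS coefficients (the real model combines age, alkaline phosphatase, ferritin, APRI, α2 macroglobulin and insulin into a score from -4 to 5):

```python
def nfs(parameters, thresholds):
    """Additive scoring in the spirit of the NFS: each parameter
    crossing its cutoff contributes its score. Cutoffs and scores
    here are made up for illustration."""
    return sum(score for name, (cutoff, score) in thresholds.items()
               if parameters.get(name, 0) > cutoff)

thresholds = {"age": (45, 1), "ferritin": (300, 1), "apri": (1.0, 2)}
patient = {"age": 52, "ferritin": 150, "apri": 1.4}
score = nfs(patient, thresholds)  # 1 (age) + 2 (apri) = 3
print("significant fibrosis likely" if score > 0 else "unlikely")
```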

  6. Random forests, a novel approach for discrimination of fish populations using parasites as biological tags.

    PubMed

    Perdiguero-Alonso, Diana; Montero, Francisco E; Kostadinova, Aneta; Raga, Juan Antonio; Barrett, John

    2008-10-01

    Due to the complexity of host-parasite relationships, discrimination between fish populations using parasites as biological tags is difficult. This study introduces, to our knowledge for the first time, random forests (RF) as a new modelling technique in the application of parasite community data as biological markers for population assignment of fish. This novel approach is applied to a dataset with a complex structure comprising 763 parasite infracommunities in population samples of Atlantic cod, Gadus morhua, from the spawning/feeding areas in five regions in the North East Atlantic (Baltic, Celtic, Irish and North seas and Icelandic waters). The learning behaviour of RF is evaluated in comparison with two other algorithms applied to class assignment problems, the linear discriminant function analysis (LDA) and artificial neural networks (ANN). The three algorithms are used to develop predictive models applying three cross-validation procedures in a series of experiments (252 models in total). The comparative approach to RF, LDA and ANN algorithms applied to the same datasets demonstrates the competitive potential of RF for developing predictive models since RF exhibited better accuracy of prediction and outperformed LDA and ANN in the assignment of fish to their regions of sampling using parasite community data. The comparative analyses and the validation experiment with a 'blind' sample confirmed that RF models performed more effectively with a large and diverse training set and a large number of variables. The discrimination results obtained for a migratory fish species with largely overlapping parasite communities reflects the high potential of RF for developing predictive models using data that are both complex and noisy, and indicates that it is a promising tool for parasite tag studies. 
Our results suggest that parasite community data can be used successfully to discriminate individual cod from the five different regions of the North East Atlantic studied using RF.

  7. CLUSTERnGO: a user-defined modelling platform for two-stage clustering of time-series data.

    PubMed

    Fidaner, Işık Barış; Cankorur-Cetinkaya, Ayca; Dikicioglu, Duygu; Kirdar, Betul; Cemgil, Ali Taylan; Oliver, Stephen G

    2016-02-01

    Simple bioinformatic tools are frequently used to analyse time-series datasets regardless of their ability to deal with transient phenomena, limiting the meaningful information that may be extracted from them. This situation requires the development and exploitation of tailor-made, easy-to-use and flexible tools designed specifically for the analysis of time-series datasets. We present a novel statistical application called CLUSTERnGO, which uses a model-based clustering algorithm that fulfils this need. This algorithm involves two components of operation. Component 1 constructs a Bayesian non-parametric model (Infinite Mixture of Piecewise Linear Sequences) and Component 2 applies a novel clustering methodology (Two-Stage Clustering). The software can also assign biological meaning to the identified clusters using an appropriate ontology. It applies multiple hypothesis testing to report the significance of these enrichments. The algorithm has a four-phase pipeline. The application can be executed using either command-line tools or a user-friendly Graphical User Interface. The latter has been developed to address the needs of both specialist and non-specialist users. We use three diverse test cases to demonstrate the flexibility of the proposed strategy. In all cases, CLUSTERnGO not only outperformed existing algorithms in assigning unique GO term enrichments to the identified clusters, but also revealed novel insights regarding the biological systems examined, which were not uncovered in the original publications. The C++ and Qt source code, the GUI applications for Windows, OS X and Linux operating systems, and the user manual are freely available for download under the GNU GPL v3 license at http://www.cmpe.boun.edu.tr/content/CnG. Contact: sgo24@cam.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  8. QPO observations related to neutron star equations of state

    NASA Astrophysics Data System (ADS)

    Stuchlik, Zdenek; Urbanec, Martin; Török, Gabriel; Bakala, Pavel; Cermak, Petr

    We apply a genetic algorithm method for selecting neutron star models, relating them to resonant models of the twin-peak quasiperiodic oscillations observed in X-ray neutron star binary systems. It was suggested that pairs of kilohertz peaks in the X-ray Fourier power density spectra of some neutron stars reflect a non-linear resonance between two modes of accretion disk oscillations. We investigate this concept for a specific neutron star source. Each neutron star model is characterized by the equation of state (EOS), rotation frequency Ω and central energy density ρc. These determine the spacetime structure governing geodesic motion and the position-dependent radial and vertical epicyclic oscillations related to the stable circular geodesics. Particular kinds of resonances (KR) between the oscillations with epicyclic frequencies, or frequencies derived from them, can take place at special positions assigned unambiguously to the spacetime structure. The pairs of resonant eigenfrequencies relevant to those positions are therefore fully given by KR, ρc, Ω, EOS, and can be compared to the observationally determined pairs of eigenfrequencies in order to eliminate the unsatisfactory sets (KR, ρc, Ω, EOS). For the elimination we use an advanced genetic algorithm. The genetic algorithm derives from natural selection, in which the individuals best adapted to the given conditions have the greatest chance to survive. The chosen genetic algorithm with sexual reproduction contains one chromosome with restricted lifetime, uniform crossing, and genes of type 3/3/5. To encode the physical description (KR, ρc, Ω, EOS) into the chromosome we used the Gray code. As a fitness function we use the correspondence between the observed and calculated pairs of eigenfrequencies.

  9. Neutron star equation of state and QPO observations

    NASA Astrophysics Data System (ADS)

    Urbanec, Martin; Stuchlík, Zdeněk; Török, Gabriel; Bakala, Pavel; Čermák, Petr

    2007-12-01

    Assuming a resonant origin of the twin-peak quasiperiodic oscillations observed in X-ray neutron star binary systems, we apply a genetic algorithm method for selecting neutron star models. It was suggested that pairs of kilohertz peaks in the X-ray Fourier power density spectra of some neutron stars reflect a non-linear resonance between two modes of accretion disk oscillations. We investigate this concept for a specific neutron star source. Each neutron star model is characterized by the equation of state (EOS), rotation frequency Ω and central energy density rho_{c}. These determine the spacetime structure governing geodesic motion and the position-dependent radial and vertical epicyclic oscillations related to the stable circular geodesics. Particular kinds of resonances (KR) between the oscillations with epicyclic frequencies, or frequencies derived from them, can take place at special positions assigned unambiguously to the spacetime structure. The pairs of resonant eigenfrequencies relevant to those positions are therefore fully given by KR, rho_{c}, Ω, EOS, and can be compared to the observationally determined pairs of eigenfrequencies in order to eliminate the unsatisfactory sets (KR, rho_{c}, Ω, EOS). For the elimination we use an advanced genetic algorithm. The genetic algorithm derives from natural selection, in which the individuals best adapted to the given conditions have the greatest chance to survive. The chosen genetic algorithm with sexual reproduction contains one chromosome with restricted lifetime, uniform crossing, and genes of type 3/3/5. To encode the physical description (KR, rho_{c}, Ω, EOS) into the chromosome we use the Gray code. As a fitness function we use the correspondence between the observed and calculated pairs of eigenfrequencies.
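The Gray code mentioned in both abstracts has the property that consecutive integers differ in a single bit, which keeps small mutations small in the chromosome encoding. A standard binary-reflected encode/decode pair:

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code: consecutive integers differ in
    exactly one bit."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code by cumulative XOR of right shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print([gray_encode(i) for i in range(8)])  # [0, 1, 3, 2, 6, 7, 5, 4]
assert all(gray_decode(gray_encode(i)) == i for i in range(256))
```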

  10. Technologies for network-centric C4ISR

    NASA Astrophysics Data System (ADS)

    Dunkelberger, Kirk A.

    2003-07-01

    Three technologies form the heart of any network-centric command, control, communication, intelligence, surveillance, and reconnaissance (C4ISR) system: distributed processing, reconfigurable networking, and distributed resource management. Distributed processing, enabled by automated federation, mobile code, intelligent process allocation, dynamic multiprocessing groups, checkpointing, and other capabilities, creates a virtual peer-to-peer computing network across the force. Reconfigurable networking, consisting of content-based information exchange, dynamic ad-hoc routing, information operations (perception management), and other component technologies, forms the interconnect fabric for fault-tolerant interprocessor and node communication. Distributed resource management, which provides the means for distributed cooperative sensor management, foe sensor utilization, opportunistic collection, symbiotic inductive/deductive reasoning, and other applications, provides the canonical algorithms for network-centric enterprises and warfare. This paper introduces these three core technologies and briefly discusses a sampling of their component technologies and their individual contributions to network-centric enterprises and warfare. Based on the implied requirements, two new algorithms are defined and characterized which provide critical building blocks for network centricity: distributed asynchronous auctioning and predictive dynamic source routing. The first provides a reliable, efficient, and effective approach to near-optimal assignment problems; the algorithm has been demonstrated to be a viable implementation for ad-hoc command and control, object/sensor pairing, and weapon/target assignment. The second is founded on traditional dynamic source routing (from mobile ad-hoc networking), but leverages the results of ad-hoc command and control (from the contributed auctioning algorithm) to achieve significant increases in connection reliability through forward prediction. Emphasis is placed on the advantages gained from the closed-loop interaction of the multiple technologies in the network-centric application environment.
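A minimal, synchronous sketch of an auction-style assignment (in the spirit of Bertsekas' forward auction; the distributed asynchronous variant discussed in the paper is more involved), with a hypothetical payoff matrix:

```python
def auction_assignment(payoff, eps=None):
    """Forward auction for the maximum-payoff assignment. With integer
    payoffs and eps < 1/n the result is optimal; each unassigned bidder
    raises the price of its best object and takes it."""
    n = len(payoff)
    eps = eps if eps is not None else 1.0 / (n + 1)
    prices = [0.0] * n
    owner = [None] * n          # owner[j] = bidder currently holding object j
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # Net values of all objects for bidder i at current prices.
        values = [payoff[i][j] - prices[j] for j in range(n)]
        j = max(range(n), key=values.__getitem__)
        best = values[j]
        second = max(v for k, v in enumerate(values) if k != j) if n > 1 else best
        prices[j] += best - second + eps   # bid just enough to win
        if owner[j] is not None:
            unassigned.append(owner[j])    # displaced bidder re-enters
        owner[j] = i
    return {owner[j]: j for j in range(n)}

# Hypothetical bidder-to-task payoffs (e.g. weapon/target pairing scores).
payoff = [
    [10, 2, 3],
    [4, 8, 1],
    [2, 3, 9],
]
print(auction_assignment(payoff))  # {0: 0, 1: 1, 2: 2}
```

The auction view maps naturally onto distributed settings because each bidder needs only the current prices, not the full global state.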

  11. 13Check_RNA: A tool to evaluate 13C chemical shifts assignments of RNA.

    PubMed

    Icazatti, A A; Martin, O A; Villegas, M; Szleifer, I; Vila, J A

    2018-06-19

    Chemical shifts (CS) are an important source of structural information for macromolecules such as RNA. In addition to the scarce availability of CS for RNA, the observed values are prone to errors due to wrong re-calibration or mis-assignments. Different groups have dedicated their efforts to correcting systematic CS errors in RNA. Despite this, there are no automated, freely available algorithms for correcting assignments of RNA 13C CS before their deposition to the BMRB, or for re-referencing already deposited CS with systematic errors. Based on an existing method, we have implemented an open-source Python module to correct systematic errors in 13C CS (from here on 13Cexp) of RNAs and return the results in three formats, including the NMR-STAR one. This software is available on GitHub at https://github.com/BIOS-IMASL/13Check_RNA under an MIT license. Supplementary data are available at Bioinformatics online.

  12. Fleet Assignment Using Collective Intelligence

    NASA Technical Reports Server (NTRS)

    Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.

    2004-01-01

    Product distribution theory is a new collective-intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a non-linear-objective, cold-start fleet assignment problem. Results show that the optimizer can successfully solve such highly constrained problems (130 variables, 184 constraints).

  13. An algorithm for identification and classification of individuals with type 1 and type 2 diabetes mellitus in a large primary care database

    PubMed Central

    Sharma, Manuj; Petersen, Irene; Nazareth, Irwin; Coton, Sonia J

    2016-01-01

    Background Research into diabetes mellitus (DM) often requires a reproducible method for identifying and distinguishing individuals with type 1 DM (T1DM) and type 2 DM (T2DM). Objectives To develop a method to identify individuals with T1DM and T2DM using UK primary care electronic health records. Methods Using data from The Health Improvement Network primary care database, we developed a two-step algorithm. The first algorithm step identified individuals with potential T1DM or T2DM based on diagnostic records, treatment, and clinical test results. We excluded individuals with records for rarer DM subtypes only. For individuals to be considered diabetic, they needed to have at least two records indicative of DM; one of which was required to be a diagnostic record. We then classified individuals with T1DM and T2DM using the second algorithm step. A combination of diagnostic codes, medication prescribed, age at diagnosis, and whether the case was incident or prevalent were used in this process. We internally validated this classification algorithm through comparison against an independent clinical examination of The Health Improvement Network electronic health records for a random sample of 500 DM individuals. Results Out of 9,161,866 individuals aged 0–99 years from 2000 to 2014, we classified 37,693 individuals with T1DM and 418,433 with T2DM, while 1,792 individuals remained unclassified. A small proportion were classified with some uncertainty (1,155 [3.1%] of all individuals with T1DM and 6,139 [1.5%] with T2DM) due to unclear health records. During validation, manual assignment of DM type based on clinical assessment of the entire electronic record and algorithmic assignment led to equivalent classification in all instances. Conclusion The majority of individuals with T1DM and T2DM can be readily identified from UK primary care electronic health records. Our approach can be adapted for use in other health care settings. PMID:27785102

  14. An algorithm for identification and classification of individuals with type 1 and type 2 diabetes mellitus in a large primary care database.

    PubMed

    Sharma, Manuj; Petersen, Irene; Nazareth, Irwin; Coton, Sonia J

    2016-01-01

    Research into diabetes mellitus (DM) often requires a reproducible method for identifying and distinguishing individuals with type 1 DM (T1DM) and type 2 DM (T2DM). To develop a method to identify individuals with T1DM and T2DM using UK primary care electronic health records. Using data from The Health Improvement Network primary care database, we developed a two-step algorithm. The first algorithm step identified individuals with potential T1DM or T2DM based on diagnostic records, treatment, and clinical test results. We excluded individuals with records for rarer DM subtypes only. For individuals to be considered diabetic, they needed to have at least two records indicative of DM; one of which was required to be a diagnostic record. We then classified individuals with T1DM and T2DM using the second algorithm step. A combination of diagnostic codes, medication prescribed, age at diagnosis, and whether the case was incident or prevalent were used in this process. We internally validated this classification algorithm through comparison against an independent clinical examination of The Health Improvement Network electronic health records for a random sample of 500 DM individuals. Out of 9,161,866 individuals aged 0-99 years from 2000 to 2014, we classified 37,693 individuals with T1DM and 418,433 with T2DM, while 1,792 individuals remained unclassified. A small proportion were classified with some uncertainty (1,155 [3.1%] of all individuals with T1DM and 6,139 [1.5%] with T2DM) due to unclear health records. During validation, manual assignment of DM type based on clinical assessment of the entire electronic record and algorithmic assignment led to equivalent classification in all instances. The majority of individuals with T1DM and T2DM can be readily identified from UK primary care electronic health records. Our approach can be adapted for use in other health care settings.
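The classification step can be caricatured as a rule cascade over codes, treatment and age at diagnosis; the thresholds and fields below are illustrative stand-ins, not the published criteria:

```python
def classify_dm(age_at_diagnosis, on_insulin_only, has_t1dm_code, has_t2dm_code):
    """Toy two-step-style classifier. The real algorithm combines
    diagnostic codes, prescriptions, age at diagnosis and incident/
    prevalent status; every threshold here is invented for illustration."""
    if has_t1dm_code and not has_t2dm_code:
        return "T1DM"
    if has_t2dm_code and not has_t1dm_code:
        return "T2DM"
    # Conflicting or missing codes: fall back on treatment and age.
    if on_insulin_only and age_at_diagnosis < 35:
        return "T1DM"
    return "T2DM"

print(classify_dm(22, True, False, False))   # T1DM
print(classify_dm(58, False, True, True))    # T2DM
```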

  15. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    PubMed

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of a GRN. Algorithms combining the RNN with machine learning schemes have been proposed to reconstruct small-scale GRNs from gene expression time series. We present new GRN reconstruction methods based on neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: an edge rank assignment step and a network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of the wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of the top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure-learning procedure. Results show that RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often produces more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. By combining, via a weighted majority voting rule, the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.

  16. Active Learning with Irrelevant Examples

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Mazzoni, Dominic

    2009-01-01

    An improved active learning method has been devised for training data classifiers. One example of a data classifier is the algorithm used by the United States Postal Service since the 1960s to recognize scans of handwritten digits for processing zip codes. Active learning algorithms enable rapid training with minimal investment of time on the part of human experts to provide training examples consisting of correctly classified (labeled) input data. They function by identifying which examples would be most profitable for a human expert to label. The goal is to maximize classifier accuracy while minimizing the number of examples the expert must label. Although there are several well-established methods for active learning, they may not operate well when irrelevant examples are present in the data set. That is, they may select an item for labeling that the expert simply cannot assign to any of the valid classes. In the context of classifying handwritten digits, the irrelevant items may include stray marks, smudges, and mis-scans. Querying the expert about these items results in wasted time or erroneous labels, if the expert is forced to assign the item to one of the valid classes. In contrast, the new algorithm provides a specific mechanism for avoiding querying the irrelevant items. This algorithm has two components: an active learner (which could be a conventional active learning algorithm) and a relevance classifier. The combination of these components yields a method, denoted Relevance Bias, that enables the active learner to avoid querying irrelevant data so as to increase its learning rate and efficiency when irrelevant items are present. The algorithm collects irrelevant data in a set of rejected examples, then trains the relevance classifier to distinguish between labeled (relevant) training examples and the rejected ones. 
The active learner combines its ranking of the items with the probability that they are relevant to yield a final decision about which item to present to the expert for labeling. Experiments on several data sets have demonstrated that the Relevance Bias approach significantly decreases the number of irrelevant items queried and also accelerates learning speed.
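    The final query-selection step can be sketched in a few lines; this is a minimal illustration of the idea, not the authors' code, and the scores below are hypothetical:

```python
# Sketch of the Relevance Bias combination (illustrative only): weight each
# unlabeled item's active-learning informativeness score by the relevance
# classifier's probability that the item belongs to a valid class.

def relevance_biased_query(al_scores, relevance_probs):
    """Return the index of the item to present to the expert.

    al_scores: higher = more informative to the active learner.
    relevance_probs: estimated P(item is relevant to a valid class).
    """
    combined = [s * p for s, p in zip(al_scores, relevance_probs)]
    return max(range(len(combined)), key=combined.__getitem__)

# Item 1 is the most informative but is almost certainly irrelevant
# (e.g. a smudge), so the biased query prefers item 2 instead.
al_scores = [0.4, 0.9, 0.7]
relevance_probs = [0.95, 0.05, 0.90]
print(relevance_biased_query(al_scores, relevance_probs))  # -> 2
```

    An unbiased active learner would query item 1 here and waste the expert's time on an unlabelable item.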

  17. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network

    PubMed Central

    Lin, Kai; Wang, Di; Hu, Long

    2016-01-01

    With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods. PMID:27376302

  18. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.

    PubMed

    Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe

    2012-04-01

    We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in the multimedia feature space and the historical RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated the framework's advantages in precision, robustness, scalability, and computational efficiency.

  19. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    NASA Astrophysics Data System (ADS)

    Leibfarth, S.; Eckert, F.; Welz, S.; Siegel, C.; Schmidt, H.; Schwenzer, N.; Zips, D.; Thorwarth, D.

    2015-07-01

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only.

  20. A fast global fitting algorithm for fluorescence lifetime imaging microscopy based on image segmentation.

    PubMed

    Pelet, S; Previte, M J R; Laiho, L H; So, P T C

    2004-10-01

    Global fitting algorithms have been shown to improve effectively the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in a significantly slower convergence of the data fitting algorithm as compared with unconstrained analysis. Convergence speed can be greatly accelerated by providing appropriate initial guesses. Realizing that the image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed to assign initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. This method further allows second harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. In our experimental measurements, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society

  1. Did death certificates and a death review process agree on lung cancer cause of death in the National Lung Screening Trial?

    PubMed

    Marcus, Pamela M; Doria-Rose, Vincent Paul; Gareen, Ilana F; Brewer, Brenda; Clingan, Kathy; Keating, Kristen; Rosenbaum, Jennifer; Rozjabek, Heather M; Rathmell, Joshua; Sicks, JoRean; Miller, Anthony B

    2016-08-01

    Randomized controlled trials frequently use death review committees to assign a cause of death rather than relying on cause of death information from death certificates. The National Lung Screening Trial, a randomized controlled trial of lung cancer screening with low-dose computed tomography versus chest X-ray for heavy and/or long-term smokers ages 55-74 years at enrollment, used a committee blinded to arm assignment for a subset of deaths to determine whether cause of death was due to lung cancer. Deaths were selected for review using a pre-determined computerized algorithm. The algorithm, which considered cancers diagnosed during the trial, causes and significant conditions listed on the death certificate, and the underlying cause of death derived from death certificate information by trained nosologists, selected deaths that were most likely to represent a death due to lung cancer (either directly or indirectly) and deaths that might have been erroneously assigned lung cancer as the cause of death. The algorithm also selected deaths that might be due to adverse events of diagnostic evaluation for lung cancer. Using the review cause of death as the gold standard and lung cancer cause of death as the outcome of interest (dichotomized as lung cancer versus not lung cancer), we calculated performance measures of the death certificate cause of death. We also recalculated the trial primary endpoint using the death certificate cause of death. In all, 1642 deaths were reviewed and assigned a cause of death (42% of the 3877 National Lung Screening Trial deaths). Sensitivity of death certificate cause of death was 91%; specificity, 97%; positive predictive value, 98%; and negative predictive value, 89%. About 40% of the deaths reclassified to lung cancer cause of death had a death certificate cause of death of a neoplasm other than lung. 
Using the death certificate cause of death, the lung cancer mortality reduction was 18% (95% confidence interval: 4.2-25.0), as compared with the published finding of 20% (95% confidence interval: 6.7-26.7). Death review may not be necessary for primary-outcome analyses in lung cancer screening trials. If deemed necessary, researchers should strive to streamline the death review process as much as possible. © The Author(s) 2016.
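    The performance measures quoted above follow from a standard 2x2 confusion table. A minimal sketch with illustrative counts (not the actual NLST tallies):

```python
# Compute sensitivity, specificity, PPV, and NPV from 2x2 counts, treating
# the death-review cause of death as the gold standard. Counts are made up.

def performance(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # lung-cancer deaths correctly identified
        "specificity": tn / (tn + fp),  # non-lung-cancer deaths correctly identified
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

m = performance(tp=90, fp=2, fn=10, tn=98)
print(round(m["sensitivity"], 2), round(m["specificity"], 2))  # -> 0.9 0.98
```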

  2. An algorithm for a selective use of throat swabs in the diagnosis of group A streptococcal pharyngo-tonsillitis in general practice.

    PubMed

    Hoffmann, S

    1992-12-01

    A prospective evaluation was made of an algorithm for a selective use of throat swabs in patients with sore throat in general practice. The algorithm states that a throat swab should be obtained (a) in all children younger than 15 years; (b) in patients aged 15 years or more who have pain on swallowing and at least three of four signs (enlarged or hyperaemic tonsils; exudate; enlarged or tender angular lymph nodes; and a temperature > or = 38 degrees C); and (c) in adults aged 15-44 years with pain on swallowing and one or two of the four signs, but not both cough and coryza. Group A streptococci were found by laboratory culture in 30% of throat swabs from 1783 patients. Using these results as the reference, the algorithm was 95% sensitive and 26% specific, and assigned 80% of the patients to be swabbed. Its positive and negative predictive values in this setting were 36% and 92%, respectively. It is concluded that this algorithm may be useful in general practice.
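    The three selection rules translate directly into code; a minimal sketch (function and field names are my own, not from the paper):

```python
# Encode the swab-selection algorithm described above. n_signs counts the
# four signs: enlarged/hyperaemic tonsils, exudate, enlarged/tender angular
# lymph nodes, and temperature >= 38 degrees C.

def should_swab(age, pain_on_swallowing, n_signs, cough_and_coryza):
    if age < 15:                                    # rule (a): all children
        return True
    if pain_on_swallowing and n_signs >= 3:         # rule (b): age >= 15
        return True
    if (15 <= age <= 44 and pain_on_swallowing      # rule (c): adults 15-44
            and 1 <= n_signs <= 2 and not cough_and_coryza):
        return True
    return False

print(should_swab(age=30, pain_on_swallowing=True, n_signs=2,
                  cough_and_coryza=False))  # -> True
print(should_swab(age=50, pain_on_swallowing=True, n_signs=1,
                  cough_and_coryza=False))  # -> False
```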

  3. Reducing Earth Topography Resolution for SMAP Mission Ground Tracks Using K-Means Clustering

    NASA Technical Reports Server (NTRS)

    Rizvi, Farheen

    2013-01-01

    The K-means clustering algorithm is used to reduce Earth topography resolution for the SMAP mission ground tracks. As SMAP propagates in orbit, knowledge of the radar antenna footprints on Earth is required for the antenna misalignment calibration. Each antenna footprint contains a latitude and longitude location pair on the Earth surface. There are 400 pairs in one data set for the calibration model. It is computationally expensive to calculate corresponding Earth elevation for these data pairs. Thus, the antenna footprint resolution is reduced. Similar topographical data pairs are grouped together with the K-means clustering algorithm. The resolution is reduced to the mean of each topographical cluster called the cluster centroid. The corresponding Earth elevation for each cluster centroid is assigned to the entire group. Results show that 400 data points are reduced to 60 while still maintaining algorithm performance and computational efficiency. In this work, sensitivity analysis is also performed to show a trade-off between algorithm performance versus computational efficiency as the number of cluster centroids and algorithm iterations are increased.
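    The reduction step can be illustrated with a plain Lloyd's k-means on toy coordinates; this is not the SMAP implementation, just the idea of trading many footprint points for a handful of centroids:

```python
import random

# Toy illustration: reduce many (lat, lon) pairs to a few cluster centroids
# via Lloyd's k-means, so elevation is looked up once per centroid instead
# of once per antenna footprint.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: (p[0] - centroids[j][0]) ** 2
                                            + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[j]                     # keep empty clusters put
            for j, c in enumerate(clusters)
        ]
    return centroids

points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(sorted(round(c[0]) for c in kmeans(points, k=2)))  # -> [0, 5]
```

    Each centroid's elevation lookup is then shared by every footprint in its cluster.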

  4. Man-Made Object Extraction from Remote Sensing Imagery by Graph-Based Manifold Ranking

    NASA Astrophysics Data System (ADS)

    He, Y.; Wang, X.; Hu, X. Y.; Liu, S. H.

    2018-04-01

    The automatic extraction of man-made objects from remote sensing imagery is useful in many applications. This paper proposes an algorithm for extracting man-made objects automatically by integrating a graph model with the manifold ranking algorithm. Initially, we estimate an a priori value of the man-made objects with the use of symmetric and contrast features. The graph model is established to represent the spatial relationships among pre-segmented superpixels, which are used as the graph nodes. Multiple characteristics, namely colour, texture and main direction, are used to compute the weights of the adjacent nodes. Manifold ranking effectively explores the relationships among all the nodes in the feature space as well as the initial query assignment; thus, it is applied to generate a ranking map, which indicates the scores of the man-made objects. The man-made objects are then segmented on the basis of the ranking map. Two typical segmentation algorithms are compared with the proposed algorithm. Experimental results show that the proposed algorithm can extract man-made objects with a high recognition rate and a low omission rate.

  5. An automated framework for NMR resonance assignment through simultaneous slice picking and spin system forming.

    PubMed

    Abbas, Ahmed; Guo, Xianrong; Jing, Bing-Yi; Gao, Xin

    2014-06-01

    Despite significant advances in automated nuclear magnetic resonance-based protein structure determination, the high numbers of false positives and false negatives among the peaks selected by fully automated methods remain a problem. These false positives and negatives impair the performance of resonance assignment methods. One of the main reasons for this problem is that the computational research community often considers peak picking and resonance assignment to be two separate problems, whereas spectroscopists use expert knowledge to pick peaks and assign their resonances at the same time. We propose a novel framework that simultaneously conducts slice picking and spin system forming, an essential step in resonance assignment. Our framework then employs a genetic algorithm, directed by both connectivity information and amino acid typing information from the spin systems, to assign the spin systems to residues. The inputs to our framework can be as few as two commonly used spectra, i.e., CBCA(CO)NH and HNCACB. Different from the existing peak picking and resonance assignment methods that treat peaks as the units, our method is based on 'slices', which are one-dimensional vectors in three-dimensional spectra that correspond to certain ([Formula: see text]) values. Experimental results on both benchmark simulated data sets and four real protein data sets demonstrate that our method significantly outperforms the state-of-the-art methods while using fewer spectra than those methods. Our method is freely available at http://sfb.kaust.edu.sa/Pages/Software.aspx.

  6. A new routing enhancement scheme based on node blocking state advertisement in wavelength-routed WDM networks

    NASA Astrophysics Data System (ADS)

    Hu, Peigang; Jin, Yaohui; Zhang, Chunlei; He, Hao; Hu, WeiSheng

    2005-02-01

    The increasing switching capacity brings considerable complexity to the optical node. Due to limitations in cost and technology, an optical node is often designed with partial switching capability and partial resource sharing. This means that the node is blocking to some extent; examples include the multi-granularity switching node, which in fact uses pass-through wavelengths to reduce the dimension of the OXC, and the partial-sharing wavelength converter (WC) OXC. It is conceivable that these blocking nodes will have great effects on the problem of routing and wavelength assignment. Some previous works studied the blocking case of the partial WC OXC using complicated wavelength assignment algorithms, but the complexity of these schemes makes them impractical in real networks. In this paper, we propose a new scheme based on node blocking state advertisement to reduce the retry or rerouting probability and improve the efficiency of routing in networks with blocking nodes. In the scheme, node blocking states are advertised to the other nodes in the network and used in subsequent route calculations to find the path with the lowest blocking probability. The performance of the scheme is evaluated using a discrete event model on the 14-node NSFNET, in which every node employs a partial-sharing WC OXC structure. In the simulation, a simple First-Fit wavelength assignment algorithm is used. The simulation results demonstrate that the new scheme considerably reduces the retry or rerouting probability in the routing process.
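    The First-Fit rule mentioned in the simulation is the simplest wavelength-assignment policy; a minimal sketch (the link and wavelength bookkeeping here is hypothetical):

```python
# First-Fit wavelength assignment: scan wavelengths in a fixed order and
# take the first one that is free on every link of the candidate path.

def first_fit(path_links, used, n_wavelengths):
    """used[link] is the set of wavelength indices already occupied there."""
    for w in range(n_wavelengths):
        if all(w not in used[link] for link in path_links):
            return w
    return None  # blocked: the caller must retry or reroute

used = {"A-B": {0, 1}, "B-C": {1, 2}}
print(first_fit(["A-B", "B-C"], used, n_wavelengths=4))  # -> 3
```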

  7. Performance evaluation of distributed wavelength assignment in WDM optical networks

    NASA Astrophysics Data System (ADS)

    Hashiguchi, Tomohiro; Wang, Xi; Morikawa, Hiroyuki; Aoyama, Tomonori

    2004-04-01

    In WDM wavelength-routed networks, prior to a data transfer, a call setup procedure is required to reserve a wavelength path between the source-destination node pair. A distributed approach to connection setup can achieve very high speed while improving the reliability and reducing the implementation cost of the network. However, along with many advantages, the distributed scheme poses several major challenges in how wavelength management and allocation can be carried out efficiently. In this paper, we apply a distributed wavelength assignment algorithm named priority-based wavelength assignment (PWA), originally proposed for burst-switched optical networks, to the problem of reserving wavelengths in the path reservation protocols of distributed-control optical networks. Instead of assigning wavelengths randomly, this approach lets each node select the "safest" wavelengths based on the history of wavelength utilization, so that unnecessary future contention is prevented. The simulation results presented in this paper show that the proposed protocol can enhance the performance of the system without introducing any apparent drawbacks.

  8. Meaningless comparisons lead to false optimism in medical machine learning

    PubMed Central

    Kording, Konrad; Recht, Benjamin

    2017-01-01

    A new trend in medicine is the use of algorithms to analyze big datasets, e.g. using everything your phone measures about you for diagnostics or monitoring. However, these algorithms are commonly compared against weak baselines, which may contribute to excessive optimism. To assess how well an algorithm works, scientists typically ask how well its output correlates with medically assigned scores. Here we perform a meta-analysis to quantify how the literature evaluates their algorithms for monitoring mental wellbeing. We find that the bulk of the literature (∼77%) uses meaningless comparisons that ignore patient baseline state. For example, having an algorithm that uses phone data to diagnose mood disorders would be useful. However, it is possible to explain over 80% of the variance of some mood measures in the population by simply guessing that each patient has their own average mood—the patient-specific baseline. Thus, an algorithm that just predicts that our mood is like it usually is can explain the majority of variance, but is, obviously, entirely useless. Comparing to the wrong (population) baseline has a massive effect on the perceived quality of algorithms and produces baseless optimism in the field. To solve this problem we propose “user lift” that reduces these systematic errors in the evaluation of personalized medical monitoring. PMID:28949964
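    The baseline argument can be reproduced numerically. A toy illustration with made-up mood scores for two patients:

```python
# Variance explained (R^2) by a predictor that simply outputs each
# patient's own average mood. Mood scores below are invented.

def var_explained(y, yhat):
    mean_y = sum(y) / len(y)
    ss_tot = sum((v - mean_y) ** 2 for v in y)
    ss_res = sum((v - p) ** 2 for v, p in zip(y, yhat))
    return 1 - ss_res / ss_tot

# Two patients with stable but different average moods.
moods = {"p1": [7, 8, 7, 8], "p2": [2, 3, 2, 3]}
y = [v for scores in moods.values() for v in scores]
baseline = [sum(s) / len(s) for s in moods.values() for _ in s]
print(round(var_explained(y, baseline), 2))  # -> 0.96
```

    Despite predicting nothing beyond each patient's average, the baseline already "explains" 96% of the variance here, which is why comparing against a population-level baseline flatters an algorithm.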

  9. The IMS Software Integration Platform

    DTIC Science & Technology

    1993-04-12

    products to incorporate all data shared by the IMS applications. Some entities (time-series, images, algorithm-specific parameters) must be managed... dbwhoami, dbcancel. Transaction Management: dbcommit, dbrollback. Key Counter Assignment: dbgetcounter. String Handling: cstr_to_pad, pad_to_cstr. Error... increment *value; String Manipulation: int cstr_to_pad(array, string, array_length) char *array, *string; int array_length; int pad_to_cstr(string...

  10. Cyber Vigilance: The Human Factor

    DTIC Science & Technology

    2016-10-21

    88ABW-2014-5661; American Intelligence Journal. Cyber-defenders face lengthy, repetitive work assignments with few critical signals and little... research is inadvisable. To understand this unique domain, we asked participants to perform a simulated cybersecurity task, searching for attack... detection. To avoid this, IDS detection algorithms are purposely liberal, broadly flagging any activity that resembles a known...

  11. Detecting and accounting for multiple sources of positional variance in peak list registration analysis and spin system grouping.

    PubMed

    Smelter, Andrey; Rouchka, Eric C; Moseley, Hunter N B

    2017-08-01

    Peak lists derived from nuclear magnetic resonance (NMR) spectra are commonly used as input data for a variety of computer-assisted and automated analyses. These include automated protein resonance assignment and protein structure calculation software tools. Prior to these analyses, peak lists must be aligned to each other and sets of related peaks must be grouped based on common chemical shift dimensions. Even when programs can perform peak grouping, they require the user to provide uniform match tolerances or use default values. However, peak grouping is further complicated by multiple sources of variance in peak position, limiting the effectiveness of grouping methods that utilize uniform match tolerances. In addition, no method currently exists for deriving peak positional variances from single peak lists for grouping peaks into spin systems, i.e. spin system grouping within a single peak list. Therefore, we developed a complementary pair of peak list registration analysis and spin system grouping algorithms designed to overcome these limitations. We have implemented these algorithms into an approach that can identify multiple dimension-specific positional variances that exist in a single peak list and group peaks from a single peak list into spin systems. The resulting software tools generate a variety of useful statistics on both a single peak list and pairwise peak list alignment, especially for quality assessment of peak list datasets. We used a range of low- and high-quality experimental solution NMR and solid-state NMR peak lists to assess the performance of our registration analysis and grouping algorithms. Analyses show that an algorithm using a single iteration with uniform match tolerances recovers only 50 to 80% of the spin systems, due to the presence of multiple sources of variance. Our algorithm recovers additional spin systems by reevaluating match tolerances in multiple iterations.
To facilitate evaluation of the algorithms, we developed a peak list simulator within our nmrstarlib package that generates user-defined assigned peak lists from a given BMRB entry or database of entries. In addition, over 100,000 simulated peak lists with one or two sources of variance were generated to evaluate the performance and robustness of these new registration analysis and peak grouping algorithms.

  12. A mixed-mode traffic assignment model with new time-flow impedance function

    NASA Astrophysics Data System (ADS)

    Lin, Gui-Hua; Hu, Yu; Zou, Yuan-Yang

    2018-01-01

    Recently, with the wide adoption of electric vehicles, transportation networks have shown different characteristics and been further developed. In this paper, we present a new time-flow impedance function, which may be more realistic than the existing time-flow impedance functions. Based on this new impedance function, we present an optimization model for a mixed-mode traffic network in which battery electric vehicles (BEVs) and gasoline vehicles (GVs) are chosen. We suggest two approaches to handle the model: one uses the interior point (IP) algorithm and the other employs the sequential quadratic programming (SQP) algorithm. Three numerical examples are presented to illustrate the efficiency of these approaches. In particular, our numerical results show that more travelers prefer to choose BEVs when the distance limit of BEVs is long enough and the unit operating cost of GVs is higher than that of BEVs, and that the SQP algorithm is faster than the IP algorithm.

  13. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    NASA Astrophysics Data System (ADS)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable, and how to recognize ships with fuzzy features remains an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated and GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is evaluated against the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large simulated image set show that our approach is effective in distinguishing different ship types and obtains satisfactory ship recognition performance.
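    The recognition step can be sketched with a single Gaussian per ship class on one scalar moment feature; this is a deliberate simplification of the paper's Hu-moment vectors and full mixtures, and the feature values below are made up:

```python
import math

# Fit one Gaussian per class on a scalar feature, then assign a new image
# to the class with the highest log-likelihood (a one-component stand-in
# for the paper's Gaussian mixture models).

def fit_gaussian(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def log_pdf(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def classify(x, models):
    return max(models, key=lambda c: log_pdf(x, *models[c]))

models = {
    "cargo":   fit_gaussian([1.0, 1.2, 0.9, 1.1]),
    "frigate": fit_gaussian([3.0, 3.3, 2.8, 3.1]),
}
print(classify(1.05, models))  # -> cargo
```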

  14. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.

    PubMed

    Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem for assigning a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving the NRP. This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving the multiobjective problem of optimizing scheduling problems. MODBCO is an integration of deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.

  15. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem

    PubMed Central

    Amudhavel, J.; Pothula, Sujatha; Dhavachelvan, P.

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem for assigning a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving the NRP. This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving the multiobjective problem of optimizing scheduling problems. MODBCO is an integration of deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria. PMID:28473849

  16. Traffic sharing algorithms for hybrid mobile networks

    NASA Technical Reports Server (NTRS)

    Arcand, S.; Murthy, K. M. S.; Hafez, R.

    1995-01-01

    In a hybrid (terrestrial + satellite) mobile personal communications network environment, a large satellite footprint (supercell) overlays a large number of smaller, contiguous terrestrial cells. We assume that users have either a terrestrial-only single mode terminal (SMT) or a terrestrial/satellite dual mode terminal (DMT), and the ratio of DMTs to the total number of terminals is defined as gamma. It is assumed that call assignments to, and handovers between, terrestrial cells and satellite supercells take place dynamically when necessary. The objectives of this paper are twofold: (1) to propose and define a class of traffic sharing algorithms to manage terrestrial and satellite network resources efficiently by handling call handovers dynamically, and (2) to analyze and evaluate the algorithms by maximizing the traffic load handling capability (defined in erl/cell) over a wide range of terminal ratios (gamma), given an acceptable range of blocking probabilities. Two of the algorithms (G & S) in the proposed class perform extremely well for a wide range of gamma.

  17. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP

    PubMed Central

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of becoming trapped in local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator have jumping ability and global convergence, and local search has the ability to speed up convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, the mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator were used to increase the ant population's diversity from time to time, and the local search was used to exploit the current search area efficiently. The comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality. PMID:27999590

  18. The effectiveness of a new algorithm on a three-dimensional finite element model construction of bone trabeculae in implant biomechanics.

    PubMed

    Sato, Y; Teixeira, E R; Tsuga, K; Shindoi, N

    1999-08-01

    Greater validity of finite element analysis (FEA) in implant biomechanics requires element downsizing; however, excessive downsizing demands more computer memory and calculation time. To evaluate the effectiveness of a new algorithm established for constructing more valid FEA models without downsizing, three-dimensional FEA bone trabeculae models with different element sizes (300, 150 and 75 micron) were constructed. Four algorithms of stepwise (1 to 4 ranks) assignment of Young's modulus according to the bone volume in each cubic element were used, and the stress distribution under vertical loading was then analysed. The model with 300 micron element size and 4 ranks of Young's moduli according to bone volume in each element presented a stress distribution similar to that of the model with the 75 micron element size. These results show that the new algorithm was effective, and the use of the 300 micron element for bone trabeculae representation is proposed, without critical changes in stress values and with possible savings in computer memory and calculation time in the laboratory.
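    The stepwise assignment amounts to binning each element's bone volume fraction into one of four ranks; a minimal sketch (the bin edges and moduli below are illustrative assumptions, not the paper's values):

```python
# Map each cubic element's bone volume fraction to one of 4 stepwise
# Young's moduli (GPa). Equal-width bins and moduli values are invented
# for illustration only.

def assign_modulus(bone_volume_fraction, moduli=(0.5, 5.0, 10.0, 15.0)):
    """bone_volume_fraction is in [0, 1]; returns the rank's modulus."""
    rank = min(int(bone_volume_fraction * 4), 3)  # 4 equal-width bins
    return moduli[rank]

for bvf in (0.1, 0.3, 0.6, 0.95):
    print(bvf, assign_modulus(bvf))
```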

  19. Distributed Optimal Dispatch of Distributed Energy Resources Over Lossy Communication Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Junfeng; Yang, Tao; Wu, Di

    In this paper, we consider the economic dispatch problem (EDP) over packet-dropping networks, where a cost function assumed to be strictly convex is assigned to each of the distributed energy resources (DERs). The goal of a standard EDP is to minimize the total generation cost while meeting the total demand and satisfying individual generator output limits. We propose a distributed algorithm for solving the EDP over such networks that is resilient against packet drops over communication links. Under the assumptions that the underlying communication network is strongly connected with a positive probability and that the packet drops are independent and identically distributed (i.i.d.), we show that the proposed algorithm solves the EDP. Numerical simulation results are used to validate and illustrate the main results of the paper.
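    For intuition, the optimality condition that EDP algorithms converge to is the equal-incremental-cost criterion: at the optimum all unconstrained generators operate at the same marginal cost. The sketch below solves this condition by centralized bisection for quadratic costs; it is a hypothetical illustration of the problem, not the paper's packet-drop-resilient distributed scheme.

    ```python
    def economic_dispatch(costs, demand, p_min, p_max, tol=1e-9):
        # costs: list of (a, b) with C_i(p) = a*p**2 + b*p, a > 0 (strictly
        # convex). Bisect on the common marginal cost lam; each generator
        # outputs p_i = (lam - b) / (2a), clipped to its limits.
        def outputs(lam):
            return [min(max((lam - b) / (2 * a), lo), hi)
                    for (a, b), lo, hi in zip(costs, p_min, p_max)]
        # Bracket lam by the marginal costs at the output limits.
        lo = min(2 * a * p + b for (a, b), p in zip(costs, p_min))
        hi = max(2 * a * p + b for (a, b), p in zip(costs, p_max))
        while hi - lo > tol:
            lam = (lo + hi) / 2
            if sum(outputs(lam)) < demand:
                lo = lam
            else:
                hi = lam
        return outputs((lo + hi) / 2)
    ```

    A distributed algorithm like the paper's replaces the central bisection with iterative neighbor-to-neighbor exchange of marginal-cost estimates, which is what makes resilience to dropped packets a nontrivial requirement.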

  20. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    NASA Astrophysics Data System (ADS)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
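    The dynamic track-to-thread assignment described above can be sketched with a shared work queue: idle threads pull the next track, so a few long tracks cannot stall the rest of the pool. The track representation and the "sweep" computation below are stand-ins for illustration, not the proxy applications' code.

    ```python
    import queue
    import threading

    def process_tracks(tracks, n_threads=4):
        # Dynamic load balancing: threads repeatedly pull the next available
        # track from a shared queue rather than receiving a fixed static slice.
        q = queue.Queue()
        for t in tracks:
            q.put(t)
        results, lock = [], threading.Lock()

        def worker():
            while True:
                try:
                    track = q.get_nowait()
                except queue.Empty:
                    return  # no tracks left; thread exits
                partial = sum(track)  # stand-in for the MOC transport sweep
                with lock:
                    results.append(partial)

        threads = [threading.Thread(target=worker) for _ in range(n_threads)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return sum(results)
    ```

    In a real MOC solver the per-track work (segments times energy groups) varies widely, which is exactly when pull-based assignment outperforms a static partition.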
