Author: Holme, O.
TUA3O01 Detector Controls Meets JEE on the Web
 
  • F. Glege, A. Andronidis, O. Chaze, C. Deldicque, M. Dobson, A.D. Dupont, D. Gigi, J. Hegeman, O. Holme, M. Janulis, R.J. Jiménez Estupiñán, L. Masetti, F. Meijers, E. Meschi, S. Morovic, C. Nunez-Barranco-Fernandez, L. Orsini, A. Petrucci, A. Racz, P. Roberts, H. Sakulin, C. Schwick, B. Stieger, S. Zaza, P. Zejdl
    CERN, Geneva, Switzerland
  • J.M. Andre, R.K. Mommsen, V. O'Dell
    Fermilab, Batavia, Illinois, USA
  • U. Behrens
    DESY, Hamburg, Germany
  • J. Branson, S. Cittolin, A. Holzner, M. Pieri
    UCSD, La Jolla, California, USA
  • G.L. Darlea, G. Gomez-Ceballos, C. Paus, J. Veverka
    MIT, Cambridge, Massachusetts, USA
  • S. Erhan
    UCLA, Los Angeles, California, USA
 
  Remote monitoring and control has been an important aspect of physics detector operation ever since it became available. Due to the complexity of the systems, the 24/7 running requirements and limited human resources, remote access to perform interventions is essential. The amount of data to visualize, the required visualization types and cybersecurity standards demand a professional, complete solution. Using the integration of the CMS detector controls system into our Oracle WebCenter infrastructure as an example, the mechanisms and tools available for integration with controls systems are discussed. Authentication has been delegated to WebCenter, while authorization is shared between the web server and the control system. Session handling exists in both systems and has to be matched, and concurrent access by multiple users has to be handled. The underlying JEE infrastructure is specialized in visualization and information sharing. On the other hand, the structure of a JEE system resembles that of a distributed controls system. Therefore, an outlook is given on tasks which could be covered by the web servers rather than the controls system.
Slides: TUA3O01 [2.606 MB]
 
MOPGF016 Improving the Compact Muon Solenoid Electromagnetic Calorimeter Control and Safety Systems for the Large Hadron Collider Run 2
 
  • D.R.S. Di Calafiori, G. Dissertori, L. Djambazov, O. Holme, W. Lustermann
    ETH, Zurich, Switzerland
  • P. Adzic, P. Cirkovic, D. Jovanovic
    VINCA, Belgrade, Serbia
  • S. Zelepoukine
    UW-Madison/PD, Madison, Wisconsin, USA
 
  Funding: Swiss National Science Foundation (SNSF); Ministry of Education, Science and Technological Development of Serbia
The first long shutdown of the Large Hadron Collider (LS1, 2013-2015) provided an opportunity for significant upgrades of the detector control and safety systems of the CMS Electromagnetic Calorimeter. A thorough evaluation was undertaken, building upon experience acquired during several years of detector operations. Substantial improvements were made to the monitoring systems in order to extend readout ranges and provide improved monitoring precision and data reliability. Additional remotely controlled hardware devices and automatic software routines were implemented to optimize the detector recovery time in the case of failures. The safety system was prepared in order to guarantee full support for both commercial off-the-shelf and custom hardware components throughout the next accelerator running period. The software applications were modified to operate on redundant host servers, to fulfil new requirements of the experiment. User interface extensions were also added to provide a more complete overview of the control system. This paper summarises the motivation, implementation and validation of the major improvements made to the hardware and software components during the LS1 and the early data-taking period of LHC Run 2.
 
Poster: MOPGF016 [2.392 MB]
 
MOPGF025 Enhancing the Detector Control System of the CMS Experiment with Object Oriented Modelling
 
  • R.J. Jiménez Estupiñán, A. Andronidis, O. Chaze, C. Deldicque, M. Dobson, A.D. Dupont, D. Gigi, F. Glege, J. Hegeman, M. Janulis, L. Masetti, F. Meijers, E. Meschi, S. Morovic, C. Nunez-Barranco-Fernandez, L. Orsini, A. Petrucci, A. Racz, P. Roberts, H. Sakulin, C. Schwick, B. Stieger, S. Zaza, P. Zejdl
    CERN, Geneva, Switzerland
  • J.M. Andre, R.K. Mommsen, V. O'Dell, P. Zejdl
    Fermilab, Batavia, Illinois, USA
  • U. Behrens
    DESY, Hamburg, Germany
  • J. Branson, S. Cittolin, A. Holzner, M. Pieri
    UCSD, La Jolla, California, USA
  • G.L. Darlea, G. Gomez-Ceballos, C. Paus, K. Sumorok, J. Veverka
    MIT, Cambridge, Massachusetts, USA
  • S. Erhan
    UCLA, Los Angeles, California, USA
  • O. Holme
    ETH, Zurich, Switzerland
 
  WinCC Open Architecture (WinCC OA) is used at CERN as the solution for many control system developments. This product models process variables in structures known as data points and offers a custom procedural scripting language called Control Language (CTRL). CTRL is also the language used to program the functionality of the native user interfaces (UIs) and is used by the WinCC OA based CERN control system frameworks. CTRL does not support object-oriented (OO) modelling by default. A lower-level OO application programming interface (API) is provided, but it requires significantly more expertise and development effort than CTRL. The Detector Control System group of the CMS experiment has developed CMSfwClass, a programming toolkit which adds OO behaviour to data points and CTRL. CMSfwClass reduces the semantic gap between high-level software design and the application domain. It increases maintainability, encapsulation, reusability and abstraction. This paper presents the details of the implementation as well as the benefits and use cases of CMSfwClass.
Poster: MOPGF025 [1.436 MB]
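The idea behind wrapping data points in classes can be illustrated with a minimal sketch. This is not CMSfwClass or CTRL code; the names (`DataPointStore`, `HighVoltageChannel`) and the data-point path convention are purely hypothetical, chosen only to show how an OO layer can hide flat process-variable paths behind methods.

```python
# Illustrative sketch only: the real CMSfwClass API and WinCC OA data-point
# layer are not reproduced here.

class DataPointStore:
    """Stands in for the data-point layer: a flat path -> value map."""
    def __init__(self):
        self._dps = {}

    def set(self, path, value):
        self._dps[path] = value

    def get(self, path):
        return self._dps[path]


class HighVoltageChannel:
    """OO view of one channel: hides the data-point paths behind methods."""
    def __init__(self, store, name):
        self._store = store
        self._base = f"HV/{name}"  # hypothetical path scheme

    def set_voltage(self, volts):
        self._store.set(f"{self._base}.settings.v0", volts)

    def voltage(self):
        return self._store.get(f"{self._base}.settings.v0")


store = DataPointStore()
ch = HighVoltageChannel(store, "channel042")
ch.set_voltage(1500.0)
# Callers never see the raw path "HV/channel042.settings.v0":
assert ch.voltage() == 1500.0
assert store.get("HV/channel042.settings.v0") == 1500.0
```

The encapsulation shown here is the point of the abstract: application code manipulates objects, while the mapping to individual process variables lives in one place.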
 
MOPGF120 CAN Over Ethernet Gateways: A Convenient and Flexible Solution to Access Low Level Control Devices
 
  • G. Thomas, D. Davids
    CERN, Geneva, Switzerland
  • O. Holme
    ETH, Zurich, Switzerland
 
  CAN bus is a recommended fieldbus at CERN. It is widely used in the control systems of the experiments to control and monitor large amounts of equipment (I/O devices, front-end electronics, power supplies). CAN nodes are distributed over buses that are interfaced to the computers via PCI or USB CAN interfaces. These interfaces limit the possible evolution of the Detector Control Systems (DCS). For instance, PCI cards are not compatible with all computer hardware, and new requirements for virtualization and redundancy call for dynamic reallocation of CAN bus interfaces to different computers. Additionally, these interfaces cannot be installed at a different location from the front-end computers. Ethernet-based CAN interfaces resolve these issues by providing network access to the fieldbuses. The Ethernet-CAN gateways from Analytica (GmbH) were evaluated to determine whether they meet the hardware and software specifications of CERN. This paper presents the evaluation methodology and results, and highlights the benefits of using such gateways in experiment production environments. Preliminary experience with the Analytica interfaces in the DCS of the CMS experiment is presented.
Poster: MOPGF120 [3.051 MB]
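The core of any CAN-over-Ethernet gateway is serialising CAN frames for transport over a TCP/IP stream. The Analytica gateway's actual wire protocol is not described in the abstract, so the framing below is purely illustrative: a minimal, hypothetical encoding of a classic CAN frame (11-bit identifier, up to 8 data bytes) that shows the general idea.

```python
import struct

# Hypothetical framing for carrying classic CAN frames in a byte stream;
# NOT the Analytica gateway protocol.

def pack_can_frame(can_id, data):
    """Pack an 11-bit CAN identifier and up to 8 data bytes."""
    if not 0 <= can_id <= 0x7FF:
        raise ValueError("standard CAN identifiers are 11 bits")
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    # big-endian: 2-byte identifier, 1-byte DLC, payload padded to 8 bytes
    return struct.pack(">HB8s", can_id, len(data), data.ljust(8, b"\x00"))

def unpack_can_frame(frame):
    """Inverse of pack_can_frame: recover (identifier, data)."""
    can_id, dlc, payload = struct.unpack(">HB8s", frame)
    return can_id, payload[:dlc]

frame = pack_can_frame(0x123, b"\x01\x02")
assert unpack_can_frame(frame) == (0x123, b"\x01\x02")
```

With such an encoding, a gateway simply forwards packed frames between its TCP socket and the physical CAN bus, which is what decouples the bus location from the front-end computer.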
 
WEPGF013 Increasing Availability by Implementing Software Redundancy in the CMS Detector Control System
 
  • L. Masetti, A. Andronidis, O. Chaze, C. Deldicque, M. Dobson, A.D. Dupont, D. Gigi, F. Glege, J. Hegeman, M. Janulis, R.J. Jiménez Estupiñán, F. Meijers, E. Meschi, S. Morovic, C. Nunez-Barranco-Fernandez, L. Orsini, A. Petrucci, A. Racz, P. Roberts, H. Sakulin, C. Schwick, B. Stieger, S. Zaza, P. Zejdl
    CERN, Geneva, Switzerland
  • J.M. Andre, R.K. Mommsen, V. O'Dell, P. Zejdl
    Fermilab, Batavia, Illinois, USA
  • U. Behrens
    DESY, Hamburg, Germany
  • J. Branson, S. Cittolin, A. Holzner, M. Pieri
    UCSD, La Jolla, California, USA
  • G.L. Darlea, G. Gomez-Ceballos, C. Paus, K. Sumorok, J. Veverka
    MIT, Cambridge, Massachusetts, USA
  • S. Erhan
    UCLA, Los Angeles, California, USA
  • O. Holme
    ETH, Zurich, Switzerland
 
  Funding: Swiss National Science Foundation (SNSF).
The Detector Control System (DCS) of the Compact Muon Solenoid (CMS) experiment ran with high availability throughout the first physics data-taking period of the Large Hadron Collider (LHC). This was achieved through the consistent improvement of the control software and the provision of a 24-hour expert on-call service. One remaining potential cause of significant downtime was the failure of the computers hosting the DCS software. To minimize the impact of these failures after the restart of the LHC in 2015, it was decided to implement a redundant software layer for the control system where two computers host each DCS application. By customizing and extending the redundancy concept offered by WinCC Open Architecture (WinCC OA), the CMS DCS can now run in a fully redundant software configuration. The implementation involves one host being active, handling all monitoring and control tasks, with the second host running in a minimally functional, passive configuration. Data from the active host is constantly copied to the passive host to enable a rapid switchover as needed. This paper describes details of the implementation and practical experience of redundancy in the CMS DCS.
 
Poster: WEPGF013 [1.725 MB]
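The active/passive pattern described in the WEPGF013 abstract can be sketched in a few lines. This is a toy model, not CMS or WinCC OA code: the class name, heartbeat mechanism and timeout value are all illustrative assumptions, intended only to show how a passive host can detect a stale active host and take over.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # illustrative: seconds without a heartbeat before switchover

class RedundantPair:
    """Toy model of a redundant host pair with heartbeat-based failover."""
    def __init__(self):
        self.active = "host_a"
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called periodically by the currently active host."""
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        """Run on the passive host; promotes it if the active host went stale."""
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = "host_b" if self.active == "host_a" else "host_a"
            self.last_heartbeat = now  # the new active host starts beating
        return self.active

pair = RedundantPair()
assert pair.check(pair.last_heartbeat + 1.0) == "host_a"   # heartbeat fresh
assert pair.check(pair.last_heartbeat + 10.0) == "host_b"  # stale -> switchover
```

In the real system the passive WinCC OA instance also keeps a continuously updated copy of the active host's data, so that the switchover sketched here does not lose monitoring state.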