Keyword: Linux
Paper Title Other Keywords
MOD3O02 Continuous Delivery at SOLEIL software, operation, controls, monitoring
 
  • G. Abeillé, A. Buteau, X. Elattaoui, S. Lê
    SOLEIL, Gif-sur-Yvette, France
  • G. Boissinot
    ZENIKA, Paris, France
 
  The IT department of Synchrotron SOLEIL* includes a team of software developers responsible for the development and maintenance of all software, from hardware controls up to supervision applications. With a very heterogeneous environment (several programming languages, strongly coupled components and an increasing number of releases), it has become mandatory to standardize the entire development process through a 'Continuous Delivery' approach, making it easy to release and deploy on time, at any time. We achieved our objectives by building a Continuous Delivery system around two aspects: the Deployment Pipeline** and DevOps***. A deployment pipeline is achieved by extensively automating all stages of the delivery process: continuous integration of software, building of binaries and integration tests. The other key point of Continuous Delivery is close collaboration between software developers and system administrators, often known as the DevOps movement. This paper details how this Continuous Delivery approach was adopted, how it has changed the development team's daily life, and gives an overview of future steps.
*http://www.synchrotron-soleil.fr/
**http://martinfowler.com/bliki/DeploymentPipeline.html
***https://sdarchitect.wordpress.com/2012/07/24/understanding-devops-part-1-defining-devops/
 
Slides MOD3O02 [1.882 MB]
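The stages named in the abstract (continuous integration, binary build, integration tests) can be sketched as a generic declarative Jenkins pipeline, where each stage must pass before the next runs so every commit stays releasable. The stage names, Maven commands and deployment script below are illustrative assumptions, not SOLEIL's actual configuration:

```groovy
// Hypothetical deployment pipeline sketch: each stage gates the next,
// so a change only reaches deployment after CI, build and integration tests.
pipeline {
    agent any
    stages {
        stage('Continuous Integration') {
            steps { sh 'mvn clean verify' }          // compile + unit tests
        }
        stage('Build Binaries') {
            steps { sh 'mvn package' }               // produce deployable artifacts
        }
        stage('Integration Tests') {
            steps { sh 'mvn failsafe:integration-test failsafe:verify' }
        }
        stage('Deploy') {
            steps { sh './deploy.sh staging' }       // hypothetical deployment script
        }
    }
}
```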
 
MOPGF019 Experiences and Lessons Learned in Transitioning Beamline Front-Ends from VMEbus to Modular Distributed I/O controls, network, PLC, interface
 
  • I.J. Gillingham, T. Friedrich, S.C. Lay, R. Mercado
    DLS, Oxfordshire, United Kingdom
 
  Historically, Diamond's photon front-ends have used control systems based on the VMEbus platform. With increasing pressure for improved system versatility, tighter space constraints and the issues of long-term support for the VME platform, a programme of migration to distributed remote I/O control systems was undertaken. This paper reports on the design strategies, benefits and issues addressed since the new design became operational.
Poster MOPGF019 [0.369 MB]
 
MOPGF027 Real-Time EtherCAT Driver for EPICS and Embedded Linux at Paul Scherrer Institute (PSI) EPICS, controls, real-time, interface
 
  • D. Maier-Manojlovic
    PSI, Villigen, Switzerland
 
  The EtherCAT bus and interface are widely used for external module and device control in accelerator environments at PSI, ranging from undulator communication and basic I/O control to the Machine Protection System for the new SwissFEL accelerator. A new combined EPICS/Linux driver has been developed at PSI to allow simple and mostly automatic setup of various EtherCAT configurations. The new driver is capable of automatically scanning the existing device and module layout, followed by self-configuration and, finally, autonomous operation of the EtherCAT bus real-time loop. If additional configuration is needed, the driver offers both user- and kernel-space APIs, as well as a command-line interface for fast configuration or for reading and writing module entries. The EtherCAT modules and their data objects (entries) are completely exposed by the driver, with each entry corresponding to a virtual file in the Linux procfs file system. This way, any user application can read or write the EtherCAT entries in a simple manner, even without using any of the supplied APIs. Finally, the driver offers an EPICS interface with automatic template generation from the scanned EtherCAT configuration.
Poster MOPGF027 [30.572 MB]
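Because each EtherCAT entry is exposed as a virtual file, plain file I/O is enough to access it. The sketch below illustrates the idea in Python; the directory layout and entry names are hypothetical, as the abstract does not give the driver's actual procfs paths:

```python
from pathlib import Path

def read_entry(root, module, entry):
    """Read one EtherCAT entry exposed as a virtual file.

    `root` stands in for the driver's procfs directory (hypothetical
    layout: <root>/<module>/<entry>, one file per data object)."""
    return (Path(root) / module / entry).read_text().strip()

def write_entry(root, module, entry, value):
    """Write a value to an entry by writing to its virtual file."""
    (Path(root) / module / entry).write_text(f"{value}\n")
```

Any scripting language, or even `cat` and `echo` from a shell, works the same way, which is the point of a procfs-based interface.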
 
MOPGF033 New Developments on EPICS Drivers, Clients and Tools at SESAME EPICS, controls, timing, Ethernet
 
  • I. Saleh, Y.S. Dabain, A. Ismail
    SESAME, Allan, Jordan
 
  SESAME is a 2.5 GeV synchrotron light source under construction in Allan, Jordan. The control system of SESAME is based on EPICS and CSS. Various developments in EPICS drivers, clients, software tools and hardware have been carried out. This paper presents some of the main achievements: new Linux/x86 EPICS drivers and soft IOCs developed for the Micro-Research Finland event timing system, replacing the VME/VxWorks-based drivers; new EPICS drivers and clients developed for the Basler GigE cameras; an IOC deployment and management driver developed to monitor the numerous virtual machines running the soft IOCs, and to ease deployment of updates to these IOCs; an automated EPICS checking tool developed to aid in the review, validation and application of the in-house rules for all record databases; a new EPICS record type (mbbi2) developed to provide alarm features missing from the multi-bit binary records found in the base distribution of EPICS; and a feasibility test of replacing serial terminal servers with low-cost computers.
Poster MOPGF033 [0.954 MB]
 
MOPGF057 Quick Experiment Automation Made Possible Using FPGA in LNLS FPGA, software, experiment, EPICS
 
  • M.P. Donadio, J.R. Piton, H.D. de Almeida
    LNLS, Campinas, Brazil
 
  Beamlines at LNLS are being modernized to use the synchrotron light as efficiently as possible. As the photon flux increases, experiment speed constraints become more visible to the user. Experiment control has been done by ordinary computers, under a conventional operating system, running high-level software written in the most common programming languages. This architecture presents timing issues, as the computer is subject to interruptions from input devices like the mouse, keyboard or network. These programs quickly became the bottleneck of the experiment. To improve experiment control and automation speed, we transferred software algorithms to an FPGA device. FPGAs are semiconductor devices based around a matrix of logic blocks that are reconfigurable by software. This paper briefly presents the results of adopting this technology with an NI CompactRIO device whose FPGA is programmed through LabVIEW, together with future improvements.
Poster MOPGF057 [5.360 MB]
 
MOPGF070 Report on Control/DAQ Software Design and Current State of Implementation for the Percival Detector detector, controls, software, EPICS
 
  • A.S. Palaha, C. Angelsen, Q. Gu, J. Marchal, U.K. Pedersen, N.P. Rees, N. Tartoni, H. Yousef
    DLS, Oxfordshire, United Kingdom
  • M. Bayer, J. Correa, P. Gnadt, H. Graafsma, P. Göttlicher, S. Lange, A. Marras, S. Řeža, I. Shevyakov, S. Smoljanin, L. Stebel, C. Wunderer, Q. Xia, M. Zimmer
    DESY, Hamburg, Germany
  • G. Cautero, D. Giuressi, A. Khromova, R.H. Menk, G. Pinaroli
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
  • D. Das, N. Guerrini, B. Marsh, T.C. Nicholls, I. Sedgwick, R. Turchetta
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
  • H.J. Hyun, K.S. Kim, S.Y. Rah
    PAL, Pohang, Republic of Korea
 
  The increased brilliance of state-of-the-art synchrotron radiation sources and free electron lasers requires imaging detectors capable of taking advantage of these light source facilities. The PERCIVAL ("Pixelated Energy Resolving CMOS Imager, Versatile and Large") detector is being developed in collaboration between DESY, Elettra Sincrotrone Trieste, Diamond Light Source and Pohang Accelerator Laboratory. It is a CMOS detector targeting soft X-rays below 1 keV, with a high resolution of up to 13 Mpixels, reading out at 120 Hz and producing a challenging data rate of 6 GB/s. The controls and data acquisition system will include an SDK to allow integration with third-party control systems like Tango and DOOCS; an EPICS areaDetector driver will be included by default. It will make use of parallel readout to keep pace with the data rate, distributing the data over multiple nodes to create a single virtual dataset in the HDF5 file format, chosen for its speed advantages with high volumes of regular data. This paper presents the design of the control system software for the Percival detector and an update on the current state of the implementation carried out by Diamond Light Source.
Poster MOPGF070 [0.359 MB]
 
WEPGF015 Drivers and Software for MicroTCA.4 controls, hardware, interface, software
 
  • M. Killenberg, M. Heuer, M. Hierholzer, L.P. Petrosyan, Ch. Schmidt, N. Shehzad, G. Varghese, M. Viti
    DESY, Hamburg, Germany
  • T. Kozak, P. Prędki, J. Wychowaniak
    TUL-DMCS, Łódź, Poland
  • S. Marsching
    Aquenos GmbH, Baden-Baden, Germany
  • M. Mehle, T. Sušnik, K. Žagar
    Cosylab, Ljubljana, Slovenia
  • A. Piotrowski
    FastLogic Sp. z o.o., Łódź, Poland
 
  Funding: This work is supported by the Helmholtz Validation Fund HVF-0016 'MTCA.4 for Industry'.
The MicroTCA.4 crate standard provides a powerful electronic platform for digital and analogue signal processing. Besides excellent hardware modularity, it is the software reliability and flexibility, as well as the easy integration into existing software infrastructures, that will drive the widespread adoption of the new standard. The DESY MicroTCA.4 User Tool Kit (MTCA4U) comprises three main components: a Linux device driver, a C++ API for accessing the MicroTCA.4 devices, and a control system interface layer. The main focus of the tool kit is flexibility, to enable fast development. The universal, expandable PCI Express driver and a register mapping library allow out-of-the-box operation of all MicroTCA.4 devices running firmware developed with the DESY board support package. The tool kit has recently been extended with features like command line tools and language bindings to Python and Matlab.
 
Poster WEPGF015 [0.536 MB]
 
WEPGF036 Data Categorization And Storage Strategies At RHIC network, operation, real-time, collider
 
  • S. Binello, K.A. Brown, T. D'Ottavio, R.A. Katz, J.S. Laster, J. Morris, J. Piacentino
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
This past year the Controls group within the Collider Accelerator Department at Brookhaven National Laboratory replaced the Network Attached Storage (NAS) system that is used to store software and data critical to the operation of the accelerators. The NAS also serves as the initial repository for all logged data. This purchase was used as an opportunity to categorize the data we store, and review and evaluate our storage strategies. This was done in the context of an existing policy that places no explicit limits on the amount of data that users can log, no limits on the amount of time that the data is retained at its original resolution, and that requires all logged data be available in real-time. This paper will describe how the data was categorized, and the various storage strategies used for each category.
 
Poster WEPGF036 [0.337 MB]
 
WEPGF062 Processing High-Bandwidth Bunch-by-Bunch Observation Data from the RF and Transverse Damper Systems of the LHC framework, diagnostics, software, controls
 
  • M. Ojeda Sandonís, P. Baudrenghien, A.C. Butterworth, J. Galindo, W. Höfle, T.E. Levens, J.C. Molendijk, D. Valuch
    CERN, Geneva, Switzerland
  • F. Vaga
    University of Pavia, Pavia, Italy
 
  The radiofrequency and transverse damper feedback systems of the Large Hadron Collider digitize beam phase and position measurements at the bunch repetition rate of 40 MHz. Embedded memory buffers allow a few milliseconds of full rate bunch-by-bunch data to be retrieved over the VME bus for diagnostic purposes, but experience during LHC Run I has shown that for beam studies much longer data records are desirable. A new "observation box" diagnostic system is being developed which parasitically captures data streamed directly out of the feedback hardware into a Linux server through an optical fiber link, and permits processing and buffering of full rate data for around one minute. The system will be connected to an LHC-wide trigger network for detection of beam instabilities, which allows efficient capture of signals from the onset of beam instability events. The data will be made available for analysis by client applications through interfaces which are exposed as standard equipment devices within CERN's controls framework. It is also foreseen to perform online Fourier analysis of transverse position data inside the observation box using GPUs with the aim of extracting betatron tune signals.  
Poster WEPGF062 [4.408 MB]
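The betatron tune extraction mentioned at the end is essentially spectral analysis of turn-by-turn position data: the fractional tune appears as the dominant frequency of the transverse oscillation. A minimal, pure-Python sketch of the idea, using a brute-force discrete Fourier scan rather than the GPU FFT the authors foresee:

```python
import cmath
import math

def estimate_tune(positions, steps=2048):
    """Return the fractional tune q in [0, 0.5] whose complex exponential
    correlates most strongly with the turn-by-turn position data.

    This is a plain DFT magnitude scan over a frequency grid; a production
    system would use an FFT (on GPU, as foreseen in the paper) together
    with peak interpolation for better resolution and speed."""
    best_q, best_mag = 0.0, -1.0
    for k in range(steps // 2 + 1):
        q = k / steps
        mag = abs(sum(x * cmath.exp(-2j * math.pi * q * i)
                      for i, x in enumerate(positions)))
        if mag > best_mag:
            best_q, best_mag = q, mag
    return best_q

# Synthetic turn-by-turn data with a known fractional tune of 0.31
data = [math.cos(2 * math.pi * 0.31 * i) for i in range(512)]
```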
 
WEPGF090 Design of EPICS IOC Based on RAIN1000Z1 ZYNQ Module EPICS, embedded, controls, experiment
 
  • T. Xue, G.H. Gong, H. Li, J.M. Li
    Tsinghua University, Beijing, People's Republic of China
 
  ZYNQ is the new Xilinx FPGA architecture with dual high-performance ARM Cortex-A9 processors. A new module with a Gigabit Ethernet interface based on the ZYNQ XC7Z010, named RAIN1000Z1, has been developed for the data acquisition of the high-purity germanium detectors in the CJPL (China Jinping Underground Laboratory) experiment. On top of the RAIN1000Z1 hardware platform, EPICS has been ported to the ARM Cortex-A9 processor under embedded Linux, and an Input/Output Controller (IOC) has been implemented on the RAIN1000Z1 module. Thanks to ZYNQ's combination of processor and logic and its new silicon technology, embedded Linux with TCP/IP sockets and real-time, high-throughput logic written in VHDL run on a single chip, giving a small module size, lower power and higher performance. This paper introduces how to port an EPICS IOC application to the ZYNQ under embedded Linux and gives a demo of I/O control and RS232 communication.
Poster WEPGF090 [1.777 MB]
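For readers unfamiliar with EPICS on embedded Linux, a soft IOC like the one described boots from a startup script of roughly the following shape. The database, record and PV-prefix names below are made up for illustration and are not taken from the RAIN1000Z1 project:

```
#!../../bin/linux-arm/rain1000z1
# st.cmd - hypothetical IOC startup script for the ARM Cortex-A9 core

dbLoadDatabase("dbd/rain1000z1.dbd")               # load record/driver definitions
rain1000z1_registerRecordDeviceDriver(pdbbase)     # register device support
dbLoadRecords("db/io.db", "P=RAIN:")               # load I/O records with a PV prefix
iocInit()                                          # start the IOC
```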
 
WEPGF096 Managing a Real-time Embedded Linux Platform with Buildroot target, software, network, controls
 
  • J.S. Diamond, K.S. Martin
    Fermilab, Batavia, Illinois, USA
 
  Funding: This work was supported by the U.S. Department of Energy under contract No. DE-AC02-07CH11359
Developers of real-time embedded software often need to build the operating system kernel, tools and supporting applications from source to cope with the differences in their hardware configuration. The first attempt to introduce Linux-based real-time embedded systems into the Fermilab accelerator controls system used this approach, but it was found to be time-consuming, difficult to maintain and difficult to adapt to different hardware configurations. Buildroot is an open source build system with a menu-driven configuration tool (similar to the Linux kernel build system) that automates this process. A customized Buildroot system has been developed for use in the Fermilab accelerator controls system that includes several hardware configuration profiles (including Intel, ARM and PowerPC) and packages for Fermilab support software. A bootable image file is produced containing the Linux kernel, shell and supporting software suite, varying from 3 to 20 megabytes in size, ideal for network booting. The result is a platform that is easier to maintain and deploy in diverse hardware configurations.
 
Poster WEPGF096 [1.054 MB]
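A Buildroot hardware profile of the kind described is captured in a defconfig: a short list of configuration symbols from which `make <name>_defconfig && make` reproduces the whole bootable image. The fragment below shows the general shape for an ARM target; the symbols are standard Buildroot options, but the specific selection is an illustrative assumption, not Fermilab's actual profile:

```
# Illustrative Buildroot defconfig fragment for an ARM Cortex-A9 target
BR2_arm=y
BR2_cortex_a9=y
# Build a Linux kernel as part of the image
BR2_LINUX_KERNEL=y
# Pack the root filesystem as a compressed cpio archive, suitable for network booting
BR2_TARGET_ROOTFS_CPIO=y
BR2_TARGET_ROOTFS_CPIO_GZIP=y
```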
 
WEPGF112 Flop: Customizing Yocto Project for MVMExxxx PowerPC and BeagleBone ARM network, software, controls, embedded
 
  • L. Pivetta, A.I. Bogani, R. Passuello
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  During the last fifteen years, several PowerPC-based VME single-board computers belonging to the MVMExxxx family have been used for the control system front-end computers at Elettra Sincrotrone Trieste. Moreover, a low-cost embedded board has recently been adopted to fulfil the control requirements of distributed instrumentation. These facts led to the necessity of managing several releases of the operating system, kernel and libraries, and finally to the decision to adopt a comprehensive unified approach based on a common codebase: the Yocto Project. Based on the Yocto Project, a control-system-oriented GNU/Linux distribution called 'Flop' has been created. The complete management of the software chain, the ease of upgrading or downgrading complete systems, the centralized management and the platform-independent deployment of the user software are the main features of Flop.
Poster WEPGF112 [1.249 MB]
 
WEPGF124 Application Using Timing System of RAON Accelerator timing, controls, EPICS, FPGA
 
  • S. Lee, H. Jang, C.W. Son
    IBS, Daejeon, Republic of Korea
 
  Funding: This work is supported by the Rare Isotope Science Project funded by Ministry of Science, ICT and Future Planning(MSIP) and National Research Foundation(NRF) of Korea(Project No. 2011-0032011).
RAON, the Korean heavy-ion accelerator, is a particle accelerator for studying the interactions of nuclei that form rare isotopes. RAON consists of a number of facilities and pieces of equipment operating as a large-scale experimental device in a distributed environment. For synchronization control between these experimental devices, the timing system of RAON uses the VME-based EVG/EVR system. In order to test the high-speed performance of the control logic with minimized event signal delay, a stepper motor controller testbed based on an FPGA chip is planned. The testbed controller will be built with a Zynq-7000 series Xilinx FPGA chip. Zynq, a System on Chip (SoC), combines a Processing System (PS) with Programmable Logic (PL). The PS, with its dual-core ARM CPU, performs the high-level control logic at run time on a Linux operating system. The PL interfaces the low-level FPGA I/O signals with the stepper motor controller, driven by the event signals received from the timing system. This paper describes the results and performance evaluation obtained from stepper motor control through the various synchronized event signals received from the timing system.
 
Poster WEPGF124 [1.690 MB]
 
WEPGF129 CERN Timing on PXI and cRIO Platforms timing, hardware, software, controls
 
  • A. Rijllart, O.O. Andreassen, J. Blanco Alonso
    CERN, Geneva, Switzerland
 
  Given their time-critical applications, the use of PXI and cRIO platforms in the accelerator complex at CERN requires integration into the CERN timing system. This paper describes the present state of integration of both PXI and cRIO platforms into the present General Machine Timing system and its successor, the White Rabbit timing system. PXI is used for LHC collimator control and for the new generation of control systems for the kicker magnets on all CERN accelerators. The cRIO platform is being introduced for transient recording on the CERN electricity distribution system and has potential for applications in other domains because of its real-time OS, FPGA backbone and hot-swap modules. The intended further developments, and which types of application are most suitable for each platform, are also discussed.
Poster WEPGF129 [1.548 MB]