SHOP4CF Marketplace: Components
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 873087. Neither the European Commission (EC) nor any person acting on behalf of the Commission is responsible for how the following information is used. The views expressed in this publication are the sole responsibility of the authors and do not necessarily reflect the views of the EC.


ROS2 Monitoring Tool

The ROS2 Monitoring component is meant for developers using ROS2: a dashboard for monitoring health and logging, and for examining the services, publishers, and subscribers associated with ROS2 nodes. The component includes a GUI through which the user can set up which ROS components to monitor.


    Main non-functional requirements

    The component has no real-time responsiveness requirements

    Software requirements/dependencies

Platform: Ubuntu, macOS, Windows. Requirements: ROS2, Docker

    Hardware requirements

A 64-bit system capable of running Ubuntu, macOS, or Windows, and ROS2

    Security threats

    The component should operate behind a firewall during production

    Privacy threats

    No privacy threats have been identified

    Execution place

Private cloud/PC near production

    Deployment instructions

    Deployment instructions can be found on a public repository

    User interface

    A dashboard showing the current status of all ROS2 nodes in the system

    Supported devices

Desktop/Laptop

    User defined scenarios (non-technical) and relevant pilot cases

    The component can be used to monitor a collection of ROS2 nodes

    M2O2P: Multi-Modal Online and Offline Programming solution

The main functionality of this component is to enable robot control using natural human actions as input, in this case hand gestures captured with a gesture-tracking glove. With the sensor glove by CaptoGlove LLC, the operator makes distinguishable hand gestures to command and control the robot during the process. The component can be reconfigured for different control scenarios.


    Main non-functional requirements

    N/A

    Software requirements/dependencies

The CaptoGlove requires the Capto Suite to be installed. Docker must be installed on the host machine.

    Hardware requirements

20 GB hard drive space; 2 GB RAM recommended

    Security threats

    None

    Privacy threats

    None

    Execution place

The CaptoGlove SDK runs on the host Windows PC; all other parts of M2O2P are Docker images.

    Deployment instructions

Instructions for the application are provided in PDF format; a video will follow later.

    User interface

The component's user interface is mainly its web UI, which can be reached from the host machine by navigating to localhost:54400 in a browser.

    Supported devices

    Any Windows 10 machine, CaptoGlove

    User defined scenarios (non-technical) and relevant pilot cases

The component can be used in any system where a human operator needs to send commands or complete tasks using the glove. In the Siemens pilot case, the component was used to complete tasks in a collaborative bin-picking robotic application.

    VR-RM-MT: Virtual reality set for robot and machine monitoring and training

The main functionality of this component is to enable the training and support of human workers in collaborative tasks. To do so, the main activities of the collaborative task and the interaction between worker and robot are recreated in Virtual Reality (VR). Using a virtual reality headset and equipment, the worker can remotely visualize, monitor, and train on collaborative tasks with robots. It should be noted that, depending on the use-case requirements (e.g., workspace and environment, equipment, safety aspects, and interfaces to other components), several data inputs might be needed to create custom simulations. The component is divided into a sandbox mode (using pre-programmed actions) and a dynamic mode which, depending on configuration, can receive data inputs from ROS nodes for on-the-fly creation of tasks.


    Main non-functional requirements

    N/A

    Software requirements/dependencies

Windows 10 and a compatible browser (Firefox, Chrome, etc.)

    Hardware requirements

    A VR headset supported by A-Frame with controller positional tracking, as listed here: https://aframe.io/docs/1.2.0/introduction/vr-headsets-and-webvr-browsers.html

    Security threats

    None

    Privacy threats

    None

    Execution place

    Private cloud (meaning in pilot premises), cloud

    Deployment instructions

Not specified.

    User interface

    A main configuration page and a sample workcell layout in VR mode.

    Supported devices

    Desktops, Laptops

    User defined scenarios (non-technical) and relevant pilot cases

This component could be used to train workers in a collaborative assembly process by virtualizing the whole procedure in VR and allowing the worker to interact with the robot and components prior to working in the actual setup. It is important to keep in mind that, since this application is meant for training, a concrete step-by-step process is required to design and fully benefit from the collaborative training.

    DT-CP: Digital Twin – for planning and control

The Digital Twin for control and planning (DT-CP) allows users to create a virtual replica of the production line addressed by the use case. The component is divided into two parts: a simulator, to experiment with alternative models to be implemented in the real line, and a monitoring dashboard for an overview of the line.
The time-based simulator takes as input several configuration parameters (e.g., takt time, shift time), process descriptions, and resources (e.g., workers) with different skill sets. The user can thus modify and test different production strategies that would be more complicated and time-consuming to test on the real line. The monitoring dashboard gives the user an overview of the status of production and provides a notification mechanism for alerts where implemented. It should be noted that some functionalities (e.g., data sources for the monitoring dashboard, data modelling, and configuration parameters for the simulator) are use-case dependent and might require further adaptation for proper integration in the setup.


    Main non-functional requirements

    N/A

    Software requirements/dependencies

    N/A

    Hardware requirements

    Device capable of handling web-based applications

    Security threats

    None

    Privacy threats

    None

    Execution place

    Private cloud (meaning in pilot premises), cloud

    Deployment instructions

    Instructions will be provided on the Git page

    User interface

The component is divided into two parts: an online dashboard for monitoring the line in real time and a simulator, to experiment with alternative models to be implemented in the real line.

     

    The monitoring dashboard is intended to follow the flow of product from one workstation to another.

     

The simulator allows the user to modify and test different production strategies that would be more complicated and time-consuming to test on the real line. The simulator setup consists of four steps: introduction of the initial process-execution parameters, design of the layout, process-description assignment, and allocation of resources.

    Supported devices

    Desktop, Laptop

    User defined scenarios (non-technical) and relevant pilot cases

Coupled with data-collection applications, the monitoring part of the component could be used to get an overview of the process and provide notification and control mechanisms. Data inputs, notification handling, and control mechanisms may need to be further adapted per use case.


The simulator allows the user to define a time-based simulation of an assembly process by setting concrete configuration parameters (e.g., takt time, shift time) and assigning resources (e.g., workers) with additional resource parameters (e.g., skill set). The result is a report against the pre-provided production targets. Specific configuration parameters and settings may have to be further adapted per use case.

    DCF: Data Collection Framework

The Data Collection Framework (DCF) component collects data from the shop floor (field devices, sensors, and controllers) and from enterprise resource planning (ERP) systems, using data adapters for different use cases. DCF can be used at production/assembly lines to get an overview of the data collected from sensors and workstations. The main functions of the component are data collection from these systems, data storage in databases if needed, and enabling engineers/supervisors to review the data and take appropriate action. It should be noted that some of the data adapters (e.g., ERP adapters) may need to be configured and tested on a use-case basis.


    Main non-functional requirements

    N/A

    Software requirements/dependencies

Data transfer and communication within DCF and the database require Python and several libraries (opcua, paho.mqtt, pymongo, pandas, json, flask, requests, cx_Oracle, hdbcli)

    Hardware requirements

– Windows 7 or 10
– x86 64-bit CPU (Intel / AMD architecture)
– 4 GB RAM
– 5 GB free disk space

    Security threats

The component requires authentication with the server/database before connecting, collecting ERP or shop-floor data, and storing the data in the database.

    Privacy threats

    None

    Execution place

Devices on the same local network as well as on different host addresses can be connected via MQTT and OPC UA

    Deployment instructions

The DCF component is deployed in Docker, and the relevant instructions will be provided.

    User interface

After the necessary connection configuration is specified, the DCF module monitors the temperature and pressure readings through an OPC UA server. If the temperature or pressure exceeds the allowed values, it logs the information (e.g., time, value, description) in the database. These parameters can be changed according to the use case.

    Supported devices

    Desktop, Laptop

    User defined scenarios (non-technical) and relevant pilot cases

The DCF component can be used at production/assembly lines to collect data from workstations/sensors and apply event processing. For instance, if a task at a specific workstation takes more time than expected, this activity can be monitored and the relevant data logged in the database for engineers/supervisors to view and act upon.

    ADIN: Adaptive Interfaces

This component creates user interfaces depending on the information collected from the production-line devices and the user's profile. In this way, task- and operation-specific user interfaces are composed for the user. Such interfaces could, for example, display task-specific work descriptions (e.g., a description of an assembly operation) and enable the user to confirm completion of a task for interaction with external components. It should be noted that, if applicable, other interfaces with different functionalities may have to be developed based on the requirements.


    Main non-functional requirements

    N/A

    Software requirements/dependencies

    N/A

    Hardware requirements

Device capable of using web-based applications (e.g., PC or laptop)

    Security threats

    None

    Privacy threats

    None

    Execution place

    Private cloud (meaning in pilot premises)

    Deployment instructions

    Instructions will be provided on the Git page

    User interface

    Supported devices

    Desktop, Laptop

    User defined scenarios (non-technical) and relevant pilot cases

ADIN can be used by workers in an assembly line to assist them with the task, giving them the specific and relevant information needed to fulfil it.

     

It can also be used in collaborative tasks with cobots, where the worker receives instructions on the steps of the collaborative task.

    Shakeit: Workcell Process Optimization based on Reinforcement Learning

Human-centered process optimization based on RL: the main functionality is to provide reinforcement-learning logic wrapped in ROS2 packages.


    Main non-functional requirements

Requirements for real-time responsiveness depend on the application. However, since reinforcement learning optimizes the next action rather than the current one, real-time responsiveness requirements are relaxed.

    Software requirements/dependencies

    Platform: Ubuntu, macOS, Windows.

    Requirements: ROS2, Docker

    Hardware requirements

    64-bit system capable of running: Ubuntu, macOS, Windows.


    High performance PC/Cloud (good CPU and GPU) for training models.


E.g., 10+ cores, 64+ GB RAM, and an RTX 2080 Ti for training (depending on the application and model).

    Security threats

When deployed on premises, a firewall should be sufficient. If deployed in the cloud, work is required to ensure a secure connection between the cloud and the production equipment/PC.

    Privacy threats

    No privacy threats have been identified.

    Execution place

    Private cloud/PC near robot.

    Deployment instructions

    Deployment instructions for the component can be found on a private access repository.

    User interface

    The component will have multiple user interfaces:
    A common user interface (dashboard) for developers and end-users containing data visualization, selected actions, and other diagnostics.
    Developers will furthermore have a GUI for yaml-file system configuration and all available ROS2 tools for visualization and diagnostics.

    Supported devices

    Desktop/Cloud

    User defined scenarios (non-technical) and relevant pilot cases

The component can be used to optimize a work-cell process with reinforcement learning. Example: optimize the process control of a vibration feeder such that an element is always available for a robot to pick up.

    FBAS-ML: Force-Based Assembly Strategies for Difficult Snap-Fit Parts Using Machine Learning

    The component is based on a generic add-on force-control for classical industrial and/or collaborative robots.
    An innovative force-sensor based strategy is used to fit two or more parts together that require a snap connection.
The component is a ROS-based control approach.


    Main non-functional requirements

    – Low latency required

    – The trained assembly skills can be scaled in time and are primarily limited by the force control performance of the robot (F/T sensor)

    Software requirements/dependencies

    – ROS framework with ROS control (kinetic or melodic)

    – FZI Custom extension of ROS Cartesian Motion, Impedance and Force Controllers

    – FZI Custom wrappers for external robotic sensors – Robot ROS driver (e.g. ROS UR)

    – TensorFlow 2.1 with python 2.7

    Hardware requirements

    – Robot with wrist force-torque sensor mounted or integrated

– Dedicated PC (e.g., i7 shuttle PC with 8 GB RAM)

    Security threats

    The component should run on a separate network without access to the public internet or to any other network not authorized to use it (ROS1 security)

    Privacy threats

    No specific privacy requirements, no personal information logging

    Execution place

    Local

    Deployment instructions

    Internal development, deployment instructions only upon (approved) request

    User interface

    Text configuration files

    Supported devices

    Any robot which supports ROS control and can measure end-effector forces and torques (intrinsic or integrated)

    User defined scenarios (non-technical) and relevant pilot cases

    Force-based assembly tasks which require difficult snap fitting of parts by a robot
    Pilot cases: Siemens use case 1

    DTS: Dynamic Task Scheduling for Efficient Human Robot Collaboration

    Task manager for safe and efficient human-robot interaction


    Main non-functional requirements

– Real-time responsiveness is fundamental for the task scheduling to work safely and properly. Supervising the robot pose requires at least 10 checks of the environment per second so that the robot can react accurately if there is an obstacle in its way
– Sensor information (in particular depth information) needs to be as up-to-date as possible

    Software requirements/dependencies

    – ROS1 Framework
    – GPU-Voxels
    – FZI Custom extension of ROS Cartesian Motion, Impedance and Force Controllers
    – FZI Specific Extension of the FlexBE ROS package or FZI behaviour-Tree Implementation for Task Modelling and Scheduling
    – FZI Custom ROS wrappers for external robotic sensors
    – FZI Shared workspace (ROS application of GPU-Voxels) for human-robot-collaboration
    – FZI Robot Collision Detection ROS package
    – FZI Human Pose Prediction and Tracking software (optional)
    – Robot ROS Driver

    Hardware requirements

    – (Depth) Cameras with fast update rate for the images
    – Combination of several sensors (one is not enough)
    – 1 shuttle PC for robot control with real time optimization (low latency)
– 1 additional PC with GPU for more computationally intensive tasks (e.g., collision avoidance, human detection)

    Security threats

    Run on a separate network (ROS1 security) without access to the public internet or to any network not authorized to use it

    Privacy threats

    No specific privacy requirements. No personal information, camera or 3D data logging

    Execution place

    Local

    Deployment instructions

    Internal development, deployment instructions only upon (approved) request

    User interface

    Text configuration files

    Supported devices

    Any robot with ROS driver, URDF description and real time joint angles

    User defined scenarios (non-technical) and relevant pilot cases

    Efficient Human-Robot collaboration on the shop floor, where the robot needs to fulfil tasks in the proximity of the worker
    Pilot Use case: Siemens Use Case 1

    HA-MRN: Human Aware Mobile Robot Navigation in Large Scale Dynamic Environments

    Safety and acceptability of mobile robots

    Main non-functional requirements

    – Inputs expected at 10 Hz. Outputs between 2 and 10 Hz
    – Lower frequencies will influence safety and acceptability severely

    Software requirements/dependencies

    – ROS1 framework
    – Google Cartographer ROS
    – Move_Base and/or Move_Base_Flex ROS packages
    – AGV ROS driver
    – External ROS sensor drivers (cameras, lasers)
    – Open Pose
    – Wheel Odometry (ROS Topic)

    Hardware requirements

– Lidar (for example, SICK)
– Intel RealSense and/or 2D camera
– Dedicated PC (Intel i5, high-end GPU)

    Security threats

Operates inside the mobile robot or over a secured Wi-Fi connection
(no off-premises connection required)

    Privacy threats

    No personal information logging

    Execution place

    Local

    Deployment instructions

    Present: Internal development available on approved request
    Future: Public access repositories

    User interface

    – Text configuration files

    – (optional) GUI

    Supported devices

    – Specific PC (to be embedded in a compatible AGV)

    – Any AGV with ROS driver

    User defined scenarios (non-technical) and relevant pilot cases

A mobile robot operating in an industrial plant or public area with people
    Pilot use case: Bosch use cases 1 and 2

    FTPT: Flexible Task Programming Tool

Graphical front end (GUI) for programming new robotic applications by quickly creating new control sequences based on ROS tools.
The tool helps to develop or change collaborative robotic applications, gives monitoring feedback on the status of the process, and can be used to transparently model different tasks as well as the interaction between robot and human.
It is an alternative to SMACH and FlexBE, using Behavior Trees.


    Main non-functional requirements

    No real time responsiveness required

    Software requirements/dependencies

    ROS1 Framework

    Hardware requirements

    A PC

    Security threats

    Run on a separate network (ROS1 security) without access to the public internet or to any network not authorized to use it

    Privacy threats

    No specific privacy requirements, no personal information logging

    Execution place

    Local

    Deployment instructions

    Internal development, deployment instructions only upon (approved) request

    User interface

    HTML editor to control the functionalities of the task-programming tool

    Supported devices

    GUI: Any device that allows mouse-like controls

    User defined scenarios (non-technical) and relevant pilot cases

    Any scenario which involves programming of robots
    Pilot use cases: Siemens use cases 1 and 2, Bosch use case 1

    ASA: Automated Safety Approval

This component is used to determine whether the chosen robot trajectory and speed are safe and whether the required separation distance has been chosen adequately and can be covered by the sensor configuration. (It calculates the size of the required separation distances for robots that use the operating mode speed and separation monitoring.)


    Main non-functional requirements

    Trajectories should be checked as early as possible to minimize the delay of execution. Ideally, precomputed trajectories are validated in advance.

    Software requirements/dependencies

    Currently Visual Components together with the IFF Safety Planning Tool are required for setting up the cell layout. In the future, other tools for this process might be available.
    To activate all features and use optimized calculations, a valid license can be purchased from Fraunhofer IFF.
    The component runs as a Linux Docker Container on Linux and Windows hosts.

    Hardware requirements

    PC, no special performance features

    Security threats

No known issues

    Privacy threats

No known issues (no cameras, no collection or processing of personal data)

    Execution place

    PC next to robot cell or server. Other options possible (e.g. Private cloud).

    Deployment instructions

    Deployment instructions will be available on the Shop4CF Docker Registry (docker.ramp.eu)

    User interface

    No user interface available.
    The REST API is documented using Swagger.

    Supported devices

    PC, Docker

    User defined scenarios (non-technical) and relevant pilot cases

When operating a robot cell that uses speed and separation monitoring for safety purposes, you have to check whether a given trajectory is safe. If the trajectories are fixed, or worst-case trajectories can be defined, the operator can check them during the design phase (e.g., using the IFF Safety Planning Tool). If trajectories can change (e.g., when using dynamic motion planning), the ASA component allows you to check whether a trajectory is safe and whether the separation distance can be monitored by the sensor configuration. If a trajectory is not safe, the user can calculate a different one or reduce the robot's speed until all safety conditions are met.
The ASA component is utilized in the Siemens Use Case 1, where collision-free trajectories are calculated at run-time.

    RA: Review of Risk Analysis

The review of the risk analysis supports a safety expert in identifying hazards and estimating risk. The responsible human designer is guided through the formalized process of identifying new hazards based on identified or manually captured system changes (e.g., part changes including geometry and payload; robot changes including speed, reach, tooling; environmental changes including new tables, fencing, etc.). The application highlights where the existing risk estimation requires updates.


    Main non-functional requirements

    Bidirectional network access from RA server to FIWARE server (if not used as standalone tool)
    Port forwarding, firewall configuration
    Secret injection via files or environment variables

    Software requirements/dependencies

    Container runtime, e.g. Docker

    Hardware requirements

Server: about 50 MB free RAM, < 1 GB disk space.

End-user device: min. 1280×720 (ideally 1920×1080) screen, a modern web browser, about 500 MB free RAM

    Security threats

The production application must use an HTTPS reverse proxy. Secrets must be generated and injected securely. Network access should ideally be limited. Additional requirements apply depending on the threat model; e.g., when using FIWARE integration and local user management with self-registration enabled, the server must be located inside a secure network (on the same permission level as the FIWARE server) due to the granted lateral access to the FIWARE server (alternatively, use authentication via an identity server for managing trust).

    Privacy threats

Process-critical data could be part of the data sent between client and server, which could put partners’ data at risk.

    Execution place

Unrestricted: gateway, private cloud (meaning on pilot premises), cloud, etc.

    Deployment instructions

Deployment instructions are part of the overall documentation; they are included in the container and available as a separate PDF.

    User interface

Browser-based interface with multiple tabs, available in German and English (with the possibility of adding additional languages). Documentation is included in the container and available as PDF, together with a demonstration video.

    Supported devices

The server host is unrestricted. The end-user device should preferably be a laptop or PC (due to the size and amount of information on the screen).

    User defined scenarios (non-technical) and relevant pilot cases

See ”Main functions”. Additionally, in the FIWARE configuration-monitoring mode, the RA component will track the resource assignment to the Process on the FIWARE server and either automatically communicate previous approval of the configuration or facilitate a review by the safety expert.

    FLINT

The aim of the FLINT platform is to facilitate the incorporation of current and future wireless IoT devices (sensors/actuators) in a factory or shop-floor setting, as well as the required local wireless IoT communication infrastructure to connect such devices (e.g., LoRa gateways, BLE gateways). This component requires horizontal integration. On the input side, it makes use of adapters to interact with wireless IoT devices and long-range wireless communication equipment. On the output side, after performing the required data transformations, it either represents the IoT device as an LwM2M-compliant device that can interface in a standardized way with an LwM2M back-end platform (for instance, the open-source Leshan platform) or delivers the data in a suitable format to a broker (e.g., the FIWARE context broker).


    Main non-functional requirements

    N/A

    Software requirements/dependencies

– Dependency on the data formats used by the IoT devices / wireless IoT infrastructure, which requires the one-time design of suitable input/processing adapters. A similar dependency exists for output adapters when data is not processed to LwM2M.
– Docker: adapters are realized as Docker containers. Adapters can be implemented in any language.
– MQTT broker: for the information exchange between adapters
– LwM2M processing adapters: dependency on Anjay, a C client implementation of LwM2M

    Hardware requirements

    Server/cloud platform supporting deployment/management of Docker containers.

    Security threats

Currently, the internal communication between the adapters and the MQTT broker is not secured. However, most deployments are done on a secure company network, so the security risk should be limited.

    Privacy threats

Privacy threats will depend on the type of data collected by the IoT devices.

    Execution place

    Private Cloud

    Deployment instructions

    Deployment instructions can be found on https://github.com/imec-idlab/flint. Customization will be needed depending on the IoT devices/infrastructure to be used/deployed.

    User interface

Dashboard for monitoring the data received from / sent to IoT devices. However, the user interface is not the core of the component, as it can operate without any UI.

    Supported devices

    The aim of the platform is to be extensible to support a wide range of wireless IoT devices and technologies.

    User defined scenarios (non-technical) and relevant pilot cases

    Industrial monitoring, asset tracking, environmental monitoring, etc.

    OpenWIFI - Open-source implementation of 802.11 WIFI on FPGA

Supporting human workers on the shop floor by giving them real-time wireless control over aspects such as process management, interactions with robots, and sensor data collection.


    Main non-functional requirements

    N/A

    Software requirements/dependencies

Linux OS, GNU toolchain, Xilinx toolchain

    Hardware requirements

SDR board (e.g., Xilinx ZC706 + FMCOMMS2/3/4, or another compliant board; see https://github.com/open-sdr/openwifi)

    Security threats

    WPA2 encryption is available and should be sufficient. Of course, a network firewall is necessary.

    Privacy threats

All data transmitted over the same WiFi network can be seen by all connected clients, so SSL encryption might be necessary.

    Execution place

    Private cloud (meaning in pilot premises)

    Deployment instructions

All information and source code are available on https://github.com/open-sdr/openwifi

    User interface

– Developer: interact with openwifi through the Linux WiFi driver (e.g., ath9k), and interface with openwifi-specific components via a command-line program (“sdrctl”)
– User: openwifi acts as a regular WiFi access point

    Supported devices

All 802.11 WiFi-enabled devices are supported (smartphones, tablets, laptops, embedded WiFi hardware, WiFi sensors, …)

    User defined scenarios (non-technical) and relevant pilot cases

    Wi-POS Indoor Localization

The Wi-POS system is able to accurately determine the position of AGVs, robots, or equipment on the shop floor. Positioning workers is also possible, but may be problematic for privacy reasons.

Its goal is to enhance the safety of humans and support human workers on the shop floor (relieving them of repetitive and hard tasks such as moving equipment).


    Main non-functional requirements

    N/A

    Software requirements/dependencies

    Standalone software (full-stack) is deployed on anchor nodes and mobile tags.

    Hardware requirements

    Dedicated embedded hardware is needed.

    Security threats

There is no encryption on the wireless sensor network, so positions could be retrieved. The server that collects the data should be protected by a network firewall.

    Privacy threats

    If the position of humans is logged, then privacy concerns might arise.

    Execution place

A private wireless network is set up by the Wi-POS system (on-site)

    Deployment instructions

Deployment instructions can be obtained on request. The instructions will vary depending on the location and the use case. The system should be plug-and-play.

    User interface

    Not available. Measured coordinates are only pushed to FIWARE context broker.

    Supported devices

Only dedicated (proprietary) hardware is supported for now. Other UWB-enabled hardware (e.g., the newer iPhones) might be supported in the future.

    User defined scenarios (non-technical) and relevant pilot cases

Determining the position of AGVs on the shop floor to allow navigation through the factory.

Locating important equipment on the shop floor.

Defining safe zones around robots to avoid human injuries.

Automated inventory management.

    Code will not be publicly available.


    PMADAI - Predictive Maintenance and Anomaly Detection in Automotive Industry

    Supporting human workers in predicting or preventing potential failures and incidents; supporting human workers in planning services and repairs.


    Main non-functional requirements

Not specified.

    Software requirements/dependencies

    Linux, Windows, MacOS

    Hardware requirements

Our app consists of several components (Docker images). To use it comfortably, we suggest at least 16 GB RAM and 60 GB of disk space (in order to store OracleDB, InfluxDB, Kafka, Orion). OracleDB space grows over time, at an estimated rate of roughly 0.045 MB per operation (an unstable value).

    Security threats

Due to Volkswagen security policies, we exchange data between microservices with JWT tokens. All users who want to use our app must log in via an LDAP server. In development mode, a test user can be used. All addresses and ports are protected. The app runs on an internal network without access to external networks, although this is not required.

    Privacy threats

    LDAP authentication is required.

    Execution place

Anywhere Docker images can be hosted; there is no limitation.

    Deployment instructions

Instructions can be found in our Bitbucket repository, together with the whole project.

    Provided upon request.

    User interface

The application has a graphical user interface for viewing current waveforms and checking potentially anomalous waveforms. The user can view, sort, and filter the results, and can also report their own anomaly event if they deem it necessary. The application’s capabilities are limited to viewing waveforms and the metadata derived from current waveforms. Data is obtained from the backend using a REST API and websockets.

    Supported devices

    Laptop, PC

    User defined scenarios (non-technical) and relevant pilot cases

    Currently two potential use-scenarios for this component have been identified:

    1. Prediction of failures of a car body lift used in production process. Identification of repair time – which should result in reducing unnecessary interventions by human workers and, at the same time, in preventing future failures.

    2. Prediction of repair and maintenance (e.g., cleaning) interventions in parts of the paintshop. Detection of dependencies between observed changes in measurements and quality of paint structure. Again the purpose of this scenario is to reduce unnecessary interventions by human workers and, at the same time, to prevent failures.

    Available upon request

    VQC - Visual Quality Check

    Supporting human workers at production lines by monitoring quality.


    Main non-functional requirements

Input requires a template image and a test sample.

    Software requirements/dependencies

OpenCV, Python, Flask. The component can be run from a Docker image, in which case no additional installation is required.

    Hardware requirements

A standard PC has enough computing power for this component, as no machine-learning strategy is used.

    Security threats

The component’s API should be accessible only on a local or private network.

    Privacy threats

    None

    Execution place

PC at the operator stand (in the pilot); however, it can also run on a remote server.

    Deployment instructions

FIWARE must already be up and running. To deploy the component, you can run the Docker image directly, or go to the project folder and run `docker-compose up -d`.

    User interface

There is no user interface – other components should call the API functions.

    Supported devices

In the Bosch pilot, AR-CVI was used as the GUI.

    User defined scenarios (non-technical) and relevant pilot cases

    Dedicated application for laptop or desktop PC.

    Digital Twin for Intralogistics

Supporting human workers on the production line.


    Main non-functional requirements

Precision of the AR (augmented reality) application in measuring distances and locating simulated objects – we expect the AR application’s accuracy to be between 1 cm and 5 cm per 5 m

    Software requirements/dependencies

    Simulation module based on “LogABS” for Windows

    AR application – native app for Android or iOS

    Hardware requirements

    Intel i7 CPU with 16GB RAM or better for LogABS

    Mobile device compatible with AR Core or AR Kit for AR application

    Security threats

    No specific security requirements.

    Privacy threats

    No specific privacy requirements.

    Execution place

    PC and mobile device.

    Deployment instructions

Instructions can be found in the code repository, together with the whole project.

    Otherwise provided upon request.

    User interface

    Windows GUI for planning and executing logistic simulation, AR app for planning factory locations and equipment.

    Supported devices

    Laptop, PC, mobile device compatible with AR Core or AR Kit

    User defined scenarios (non-technical) and relevant pilot cases

Planning new, safe AGV routes for a redesigned factory layout, e.g., when a new product is about to enter production and the factory needs to be redesigned to accommodate new storage spaces, new machinery, etc.

    Internal repo, available upon request

    AR Manual Editor

Assistance and training for operators during customised product-assembly processes and maintenance operations, including recognition of objects, sequences of operations, and AR guidance for operators.


    Main non-functional requirements

    – Mobile devices should be compatible with ARCore/ARKit frameworks.

    – Wi-Fi connection is required at the shop floor.

    – Minimum brightness levels are required for the AR vision algorithms to work correctly.

    – If workers are required to wear gloves, specific mobile device models will be required.

    Software requirements/dependencies

WebXR, Django, ARCore, ARKit, Microsoft Mixed Reality Toolkit

    Hardware requirements

    Mobile devices/HoloLens. A server

    Security threats

The component needs account management. One is already implemented, but if it were to be integrated on RAMP or another platform, further adjustments would be needed.

    Privacy threats

    None

    Execution place

Private cloud provided by TECNALIA with access to pilots. If pilot sites need to deploy the component on their premises, this can be done using Docker containers.

    Deployment instructions

    In RAMP marketplace and upon request.

    User interface

– Interface for the developer: an editor to create AR content in an easy way to guide operators on the shop floor.

– Interface for the operator (customised by the editor): the AR guidance, visualised with a mobile device/HoloLens, through different steps and different objects (2D/3D objects, images, video, documents, animations, etc.).

    Supported devices

    Laptop + mobile devices/HoloLens

    User defined scenarios (non-technical) and relevant pilot cases

AR guidance for the operator during the assembly of the base plate in collaboration with a robot in the Siemens use case.

– The shop-floor manager/developers add a new manual to the system using the editor, adding all the multimedia assets (3D files, PDFs, videos, photos, etc.) and defining, step by step, the entire manual.

– Once the manual has been added, the shop-floor manager defines through the editor what type of trigger will activate the augmented reality display. In this case, a task for the HoloLens is raised through the Context Broker to show the different steps of the manual.

– They then publish that manual from the editor so that it can be consumed by the HoloLens, defining the types of users and roles that will have permission to do so.

– From that moment, any worker in the plant with permissions who is working on the assembly of a base plate will be able to trigger, with their device, the augmented reality visualization of that manual based on the status of the collaboration with the robot supporting them.

    Not available. As stated in the CA, the tool is not open source so no code will be provided.

    AR-based Teleassistance

Supporting human workers on the shop floor: tele-assistance in maintenance for long-distance workers


    Main non-functional requirements

    An Internet connection (Wi-Fi connection recommended).

    Android app: ARCore framework is needed to use the AR functionalities.

Browser client: right now, the best/recommended browser is Firefox.

The bandwidth has to match the resolution of the real-time streaming; for higher resolutions, a Wi-Fi connection is recommended.

    Software requirements/dependencies

    Server side: NodeJS, Websockets, Express

     

    Client side: ARCore

    Hardware requirements

    Server side: UNIX/Linux environment, with enough bandwidth for the number of users to use it

Client side: smartphone compatible with ARCore; the browser cannot be Chrome (Firefox recommended)

    Security threats

The component needs account management. One is already implemented, but if it were to be integrated on RAMP or another platform, further adjustments would be needed. The server side must use an SSL certificate so that communication data is sent over HTTPS.

    Privacy threats

Right now, authentication only requires the client application’s credentials to log in, so it is necessary not to share them with unknown users.

    Execution place

Private cloud provided by TECNALIA with access to pilots. If pilot sites need to deploy the component on their premises, this can be done using Docker containers.

    Deployment instructions

    In RAMP marketplace and upon request.

    User interface

    Android app client

    Supported devices

Laptop + mobile devices/HoloLens

    User defined scenarios (non-technical) and relevant pilot cases

    UC2 in Arcelik for equipment maintenance.

A worker who needs support with any type of physical component or machine calls an expert colleague in the field. To do so, the worker opens the application installed on their smartphone and calls the previously connected expert. The application connects with the expert, sharing the worker’s back camera and the expert’s front camera. The worker scans the component or machine area, and the expert draws on the mobile screen, creating indications in a drawing mode for the worker and thereby providing augmented-reality support. When the support session is finished, both exit the application.

    Not available. As stated in the CA, the tool is not open source so no code will be provided.

    VR Creator

It allows workers to be trained in the operation of, for example, a machine or a manufacturing line through the use of virtual reality. With this web tool it is possible to create and consume immersive VR experiences (with glasses) oriented toward training.


    Main non-functional requirements

If the 360° videos are large files, the web creator tool will require a PC/laptop with a dedicated (and capable) graphics card; we will specify this in more detail.

     

A Wi-Fi connection is required during the creation and consumption of training experiences.

     

The PC/laptop and VR HMD should have a WebXR-compatible browser.

    Software requirements/dependencies

    WebXR, Django, WebGL

    Hardware requirements

    PC/laptop and VR HMDs

     

    A Server

    Security threats

The component needs account management. One is already implemented, but if it were to be integrated on RAMP or another platform, further adjustments would be needed.

    Privacy threats

    None.

    Execution place

Private cloud provided by TECNALIA with access to pilots. If pilot sites need to deploy the component on their premises, this can be done using Docker containers.

    Deployment instructions

    In RAMP marketplace and upon request.

    User interface

    N/A

    Supported devices

    PC/Laptop + VR HMDs

    User defined scenarios (non-technical) and relevant pilot cases

Training operators in the use of new machines. At this moment, no pilot has expressed interest in the component.

    Not available. As stated in the CA, the tool is not open source so no code will be provided.

    MPMS - Manufacturing Process Management System

MPMS includes the functionality to design processes, describe agents, and execute the processes in an automated way by assigning activities to agents. It provides orchestration of activities at a global level, i.e., covering all work cells/production lines of a factory.


    Main non-functional requirements

As a logical functional component, MPMS shall be able to:
– Automatically execute a sequence of activities
– Monitor agents’ availability
– Monitor agents’ performance, including at least estimated and actual task completion times
– Monitor the current process state
– Provide the right information to agents to perform a task
– Handle exceptions at agent, task, and process level by halting/resuming their activities and initiating out-of-normal action processes
– (Re-)allocate appropriate agents to perform a task based on abilities, skills, authorizations, cumulative workload, overall manufacturing-system status, and availability
– Re-allocate agents in response to external events such as safety alerts or sensor failures

As a software technical component, MPMS shall be able to:
– Provide a modeler application to model processes
– Provide a process engine to automatically enact process models
– Provide tasklist applications to deliver tasks to human operators
– Support integration of custom UIs as tasklist applications
– Provide integration with local components to deliver tasks to robotic agents
– Support various platform environments
– Support various DBMSs
– Be deployable both on premises and in the cloud
– Provide security/authorisation mechanisms
– Integrate with middleware/context brokers and other components
– Support web services
– Support REST/Java APIs
– Support SOA/interoperability (NF)
– Be robust (NF) and runtime-scalable (NF)
– Be easy to use by process modelers, developers, and end users (e.g., human operators) (NF)

    Software requirements/dependencies

MPMS is built on Camunda Platform 7.15.0, Community Edition, and runs in every Java-runnable environment. It can support the following environments:

Container/application server for runtime components:
– Apache Tomcat 7.0 / 8.0 / 9.0
– JBoss EAP 6.4 / 7.0 / 7.1 / 7.2
– Wildfly Application Server 10.1 / 11.0 / 12.0 / 13.0 / 14.0 / 15.0 / 16.0 / 17.0 / 18.0

Databases:
– MySQL 5.6 / 5.7
– MariaDB 10.0 / 10.2 / 10.3
– Oracle 11g / 12c / 18c / 19c
– PostgreSQL 9.4 / 9.6 / 10.4 / 10.7 / 11.1 / 11.2; postgres:14-alpine (Docker image)
– Microsoft SQL Server 2012 / 2014 / 2016 / 2017
– H2 1.4
– Adminer 4.8.1 (UI for DB management) (Docker image)

Web browsers:
– Google Chrome (latest)
– Mozilla Firefox (latest)
– Internet Explorer 11
– Microsoft Edge

Java:
– Java 8 / 9 / 10 / 11 / 12 / 13 (if supported by your application server/container)

Java runtime:
– Oracle JDK 8 / 9 / 10 / 11 / 12 / 13
– IBM JDK 8 (with J9 JVM)
– OpenJDK 8 / 9 / 10 / 11 / 12 / 13
– openjdk:11.0.13-jre-slim (Docker image)

Camunda Modeler:
– Windows 7 / 10
– Mac OS X 10.11
– Ubuntu LTS (latest)

The Camunda Community Platform is provided under various open source licenses (mainly Apache License 2.0 and MIT). Third-party libraries or application servers included are distributed under their respective licenses. Detailed info on licences is provided in T7.2.

    Hardware requirements

For deploying MPMS on a local desktop PC, no special requirements apply; a powerful processor, plenty of RAM, and a decent graphics card are sufficient. The following specs should do:

– Processor: Intel Core i7-7700 @ 3.60 GHz / Intel Core i7-6700K @ 4.00 GHz / Intel Core i7-7700K @ 4.20 GHz / Intel Core i7-8700K @ 3.70 GHz
– Storage: SATA 2.5" SSD (e.g., 256 GB)
– RAM: 32 GB DDR4-2133 DIMM (2x16 GB); even 16 GB will not be a problem
– Graphics card: any modern standard graphics card

Monitor, keyboard, and mouse are essential. A touchscreen for operators might be handy.

A laptop could also work, e.g., with an Intel Core i7-7700HQ @ 2.80 GHz or Intel Core i7-6770HQ @ 2.60 GHz processor.

    Security threats

MPMS connects to DBs with password access. Users of web applications authenticate with passwords.

    Privacy threats

Agents (and specifically human operators) should be described with due diligence with respect to privacy data.

    Execution place

MPMS shall be deployed on local PCs on premises. It can also be deployed on a cloud server (e.g., on TUE premises), but extra security is required.

    Deployment instructions

    Deployment instructions and user manuals can be provided upon request.

    Can also be uploaded on RAMP if there’s a specific repository.

    User interface

Three different types of users:

– Process modelers: they use the Modeler application to model manufacturing processes (with BPMN 2.0).

– Application developers: they turn the process models designed by the process modelers into executable process models (i.e., ones that the Process Engine can interpret and enact). They can also build custom applications (e.g., tasklists, cockpits, smartwatch apps).

– (Human) managers and operators (end users): managers use the Cockpit/Dashboard and Admin applications (default or custom) to see the process state and manage the users of MPMS; operators use the Tasklist applications (default or custom) to receive tasks and provide input (e.g., task-completion confirmation).

For each type of user, there are manuals and webinars providing instructions.

    Supported devices

– Modeler runs on PC/laptop (see SW/HW requirements above)
– Process Engine runs on PC/laptop (see SW/HW requirements above)
– Web applications run on PC/laptop/tablet/smartphone (additionally, a prototype tasklist application has been built for smartwatch)

    User defined scenarios (non-technical) and relevant pilot cases

MPMS can be used in any pilot/open call for:
– process modelling (for bottleneck identification and enabling automated execution),
– dynamic agent allocation,
– process orchestration,
– automated process execution,
– integration with other IS (e.g., ERP) to get the right information and provide it to agents during execution,
– process status monitoring,
– task monitoring for job safety and quality of human operators,
– re-allocation of agents when job-safety and quality criteria are violated,
– etc.

    AR-CVI - AR for Collaborative Visual Inspection/AR for Task Instructions

The component provides visual support in manual assembly and inspection tasks. The human worker is guided by the visualized instructions while performing the associated tasks. The component can project the instructions onto a surface or show them on a screen. This provides a clean and structured working environment.


    Main non-functional requirements

The FIWARE messages are checked at 2 Hz.

    Software requirements/dependencies

Ubuntu:
– Ubuntu 18.04 or later
– Docker version 20.10.6 (previous versions later than v19 are also supported)

Windows:
– Windows 10
– WSL 2
– Docker version 20.10.6 (previous versions later than v19 are also supported)
– Windows X Server

    Hardware requirements

– A screen with at least 1920×1080 (HD) resolution support
– A projector [optional] with at least 1920×1080 (HD) resolution; front-surface mirrors might be necessary for projecting down onto a table; high brightness according to the illumination of the environment (~4400 lumens)
– A PC or laptop (preferably a 4-core CPU with 8 GB RAM and 15 GB HDD space)

    Security threats

    No specific security threats

    Privacy threats

    No specific privacy threats.

    Execution place

The component should be executed on the assembly-cell PC (local deployment). Necessary files can be mounted into the Docker container from the host machine or from a remote machine.

    Deployment instructions

Currently in the public GitHub repository: https://github.com/emecercelik/ar-cvi/. Later, the instructions will be available in RAMP.

    User interface

The instructions are displayed in full-screen mode.
The user can provide inputs to the component using the on-screen buttons, if these are defined with the display messages (templates).
The screen shows instruction images, PCB quality-check outputs (provided by another component), a written instruction, and buttons (at the bottom).
For a developer demonstration, refer to the following video:
https://syncandshare.lrz.de/getlink/fiC2jAGB9vBsmaRqtVVmzEg4/ar-cvi_demonstration.mp4

    Supported devices

    PC, Laptop, Projector, Screen

    User defined scenarios (non-technical) and relevant pilot cases

Support at the assembly cell for the human worker by displaying the manuals of an assembly or inspection task. The operator can provide inputs to start the inspection (which is provided by another component).

    WoT-IL - Interoperability Layer through Web of Things

It supports process management by improving the interoperability of the system; specifically, it addresses the system's ability to interoperate through standards.

    Read more

    Main non-functional requirements

In the case of OPC UA communication, the server must be configured and deployed, and the Node ID(s) must be provided so that the information can be read in real time.
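
For example, reading a value from a given Node ID could look like the following minimal sketch, here using the Python asyncua library; the endpoint URL and Node ID are illustrative assumptions (the component itself is built with Node JS and/or Java, per the requirements below):

# Minimal OPC UA read sketch using the asyncua library;
# the endpoint URL and Node ID are illustrative assumptions.
import asyncio
from asyncua import Client

async def main():
    async with Client(url="opc.tcp://localhost:4840") as client:
        node = client.get_node("ns=2;i=1234")  # the Node ID to read
        value = await node.read_value()
        print("Current value:", value)

asyncio.run(main())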

    Software requirements/dependencies

• Node JS and/or Java, though it can also be developed with other programming languages
• Docker and Docker Compose
• Orion-LD Context Broker

    Hardware requirements

• 16 GB RAM, 10 GB of disk space
• Linux (Ubuntu)

    Security threats

The component wraps REST APIs in another descriptor; it does not manage the security of the APIs it describes. The server that serves the descriptor will be secured through HTTPS and a certificate.

    Privacy threats

    No specific privacy threats.

    Execution place

The component should be executed on a local PC on the shop floor.

    Deployment instructions

We provide a Docker container with a configuration file.

    User interface

In this version, a command-line interface has been implemented. A graphical interface is expected for the next version.

    Supported devices

    PC/Laptop

    User defined scenarios (non-technical) and relevant pilot cases

A pilot or a SHOP4CF component owner wants to offer some functionality to the outside world for testing with external users (for instance, in open calls). With our component, they build a WoT (Web of Things) interface that wraps their component so that it can be used by third-party developers; a minimal example of such a descriptor is sketched below.
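
To illustrate what such a WoT wrapper exposes, the following is a minimal W3C Thing Description sketch, here generated in Python, for a single hypothetical REST endpoint; the title, property name, and URL are assumptions:

# Minimal W3C WoT Thing Description for a hypothetical wrapped REST API;
# all names and URLs are illustrative assumptions.
import json

thing_description = {
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "title": "WrappedComponent",
    "securityDefinitions": {"nosec_sc": {"scheme": "nosec"}},
    "security": ["nosec_sc"],
    "properties": {
        "status": {
            "type": "string",
            "forms": [{
                "href": "https://example.org/api/status",  # the wrapped endpoint
                "op": ["readproperty"],
            }],
        },
    },
}

print(json.dumps(thing_description, indent=2))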

    Safety planning tool

The main functionality of this supporting tool is to allow designers of robotics applications and system integrators to analyse the required size of the minimum separation distance for cobot applications featuring Speed and Separation Monitoring (SSM). The minimum separation distance is determined by a combination of the following parameters:

    • Robot type, including braking parameters 
    • Robot program (speeds for individual joints for planned movements) 
    • Payload (including tooling and manipulated parts) 
• Safety sensor attributes, including reaction time, the distance a person can reach into the workspace before being detected, and measurement uncertainty

The supporting tool integrates, into the overall simulation environment, user interfaces for specifying these values, and it performs the calculations to show the instantaneous and accumulated minimum safety distances, both in 2D (projected on the floor) and in 3D (around the robot).
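
For intuition, the protective (minimum) separation distance for SSM is commonly computed with the simplified constant-speed formula from ISO/TS 15066, which combines exactly the parameters listed above. A minimal sketch follows; the numeric inputs are illustrative assumptions, not values produced by the tool:

# Simplified protective separation distance per ISO/TS 15066 (SSM);
# all numeric inputs below are illustrative assumptions.

def protective_separation_distance(
    v_h: float,   # human approach speed [m/s] (1.6 m/s if not measured)
    v_r: float,   # robot speed towards the human [m/s]
    t_r: float,   # reaction time of the safety system [s]
    t_s: float,   # robot stopping time [s]
    b: float,     # robot stopping distance [m]
    c: float,     # intrusion distance: reach into the sensing field before detection [m]
    z_d: float,   # position uncertainty of the human (sensor measurement) [m]
    z_r: float,   # position uncertainty of the robot [m]
) -> float:
    """S_p = v_h*(t_r + t_s) + v_r*t_r + b + c + z_d + z_r"""
    return v_h * (t_r + t_s) + v_r * t_r + b + c + z_d + z_r

# Example with illustrative values:
print(protective_separation_distance(
    v_h=1.6, v_r=0.5, t_r=0.1, t_s=0.3, b=0.2, c=0.1, z_d=0.05, z_r=0.02,
))  # ≈ 1.06 m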

    Read more
    Main non-functional requirements
    N/A
    Software requirements/dependencies
Visual Components v4.4
    Hardware requirements
See the Visual Components hardware requirements
    Security threats
    None
    Privacy threats
    None
    Execution place
    Local computer
    Deployment instructions
    Instructions are provided in PDF format.
    User interface
The user interface is integrated into the Visual Components interface.
    Supported devices
    Computer workstation.
    User defined scenarios (non-technical) and relevant pilot cases
In the Design Phase, the user can determine the size of the required minimum separation distances for a specific application. This is used to design the overall application, choose the robot and safety sensors, and determine safety-related aspects such as the required floor space in the factory.

    Toolkit for planning safety of collaborative robotics applications

    The main functionality of this supporting tool is to help designers of collaborative robotics applications ensure the safety of their applications. It provides examples of best practices, how-to guides, information related to legal issues, and information related to relevant standards and regulations.

    Read more
    Main non-functional requirements
    N/A
    Software requirements/dependencies
    An internet browser
    Hardware requirements
A computer or mobile device with an internet browser and a PDF viewer.
    Security threats
    None
    Privacy threats
    None
    Execution place
    Local
    Deployment instructions
    No instructions necessary
    User interface
    A website
    Supported devices
    Computer and mobile devices
    User defined scenarios (non-technical) and relevant pilot cases
    This tool is intended for use in the Design Phase for any pilots that require safety evaluation.

    Protocols for safety validation

The main functionality of this supporting tool is to help system integrators and designers of collaborative robot applications validate the application at the system level, in the form of a test, to prove that the system is safe. The support takes the form of a step-by-step guide (a PDF document) specifying which information the user needs, which measuring equipment to use, and how to execute the test and evaluate its results. A wide variety of protocols is available, as these are specific to the robot device type and safeguarding strategy.

    Read more
    Main non-functional requirements
    N/A
    Software requirements/dependencies
    An internet browser
    Hardware requirements
A computer or mobile device with an internet browser and a PDF viewer.
    Security threats
    None
    Privacy threats
    None
    Execution place
    Local
    Deployment instructions
No instructions are necessary (the protocols themselves serve as instructions).
    User interface
    A website
    Supported devices
    Computer and mobile devices
    User defined scenarios (non-technical) and relevant pilot cases
    This tool is intended for use in the Design Phase for any pilots that require safety evaluation.

    Design thinking methodology supporting tool

The main functionality of this supporting tool is to facilitate the incorporation of innovation and creative problem-solving methodologies into industrial processes. The tool combines common visual resource-management tools (such as Kanban) with collaborative creation methodologies such as design thinking. In this way, the iterative refinement process introduced by these collaborative techniques is expected to improve how workers address daily production problems, incorporating, in a natural way (without extra work), the skills and tools needed to improve. The expected result is the systematic introduction of this iterative process for solving daily production problems, such that workers naturally come to understand the specific needs of the users of services/products and redefine problems in an attempt to identify alternative solutions that are more productive, more attractive, and more efficient, without a high investment in resources.

    Read more
    Main non-functional requirements
    N/A
    Software requirements/dependencies
    Internet Browser
    Hardware requirements
    A computer or mobile device with an internet browser
    Security threats
    None
    Privacy threats
    None
    Execution place
    Online
    Deployment instructions
    No instructions are necessary
    User interface
    Website
    Supported devices
    Computer and mobile devices
    User defined scenarios (non-technical) and relevant pilot cases
This tool is intended for use in the Design Phase for any pilots that need to systematically solve production problems or improve their processes.

    Methodology to model end-to-end manufacturing processes with advanced dynamic and human-centric task allocation (MPDesign)

The main functionality of this supporting tool is to provide insight into the composition of the flexible manufacturing process in terms of the availability of resources and the correctness of the input and output flows.

    This is achieved by facilitating the step-by-step gathering of information related to manufacturing tasks and potential actors involved in the production process. The outcome may be used to assign workers to tasks fitting their qualifications, allocate robotic agents to tasks that may harm humans, and adequately plan upskilling of the personnel. MPDesign is also an excellent means to document the manufacturing process’s design for future reference. 

    Read more
    Main non-functional requirements
    N/A
    Software requirements/dependencies
A Microsoft Access license or the freely available Microsoft Access runtime environment
    Hardware requirements
    PC with Windows operating system and standard hardware parameters
    Security threats
    None
    Privacy threats
None from the MPDesign implementation point of view. A privacy breach may occur if MPDesign is used contrary to GDPR rules.
    Execution place
    Local PC
    Deployment instructions
Deployment and usage instructions are provided in PDF format. The usage instructions include an exemplary scenario to illustrate the use of MPDesign. The tool itself is equipped with help buttons, explanation boxes, and information drawn from the exemplary scenario described in the user manual.
    User interface
    Stand-alone app (executable Microsoft Access application)
    Supported devices
    PC
    User defined scenarios (non-technical) and relevant pilot cases
MPDesign is intended for use in the Design Phase for any scenario that allows various actors to be assigned to a task or actors to be shifted between tasks. MPDesign is not intended for scenarios where each task has a fixed assignment of a dedicated actor.

    Human-Machine interaction modelling and validation

The main functionality of this supporting tool is to provide a set of recommendations (based on interaction patterns and design guidelines) that designers and developers should implement in their interfaces to ensure usability, accessibility, and good UX. The tool is applicable for designers/developers at an early stage of product development as well as for evaluating the interface of an already developed solution.

    The users would have to answer a questionnaire covering relevant aspects of the human-machine interactions, such as the type of devices to be used (e.g., tablet, PC), the type of technology (e.g., Augmented Reality, Web), and the target user profile.

    Read more
    Main non-functional requirements
    N/A
    Software requirements/dependencies
    Web browser
    Hardware requirements
    N/A
    Security threats
    None
    Privacy threats
    None
    Execution place
    Local / Online
    Deployment instructions
No instructions are necessary; the tool itself provides an easy-to-follow workflow.
    User interface
    Website
    Supported devices
    Computer and mobile devices
    User defined scenarios (non-technical) and relevant pilot cases
This tool is applicable for designers/developers in the UI design phase of their product development, as well as for evaluating the interface of an already developed solution.

    Subjective user experience and acceptance assessment tool

The main functionality of this supporting tool is to give solution providers or developers the possibility to collect feedback from users of a technical solution with a short, ready-made questionnaire. The questionnaire is based on the design and evaluation framework of the SHOP4CF project, and it addresses seven themes: user experience, usability, user acceptance, usefulness, ergonomics, safety, and ethics. For solution providers, the tool offers an easy way to collect holistic information about their solutions; for workers, it provides an effortless way to participate in the design of their work tools.

    The use of the questionnaire can be started by creating a project, adding an expiration date to the ready-made questionnaire, and sending the questionnaire link to the respondents. After four respondents have answered the questionnaire, it is possible to see the results (e.g., an overview of the human factors topics and detailed information for each question). The results are automatically updated when the questionnaire receives more responses.

    Read more
    Main non-functional requirements
    The tool is currently available only as a demo version.
    Software requirements/dependencies
    An internet browser
    Hardware requirements
    N/A
    Security threats
    None
    Privacy threats
No privacy threats, as the tool is currently provided only as a demo version. Even though the tool does not collect personal data, such as respondents' names or contact information, it is possible that some information related to respondents is revealed, e.g., through free-text responses. To avoid this, VTT can be contacted if there is a need to use the tool for collecting real feedback from workers; in that case, VTT can handle data management and check that no information that could lead to the identification of anyone is conveyed to the feedback collector.
    Execution place
    Online.
    Deployment instructions
    No instructions needed.
    User interface
    Website.
    Supported devices
    Computer and mobile devices.
    User defined scenarios (non-technical) and relevant pilot cases
This tool is intended for use in the Design Phase for any pilots that need an easy way to collect user feedback on their solution. It can be used once or several times while developing a solution, and it fits best in the phase when the solution has been tried out by test users (in either short-term or long-term trials).

    Human Performance Assessment Tool (PATH)

The main functionality of this supporting tool is to support the rapid assessment of various factors affecting human performance and part quality in industrial operations. In particular, the assessment reviews factors such as time deviations (faster or slower executions) and assembly sequence modifications or errors (forgotten parts, incorrect operations, etc.). Beyond the rapid assessment of human performance factors, the tool proposes potential areas of improvement. An SSH (social sciences and humanities) professional is required for a full assessment before further actions are taken.

PATH is a web-based application that can run on phones, tablets, or PCs. It enables workers and managers to provide feedback about the tasks they perform, the working conditions, or their attitudes. Privacy aspects have been built into the tool: the questionnaire is anonymous and only statistical results are provided.
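
To illustrate the "statistical results only" principle, a minimal Django ORM sketch follows; the Response model, its fields, and the app name are assumptions, not PATH's actual schema:

# Illustrative sketch; the model, fields, and app name are assumptions,
# not PATH's actual schema. Only aggregates are returned, never raw answers.
from django.db.models import Avg, Count
from django.http import JsonResponse

from surveys.models import Response  # hypothetical model: question_id, score

def question_stats(request):
    stats = (
        Response.objects
        .values("question_id")                       # group by question
        .annotate(n=Count("id"), mean=Avg("score"))  # aggregates only
        .order_by("question_id")
    )
    return JsonResponse({"results": list(stats)})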

    Read more
    Main non-functional requirements
The tool is provided as a demo version on the SHOP4CF web page. The results shall be assessed by an SSH professional.
    Software requirements/dependencies
    Backend: Server with Linux, Python, Django (Docker)
    Frontend: An internet browser
    Hardware requirements
    N/A
    Security threats
    None
    Privacy threats
No privacy threats when the tool is used as a demo version. The tool does not collect personal data; however, it cannot be ruled out that some information related to respondents could be used to identify a user. JVERNE can be contacted if there is a need to use the tool for collecting real feedback from workers; JVERNE can then handle data management and check that no information that could lead to the identification of anyone is conveyed to the feedback collector.
    Execution place
    Online.
    Deployment instructions
    No instructions needed.
    User interface
    Webpage.
    Supported devices
    Computer, tablet or smartphone devices.
    User defined scenarios (non-technical) and relevant pilot cases
This tool can be used at any time in a production environment to identify potential areas where workers and factories can reduce the risk of errors during production. Assessment by an SSH professional is required to achieve a proper understanding and solution proposal.