SHOP4CF stands for Smart Human Oriented Platform for Connected Factories.
1. WEBSITE OWNERSHIP
INTERNETSIA, S.L. (hereinafter, ISDI Accelerator), with legal address at c/ Viriato 20 – Bajo; 28010 Madrid (Spain), tax number B85663359 and registered with the Registro Mercantil de Madrid, Tomo 26.599, Folio 158, Sección 8, Hoja M-479385, is the holder of this web page (hereinafter, the Website). You may contact us through the following contact details:
Tel. (0034) 91 737 39 25
Email: email@example.com
The domain name through which you have accessed this Website is held by ISDI Accelerator. This Website will not be used in connection with other contents, products and/or services which are not owned by ISDI Accelerator and/or its affiliates and/or branch offices.
This Legal Notice contains all the terms and conditions that regulate:
a) the access, navigation, and use of the Website;
b) the responsibilities arising from the use of the Website and/or the use of the services which may be offered through the Website;
c) the provision and use of Website content.
Notwithstanding anything herein, the foregoing is without prejudice to the fact that ISDI Accelerator may establish specific case-by-case conditions which regulate the use, provision and/or contracting of products or services which are offered to Users through the Website. In any case, those specific conditions will form an integral part of this Legal Notice.
Performance by the User of any single act among the following will constitute acceptance without reservation of each and every one of the rules found in this Legal Notice and will be taken as consideration on the part of the User:
a) accessing the Website;
b) filling out forms through the Website;
c) sending requests for information or complaints;
d) accepting contractual offers or subscriptions;
e) in general, all acts of a similar nature to those carried out when filling out forms and/or when contacting via email addresses published on the Website.
You must therefore read and understand the content of this Legal Notice. Should the use, provision and/or contracting of products or services be offered through the Website, their mere use and/or request by the User will equally constitute the User's acceptance without reservation of the corresponding specific conditions established, which will also form an integral part of this Legal Notice.
3. WEBSITE USE AND ACCESS
Access to the Website by the Users is free. However, the use, provision and/or contracting of the products and services which may be offered by ISDI Accelerator may be subject to the previous acceptance of formal requisites such as the filling out of corresponding forms, payment of fees and costs and/or the previous acceptance of specific conditions which apply to the same.
The Website is designed for use by an adult general audience (+18) and is not intended for use by children. Children under 18 years old are not allowed to access the Website and/or services. Merely accessing the Website does not imply the establishment of any link or commercial relationship between ISDI Accelerator and the User, except where the appropriate means have been established and the User has previously complied with the requisites which are established.
Information on the Website relating to products or services offered by ISDI Accelerator is solely for information and advertising purposes unless otherwise stated.
If, for the use, provision and/or contracting of a product or service offered through the Website, the User is obliged to register, he/she will be under an obligation to provide accurate information, guaranteeing the authenticity of all the data provided at the time of filling out the pre-established forms required to access the corresponding products or services. If, as a result of the User's registration, a password is issued, the User is bound to use it diligently and to keep it secret. Consequently, Users are responsible for the adequate custody and confidentiality of all identifying data and/or passwords given to them by ISDI Accelerator, and are bound not to allow or facilitate their use by third parties, whether temporarily or permanently, nor to provide access to others. The User will be entirely responsible for any use and/or contracting of products or services by illegitimate third parties resulting from negligent use or custody of a password, from a password having been given to a third party, and/or from the loss of the password by the User.
Furthermore, it is the User’s duty to immediately notify ISDI Accelerator of any circumstances which may lead to the improper use of identifying data and/or passwords, such as theft, loss or non-authorized access, so that ISDI Accelerator can proceed with prompt cancellation. Without limitation to any other provision hereof, for the duration of any such period during which any such circumstances are not communicated to ISDI Accelerator, ISDI Accelerator will be exempt from any responsibility which could derive from the improper use of the identifying data or use or misuse of passwords by third parties.
In all cases, the access, navigation and use of the Website, and the use or contracting of the services or products offered through the Website, is the sole and exclusive responsibility of the User. The User is therefore bound to diligently and faithfully observe any additional instructions given by ISDI Accelerator or by ISDI Accelerator’s authorized employees in relation to the Website’s use and its contents.
The User is therefore bound to use the contents, products, and services in a diligent, correct and lawful manner, complying with current legislation and, in particular, agrees to abstain from:
(i) Using any of the same in any manner which is against the law or that offends reasonable standards of general public morality, ethics or public order, or which is in any way contrary to the instructions of ISDI Accelerator.
(ii) Using any of the same in a way which harms the legitimate rights of third parties.
(iii) Accessing and/or using the Website for professional or business purposes, or incorporating the services and contents of the Website into their own business activities.
(iv) Using contents and products and, in particular, information of any nature obtained through the Website or the services, for advertising purposes or any form of communication with direct sales or any other commercial aim, or for unsolicited messages aimed at a group of people regardless of their purpose, as well as abstaining from commercializing or circulating any such information in any way.
4. DISCLAIMER OF WARRANTIES
The Website, including without limitation, all services, features, content, functions, and materials provided through the Website, are provided “as is”, without warranty of any kind, either express or implied. The Website may contain information, opinions, advice, warnings, and statements provided by various information and content sources as well as by users of the Website; the company assumes no responsibility whatsoever for the accuracy or reliability thereof, nor does it endorse any such information, opinions, advice, warnings, or statements. The company shall have no responsibility for users' decisions based on the information provided by or through the Website, and users should seek professional advice where appropriate regarding the evaluation of any specific information, opinion, advice, warning or other content, including but not limited to legal, financial, health, counselling or lifestyle content. Any information posted on the Website is intended for general purposes only. The company does not represent or endorse the accuracy or reliability of any such information or contents. Consequently, the company does not warrant the timeliness, reliability, use or veracity of the information, nor the sequence, accuracy or completeness of such information, nor the results obtained from the given use of such information, and shall have no liability to the user, including in the event of defamatory, offensive or illicit materials, content or information.
The company makes no representation or warranty related to the accuracy, reliability, completeness or timeliness of the content, services, products, text, graphics, links, or other items contained within the Website, or the results obtained from accessing and using the Website and/or the content contained herein.
In particular, the company is not responsible for and does not warrant:
(i) the continuity of the Website’s contents and/or the availability or accessibility of the Website or its technical continuity.
(ii) that contents or products are error-free or that defects will be corrected.
(iii) the absence of viruses and/or other harmful elements in the Website or the server which hosts it.
(iv) the invulnerability of the Website and/or the impregnability of the security measures adopted by the same.
(v) the usability or performance of the Website’s contents or services.
(vi) any other damages of any nature which may be caused by reasons pertaining to the Website not functioning or to the defective functioning of the Website or any other website, or with regard to any links which fail.
ISDI Accelerator applies reasonable measures to avoid errors in the content published in the Website. The content offered through the Website is updated periodically and ISDI Accelerator reserves the right to modify it at any time. ISDI Accelerator will not be held responsible for the consequences which may derive from any errors in any contents and/or services provided by third parties on the Website.
Any communication or transmission of contents to the Website which infringes the rights of third parties and/or the content of which is threatening, obscene, defamatory, pornographic, xenophobic, which undermines personal dignity or the rights of minors or which is contrary to current legislation, or any conduct of the user which incites or constitutes a criminal offence, is totally prohibited.
5. LIMITATION OF LIABILITY AND INDEMNIFICATION
To the maximum extent permitted by applicable law, in no event, including but not limited to negligence, shall the company or any of our affiliates, branches or any of our directors, officers, employees, agents or content or service providers be liable for any direct, indirect, special, incidental, consequential, exemplary or punitive damages arising from, or directly or indirectly related to the use of, or the inability to use, the Website or the contents, features, materials and functions related thereto. The total liability of the company, affiliates, branches, directors, officers, employees, agents or content or service providers to users for all damages, losses and causes of action whether in contract or tort (including but not limited to negligence or otherwise) arising from the use of the Website shall be limited to and not exceed the amount, if any, paid by the user to the company for use of the Website or purchase of products or services through the Website. Users waive the right they might otherwise have to trial by jury and to class and collective actions.
The User agrees to hold ISDI Accelerator and any of its affiliates, branches, officers, directors, employees and agents harmless from any and all claims, liabilities, costs and expenses, including attorneys' fees, arising in any way from the use of the Website, the placement or transmission of any message, content, information, software or other materials through the Website, or from violation of the law or the terms and conditions contained in this Legal Notice.
6. CANCELLATION OF ACCESS AND USE
ISDI Accelerator may, at its sole discretion, deny, withdraw, suspend and/or block at any time and without prior notice, access to the Website to those users who fail to comply with this Legal Notice, being able to delete their registration and all information and files relating to the same.
The company shall not assume any liability to any user for the cancellation of access to the Website for the cause stated in this paragraph.
7. INTELLECTUAL PROPERTY RIGHTS
ISDI Accelerator is the owner and/or the rights holder and/or has obtained a corresponding license of the intellectual property rights and/or image rights, where necessary and/or subsisting, pertaining to the contents available through the Website. The term “contents” as used anywhere herein, extends but is not limited to the texts, graphic designs, drawings, codes, software, photographs, videos, sounds, indices, images, brands, logos, expressions, information and, in general, any other creation which is protected by national regulations and international treaties on intellectual property.
All intellectual property rights in and to all contents are reserved and, in particular, it is forbidden to modify, copy, reproduce, publicly communicate, transform or distribute in any way the totality or part of any contents included in the Website for public or commercial means unless with the prior, express and written authorization of ISDI Accelerator or, as the case may be, from the third party owner or rights holder of the same. Among others, the use of any technology to extract and collect information and contents from the Website is forbidden.
Access to and navigation through the Website will in no case be understood as a relinquishment, transmission, license or total or partial transfer of any rights by ISDI Accelerator howsoever. Consequently, it is not permitted to delete, evade or manipulate any indicators of rights ownership (for example “copyright”, “©”, “trademark” or “™” indicators) or other identifying data, whether in favor of ISDI Accelerator or any other parties, and/or any technical protection mechanisms, fingerprints or whichever information or identification mechanisms may be contained in, or otherwise pertaining to, any contents.
Any references to names and commercial or registered brands, logos or other distinctive marks, which are owned by ISDI Accelerator or by others, implicitly forbid their use without the authorization from ISDI Accelerator or from the owner or the rights holder. At no time, unless otherwise expressly stated, shall access or use of the Website and/or its contents, give the User any right whatsoever to the brands, logos and/or distinctive signs included in the Website, each of which is protected by Law.
8.1 Links from the Website to other websites
ISDI Accelerator may offer direct or indirect links to other Internet websites which are outside of the Website. The presence of these links on the Website has a purely informative purpose and at no time constitutes an invitation to contract the products and/or services offered on such websites. Furthermore, no such link implies the existence of a commercial link or relationship with the person or entity owning the website to which the link is offered. In any such case, ISDI Accelerator will not be responsible for establishing general conditions to be taken into account in the use, provision or contracting of any such services or products and, as such, ISDI Accelerator may not be held responsible in any way in relation to any such products or services.
ISDI Accelerator does not have the knowledge, human resources or technical means to control or approve the information, contents, products or services provided by or through other websites to which it offers a link from the Website. Consequently, the company will not take any responsibility for any matters relating to such third-party Websites linked with the Website. Specifically, without limitation, the company will not be responsible in any way whatsoever for the functioning, access, data, information, files, quality, products and services, links and/or content of any such Websites.
Notwithstanding the above, where ISDI Accelerator becomes aware that the activity or the information to which it links is illegal, constitutes a crime or damages the rights or property of third parties, it will act promptly and diligently to delete or cease using the corresponding link.
Likewise, if Users become aware of the illegality of the activities carried out through any such third-party Websites, they will be under the obligation to communicate such matter to ISDI Accelerator at the earliest reasonable opportunity such that ISDI Accelerator may evaluate the same and act appropriately.
8.2 Links from other websites to the Website
If any User, entity or webpage wishes to establish a link to the Website of any nature, they must comply with the following conditions:
(i) They will need to obtain the prior, express and written authorization from ISDI Accelerator.
(ii) The link will only be made to the Website’s homepage unless otherwise stated or authorized.
(iii) The link will need to be absolute and complete, i.e. it must lead the User through a single click to the main page and must include the whole of that page. In no case, unless otherwise authorized by ISDI Accelerator, may the webpage from which the link is made:
– reproduce the Website in any way,
– include the Website as part of its own website or within frames of such website,
– create a browser window over any of the Website's pages.
(iv) On the Website from which the link is established, unless with ISDI Accelerator’s express prior written approval, no declaration of any nature may be made to the effect that ISDI Accelerator has authorized the link. If ISDI Accelerator providing the link from its webpage to the Website wishes to include on its own webpage any brand, denomination, commercial name, label, logo or any other sign which identifiesISDI Accelerator and/or the Website, they must obtain the previous, express and written authorization from ISDI Accelerator.
(v) ISDI Accelerator forbids the link to the Website from all those websites which contain materials, information or contents which are illegal, degrading, obscene and in general, which infringe upon morality, public order, current legislation, generally accepted social rules or which harm the legitimate rights of third parties.
When it is required that the User registers or provides personal data (in order to access services, subscribe to newsletters, carry out any registration process, request information, acquire products, make consultations or complaints or to solicit any contractual transaction, among others), the User will be alerted as to the need to provide his/her personal data.
10. DURATION AND MODIFICATION
ISDI Accelerator reserves the right to modify, without prior notification, any of the terms and conditions of this Legal Notice, as well as the particular terms and conditions which may have been established for the use and/or contracting of the products and services provided through the Website, whenever it deems appropriate for business reasons and/or in order to adapt to and comply with any changes in legislation and in technology which have become effective since the last publication on the Website.
The term of this Legal Notice coincides with the duration of its publication and exhibition in the Website, until such time as it is totally or partially modified. At such a moment, the modified terms & conditions will become binding.
ISDI Accelerator may, at any time, finalize, cancel or interrupt access to the published content. In any such case, the User will have no right to claim compensation of any kind. Following any such cancellation, the prohibitions that are set out above in this Legal Notice regarding the use of contents will remain valid.
For any communication between ISDI Accelerator and the User, the User must contact ISDI Accelerator through the postal and/or email address provided on the Website. Communications from ISDI Accelerator to the User must comply with the contact information provided by the User. The User therefore expressly accepts the use of the email address provided as a valid means for the exchange of information between ISDI Accelerator and the User.
The headings of the different sections herein only have an informative nature and do not affect, qualify or modify the interpretation of this Legal Notice. Where there is any discrepancy between the effects of this Legal Notice and the particular terms & conditions which may be established in relation to any specific products or services offered on the Website, the latter will prevail. Where any one of the provisions set forth in this Legal Notice could be considered as not being totally or partially binding by a Court of Law or by a recognized regulatory body, such nullity will not affect the other provisions contained in this Legal Notice nor any other provisions which have been established. Where ISDI Accelerator does not exercise any of the rights contained in this Legal Notice, such event will not constitute a relinquishment of this right, unless expressly stated in writing.
13. GOVERNING LAW AND JURISDICTION
This Legal Notice and any relationship arising out of its acceptance or related hereto shall be governed exclusively by the laws of Spain.
The competent courts to resolve any controversy that arises from or is related to this Legal Notice and/or any relationship arising from its acceptance will be determined according to the applicable law.
© 2020 INTERNETSIA, S.L. All rights reserved.
The ROS2 Monitoring component is meant for developers using ROS2: a dashboard for monitoring health and logs, and for examining the services, publishers and subscribers associated with ROS2 nodes. The component includes a GUI for interacting with it; through the GUI the user can set up the ROS components to be monitored.
Main non-functional requirements
The component has no real-time responsiveness requirements
Platform: Ubuntu, macOS, Windows
Requirements: ROS2, Docker
64-bit system capable of running Ubuntu, macOS or Windows, and ROS2
The component should operate behind a firewall during production
No privacy threats have been identified
Private cloud/PC near production
Deployment instructions can be found on a public repository
A dashboard showing the current status of all ROS2 nodes in the system
User-defined scenarios (non-technical) and relevant pilot cases
The component can be used to monitor a collection of ROS2 nodes
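As a hedged illustration of the kind of state such a dashboard tracks, the sketch below models node health in plain Python. The class, field names and node names are invented for this example; they are not the component's actual API, which works through ROS2 introspection.

```python
from dataclasses import dataclass, field

# Illustrative in-memory model of a ROS2 monitoring dashboard.
# All names here are assumptions, not the real component's interface.

@dataclass
class NodeStatus:
    name: str
    alive: bool
    publishers: list = field(default_factory=list)
    subscribers: list = field(default_factory=list)
    services: list = field(default_factory=list)

class Ros2Dashboard:
    def __init__(self):
        self._nodes = {}

    def report(self, status: NodeStatus):
        """Record the latest status for a monitored node."""
        self._nodes[status.name] = status

    def unhealthy(self):
        """Return the names of monitored nodes that are currently down."""
        return sorted(n for n, s in self._nodes.items() if not s.alive)

dash = Ros2Dashboard()
dash.report(NodeStatus("/camera_driver", alive=True, publishers=["/image_raw"]))
dash.report(NodeStatus("/gripper_controller", alive=False, services=["/grip"]))
print(dash.unhealthy())  # ['/gripper_controller']
```

In the real component this state would be refreshed from the ROS2 node graph rather than reported manually.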
The main functionality of this component is to enable robot control using natural human actions as input, in this case hand gestures captured with a gesture-tracking glove. With the sensor glove by CaptoGlove LLC, the operator makes distinguishable hand gestures to command and control the robot in the process. The component is reconfigurable for different control scenarios.
For the CaptoGlove, the Capto Suite must be installed; Docker must be installed on the host machine.
20 GB of hard drive space; 2 GB of RAM recommended.
The CaptoGlove SDK runs on the host Windows PC, and all other parts of M2O2P are Docker images.
Application instructions are provided in PDF format, with video instructions to follow.
The component's user interface is mainly its web UI, which can be reached from the host machine by navigating to localhost:54400 in a browser.
Any Windows 10 machine, CaptoGlove
The component can be used in any system where a human operator needs to send commands or complete tasks using the glove. In the Siemens pilot case, the component was used to complete tasks in a collaborative bin-picking robotic application.
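A minimal sketch of how glove input could be turned into robot commands, assuming per-finger flex readings in the range 0.0 to 1.0. The gesture patterns, threshold and command names are illustrative assumptions, not M2O2P's actual configuration format:

```python
# Hypothetical mapping from finger-flex readings to commands.
# Order of fingers: (thumb, index, middle, ring, pinky); 1 = flexed.
GESTURES = {
    (1, 1, 1, 1, 1): "stop",       # closed fist
    (0, 1, 0, 0, 0): "next_task",  # pointing index finger
    (0, 1, 1, 0, 0): "confirm",    # two fingers extended... pattern is invented
}

def classify(flex_values, threshold=0.6):
    """Turn raw flex sensor values (0.0-1.0 per finger) into a command."""
    pattern = tuple(int(v >= threshold) for v in flex_values)
    return GESTURES.get(pattern, "unknown")

print(classify([0.9, 0.95, 0.88, 0.91, 0.85]))  # fist -> 'stop'
print(classify([0.1, 0.9, 0.2, 0.1, 0.0]))      # point -> 'next_task'
```

Reconfiguring the component for a different control scenario would then amount to swapping the gesture-to-command table.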
The main functionality of this component is to enable the training and support of human workers in collaborative tasks. To this end, the main activities of the collaborative task and the interaction between worker and robot are recreated in Virtual Reality (VR). Using a virtual reality headset and equipment, the worker can remotely visualize, monitor, and perform the training of collaborative tasks with robots. It should be noted that, based on the use case requirements (e.g., workspace and environment, equipment, safety aspects and interfaces to other components), several data inputs might be needed for the creation of custom simulations. The component is divided into a sandbox mode (using pre-programmed actions) and a dynamic mode which, depending on configuration, can receive data inputs from ROS nodes for on-the-fly creation of tasks.
Windows 10 and a compatible browser (Firefox, Chrome, etc.)
A VR headset supported by A-Frame with controller positional tracking, as listed here: https://aframe.io/docs/1.2.0/introduction/vr-headsets-and-webvr-browsers.html
Private cloud (meaning in pilot premises), cloud
A main configuration page and a sample workcell layout in VR mode.
This component could be used to train workers in a collaborative assembly process by virtualizing the whole procedure in VR, allowing the worker to interact with the robot and components prior to working in the actual setup. Keep in mind that, since this application is meant for training, a concrete step-by-step process is required to design the training and to benefit fully from it.
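Because the training relies on a concrete step-by-step process, its task model can be thought of as an ordered sequence that only accepts steps in order. The sketch below, with invented step names, illustrates that idea; the component's actual task model is use-case specific:

```python
# Hypothetical step-sequence model for a collaborative training session.
class TrainingSession:
    def __init__(self, steps):
        self.steps = list(steps)
        self.current = 0

    def complete(self, step):
        """Accept a step only if it is the next one in order."""
        if self.current < len(self.steps) and step == self.steps[self.current]:
            self.current += 1
            return True
        return False

    @property
    def finished(self):
        return self.current == len(self.steps)

# Invented example sequence for a worker-robot hand-over task.
s = TrainingSession(["pick_part", "hand_over_to_robot", "inspect", "place"])
assert s.complete("pick_part")
assert not s.complete("inspect")  # out of order, rejected
```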
The Digital Twin for control and planning (DT-CP) allows users to create a virtual replica of the production line of the use case. The component is divided into two parts: a simulator, to experiment with alternative models to be implemented in the real line, and a monitoring dashboard for an overview of the line. The time-based simulator takes as input several configuration parameters (e.g., takt time, shift time), process descriptions and resources (e.g., workers) with different skill sets. The user can thus modify and test different production strategies that would be more complicated and time-consuming to test in the real line. The monitoring dashboard can provide the user with an overview of the status of production and, where implemented, a notification mechanism for alerts. It should be noted that some functionalities (e.g., data sources for the monitoring dashboard, data modelling, and configuration parameters for the simulator) depend on the use case and might require further adaptation for proper integration in the setup.
Device capable of handling web-based applications
Instructions will be provided on the Git page
The component will be divided into two parts: an online dashboard for monitoring the line in real time and a simulator, to experiment with alternative models to be implemented in the real line.
The monitoring dashboard is intended to follow the flow of product from one workstation to another.
The simulator allows the user to modify and test different production strategies that would be more complicated and time-consuming to test in the real line. The simulator setup consists of four steps: introduction of the initial process execution parameters, design of the layout, assignment of process descriptions and allocation of resources.
Coupled with data collection applications, the monitoring part of the component could be used to get an overview of the process and to provide notification and control mechanisms. Data inputs, notification handling, and control mechanisms may need to be further adapted per use case.
The simulator allows the user to define a time-based simulation of an assembly process by setting concrete configuration parameters (e.g., takt time, shift time) and assigning resources (e.g., workers) with additional resource parameters (e.g., skill set). The result is a report against the pre-provided production targets. Specific configuration parameters and settings may have to be further adapted per use case.
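To make the takt-time and skill-set parameters concrete, here is a deliberately simplified time-based sketch. The formula (skill as a factor scaling the nominal takt time) is an assumption made for illustration, not the DT-CP simulator's actual model:

```python
# Hypothetical minimal shift simulation using takt time, shift time and
# worker skill. Skill 1.0 = nominal pace, 0.5 = twice as slow.
def simulate_shift(takt_time_s, shift_time_s, workers):
    """Estimate total units produced in one shift across all workers."""
    produced = 0
    for skill in workers:
        effective_takt = takt_time_s / skill
        produced += int(shift_time_s // effective_takt)
    return produced

# 60 s takt, 8 h shift, one nominal worker and one slower worker.
print(simulate_shift(60, 8 * 3600, [1.0, 0.8]))  # 480 + 384 = 864 units
```

A report against production targets would then compare this estimate with the target quantity for the shift.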
The Data Collection Framework (DCF) component collects data from shop floor (field devices, sensors, and controllers) and enterprise resource planning (ERP) systems using data adapters for different use cases. DCF can be used at production/assembly lines to have an overview on the collected data from sensors and workstations. The main function of the component is data collection from systems, data storage into databases if needed, and for engineers/supervisor to review and take appropriate action. It should be noted that some of the data adapters (e.g., ERP adapters) may need to be configured and tested on a use case basis.
Data Transfer and Communication within DCF and Database requires Python installed and some Libraries (opcua, paho.mqtt, pymongo, pandas, json, flask, requests, cx_Oracle, hbdcli)
– Windows 7 or 10
– x86 64-bit CPU (Intel / AMD architecture)
– 4 GB RAM
– 5 GB free disk space
The component requires authentication from server/database before connecting and collecting ERP or shopfloor data and storing the data in database.
Devices connected to the same local network but on different host addresses can be connected via MQTT and OPC-UA.
The DCF component will be deployed in Docker, and relevant instructions will be provided.
After the necessary connection configuration has been specified, the DCF module monitors the temperature and pressure readings through an OPC-UA server (left image). If the temperature or pressure exceeds the allowed limit, the module logs the information (e.g., time, value, description) in the database (right image). These parameters can be changed according to the use case.
The DCF component can be used at production/assembly lines to collect data from workstations/sensors and apply event processing. For instance, if more time than expected is being consumed to complete a task at a specific workstation, this activity can be monitored and the relevant data logged in the database for engineers/supervisors to review and take appropriate action.
This component creates user interfaces based on the information collected from the production-line devices and the user's profile. In this way, task- and operation-specific user interfaces are composed for the user. Such interfaces could, for example, display a task-specific work description (e.g., a description of an assembly operation) to users and let the user confirm completion of a task for interaction with external components. It should be noted that, where applicable, other interfaces with different functionalities may have to be developed based on the requirements.
Device capable of using web-based applications (e.g., PC or laptop)
Private cloud (meaning in pilot premises)
ADIN can be used by workers in an assembly line to assist them with their tasks, giving them the specific and relevant information needed to fulfil each duty.
It can also be used in collaborative tasks with cobots, where the worker receives instructions on the steps of the collaboration task.
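A rough sketch of what composing a UI from task data and a user profile could look like. The widget types, profile keys and roles are invented for this example and are not ADIN's actual data model:

```python
# Hypothetical UI composition from task data and a worker profile.
def compose_ui(task, profile):
    """Select UI widgets based on the task and the worker's profile."""
    widgets = [{"type": "instructions", "text": task["description"]}]
    if profile.get("role") == "operator":
        # Operators get a button to confirm task completion, which external
        # components could listen for.
        widgets.append({"type": "confirm_button", "label": "Task complete"})
    if profile.get("language") != "en":
        widgets[0]["translate_to"] = profile["language"]
    return widgets

ui = compose_ui(
    {"description": "Mount bracket A onto frame B"},
    {"role": "operator", "language": "en"},
)
print([w["type"] for w in ui])  # ['instructions', 'confirm_button']
```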
Human-Centered Process Optimization based on Reinforcement Learning (RL); its main functionality is to provide RL logic wrapped in ROS2 packages.
Requirements for real-time responsiveness depend on the application. However, since reinforcement learning optimizes the next action rather than the current one, real-time responsiveness requirements are relaxed.
Platform: Ubuntu, macOS, Windows.
Requirements: ROS2, Docker
64-bit system capable of running: Ubuntu, macOS, Windows.
High performance PC/Cloud (good CPU and GPU) for training models.
E.g., 10+ cores, 64+ GB RAM and an RTX 2080 Ti for training (depending on the application and model).
When deployed on-premises, a firewall should be enough. If deployed in the cloud, work is required to ensure a secure connection between the cloud and the production equipment/PC.
No privacy threats have been identified.
Private cloud/PC near robot.
Deployment instructions for the component can be found on a private access repository.
The component will have multiple user interfaces:
– A common user interface (dashboard) for developers and end-users containing data visualization, selected actions, and other diagnostics.
– For developers, additionally, a GUI for YAML-file system configuration and all available ROS2 tools for visualization and diagnostics.
The component can be used to optimize a work-cell process with reinforcement learning. Example: optimize the process control of a vibration feeder, such that an element is always available for a robot to pick up.
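As a toy illustration of the kind of optimization loop such a component wraps in ROS2 (the component's real interfaces, state space, and algorithm differ), a minimal tabular Q-learning sketch for a feeder-like process might look like this. The environment dynamics and rewards below are invented for illustration only:

```python
import random

# Toy Q-learning sketch of the vibration-feeder example: the agent learns
# which vibration setting keeps a part available for the robot to pick.
# States, actions, and transition probabilities are invented for illustration.

STATES = ["no_part", "part_ready"]
ACTIONS = ["vibrate_low", "vibrate_high"]

def simulate(state, action):
    """Invented environment: high vibration usually produces a part."""
    if action == "vibrate_high":
        next_state = "part_ready" if random.random() < 0.8 else "no_part"
    else:
        next_state = "part_ready" if random.random() < 0.3 else "no_part"
    reward = 1.0 if next_state == "part_ready" else 0.0
    return next_state, reward

def train(steps=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy environment."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = "no_part"
    for _ in range(steps):
        action = (random.choice(ACTIONS) if random.random() < eps
                  else max(ACTIONS, key=lambda a: q[(state, a)]))
        nxt, reward = simulate(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
    return q

if __name__ == "__main__":
    q = train()
    # The learned policy should prefer high vibration when no part is ready.
    print(max(ACTIONS, key=lambda a: q[("no_part", a)]))
```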
The component is based on a generic add-on force control for classical industrial and/or collaborative robots. An innovative force-sensor-based strategy is used to fit together two or more parts that require a snap connection. The component is a ROS-based control approach.
– Low latency required
– The trained assembly skills can be scaled in time and are primarily limited by the force control performance of the robot (F/T sensor)
– ROS framework with ROS control (kinetic or melodic)
– FZI Custom extension of ROS Cartesian Motion, Impedance and Force Controllers
– FZI Custom wrappers for external robotic sensors – Robot ROS driver (e.g. ROS UR)
– TensorFlow 2.1 with python 2.7
– Robot with wrist force-torque sensor mounted or integrated
– Dedicated PC (e.g. an i7 shuttle PC with 8 GB RAM)
The component should run on a separate network without access to the public internet or to any other network not authorized to use it (ROS1 security)
No specific privacy requirements, no personal information logging
Internal development, deployment instructions only upon (approved) request
Text configuration files
Any robot which supports ROS control and can measure end-effector forces and torques (intrinsic or integrated)
Force-based assembly tasks that require difficult snap fitting of parts by a robot.
Pilot cases: Siemens use case 1
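The snap event the force-sensing strategy has to recognize can be illustrated with a crude threshold rule on the wrist F/T signal: insertion force rises steeply, then drops suddenly when the parts click together. This is a hand-written stand-in for the component's actual learned skill, with invented threshold values:

```python
def detect_snap(forces, rise=15.0, drop=8.0):
    """Detect a snap event in a force trace (N): a sharp rise while pressing,
    followed by a sudden drop when the parts click together.
    Thresholds are illustrative, not tuned values from the component."""
    peak_seen = False
    for i, f in enumerate(forces):
        if f >= rise:
            peak_seen = True
        elif peak_seen and f <= drop:
            return i  # index where the snap (force drop) occurred
    return None  # no snap detected in this trace

if __name__ == "__main__":
    trace = [2, 5, 9, 14, 18, 21, 6, 3]  # synthetic insertion-force profile
    print(detect_snap(trace))  # -> 6
```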
Task manager for safe and efficient human-robot interaction
– Real-time responsiveness is fundamental for the task scheduling to work safely and properly. Supervision of the robot pose requires at least 10 checks of the environment per second, so that the robot can react accurately if there is any obstacle in its way
– Sensor information (in particular depth information) needs to be as up-to-date as possible
– ROS1 Framework
– GPU-Voxels
– FZI Custom extension of ROS Cartesian Motion, Impedance and Force Controllers
– FZI Specific Extension of the FlexBE ROS package or FZI Behaviour-Tree Implementation for Task Modelling and Scheduling
– FZI Custom ROS wrappers for external robotic sensors
– FZI Shared Workspace (ROS application of GPU-Voxels) for human-robot collaboration
– FZI Robot Collision Detection ROS package
– FZI Human Pose Prediction and Tracking software (optional)
– Robot ROS Driver
– (Depth) cameras with a fast image update rate
– Combination of several sensors (one is not enough)
– 1 shuttle PC for robot control with real-time optimization (low latency)
– 1 additional PC with GPU for more computationally intense tasks (i.e. collision avoidance, human detection)
Run on a separate network (ROS1 security) without access to the public internet or to any network not authorized to use it
No specific privacy requirements. No personal information, camera or 3D data logging
Any robot with ROS driver, URDF description and real time joint angles
Efficient human-robot collaboration on the shop floor, where the robot needs to fulfil tasks in the proximity of the worker.
Pilot use case: Siemens Use Case 1
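The 10-checks-per-second supervision requirement amounts to a fixed-rate loop that compares the latest robot pose against the current environment snapshot. A schematic version, with placeholder 2-D geometry instead of the component's GPU-Voxels-based collision checking, might look like this:

```python
import time

def check_collision(robot_pose, obstacles, safety_margin=0.5):
    """Placeholder check: flag any obstacle within safety_margin (m).
    The real component uses 3-D voxel data, not 2-D points."""
    x, y = robot_pose
    return any((x - ox) ** 2 + (y - oy) ** 2 < safety_margin ** 2
               for ox, oy in obstacles)

def supervision_loop(get_pose, get_obstacles, react, rate_hz=10, cycles=None):
    """Run collision checks at a fixed rate (>= 10 Hz per the requirement)."""
    period = 1.0 / rate_hz
    n = 0
    while cycles is None or n < cycles:
        start = time.monotonic()
        if check_collision(get_pose(), get_obstacles()):
            react()  # e.g. slow down or stop the robot
        n += 1
        # Sleep only for the remainder of the cycle to hold the rate.
        time.sleep(max(0.0, period - (time.monotonic() - start)))

if __name__ == "__main__":
    hits = []
    supervision_loop(lambda: (0.0, 0.0),
                     lambda: [(0.3, 0.0)],
                     lambda: hits.append(True),
                     rate_hz=100, cycles=3)
    print(len(hits))  # -> 3
```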
– Inputs expected at 10 Hz; outputs between 2 and 10 Hz
– Lower frequencies will severely affect safety and acceptability
– ROS1 framework
– Google Cartographer ROS
– Move_Base and/or Move_Base_Flex ROS packages
– AGV ROS driver
– External ROS sensor drivers (cameras, lasers)
– OpenPose
– Wheel Odometry (ROS Topic)
– Lidar (for example, SICK)
– Intel RealSense and/or 2D camera
– Dedicated PC (Intel i5, high-end GPU)
Operates inside the mobile robot or over a secured Wi-Fi connection (no off-premises connection required)
No personal information logging
Present: internal development, available on approved request
Future: public access repositories
– Text configuration files
– (optional) GUI
– Specific PC (to be embedded in a compatible AGV)
– Any AGV with ROS driver
Mobile robot evolving in an industrial plant or a public area with people.
Pilot use case: Bosch use cases 1 and 2
Graphical front end (GUI) to program new robotic applications by quickly creating new control sequences based on ROS tools. The tool helps to develop or change collaborative robotic applications, gives monitoring feedback on the status of the process, and can be used to model different tasks, as well as the interaction between robot and human, transparently. It is an alternative to SMACH and FlexBE, using Behavior Trees.
No real time responsiveness required
HTML editor to control the functionalities of the task-programming tool
GUI: Any device that allows mouse-like controls
Any scenario which involves programming of robotsPilot use cases: Siemens use cases 1 and 2, Bosch use case 1
This component is used to determine whether the chosen robot trajectory and speed are safe, and whether the required separation distance has been chosen adequately and can be covered by the sensor configuration. (It calculates the size of the required separation distances for robots that use the operating mode speed and separation monitoring.)
Trajectories should be checked as early as possible to minimize the delay of execution. Ideally, precomputed trajectories are validated in advance.
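For intuition, the separation distances this component computes follow the speed-and-separation-monitoring idea standardized in ISO/TS 15066. A simplified, constant-speed version of the protective separation distance formula can be written out as below; the example parameter values are invented, and the component's actual, optimized calculation is more sophisticated:

```python
def protective_separation(v_h, v_r, t_r, t_s, braking, C=0.0, Z_d=0.0, Z_r=0.0):
    """Simplified protective separation distance in the spirit of
    ISO/TS 15066 speed-and-separation monitoring (constant speeds assumed):

        S_p = v_h*(t_r + t_s) + v_r*t_r + braking + C + Z_d + Z_r

    v_h: human approach speed (m/s); v_r: robot speed toward the human (m/s);
    t_r: system reaction time (s); t_s: robot stopping time (s);
    braking: distance the robot travels while stopping (m);
    C: intrusion distance (m); Z_d, Z_r: human/robot position uncertainty (m).
    """
    return v_h * (t_r + t_s) + v_r * t_r + braking + C + Z_d + Z_r

if __name__ == "__main__":
    # Example numbers only; real parameters come from the risk assessment.
    print(round(protective_separation(1.6, 0.5, 0.1, 0.3, 0.08, 0.1, 0.05, 0.02), 3))
```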
Currently, Visual Components together with the IFF Safety Planning Tool is required for setting up the cell layout; in the future, other tools for this process might be available. To activate all features and use optimized calculations, a valid license can be purchased from Fraunhofer IFF. The component runs as a Linux Docker container on Linux and Windows hosts.
PC, no special performance features
no known issues
no known issues (no cameras, no collection or processing of personal data)
PC next to robot cell or server. Other options possible (e.g. Private cloud).
Deployment instructions will be available on the Shop4CF Docker Registry (docker.ramp.eu)
No user interface available.The REST API is documented using Swagger.
When operating a robot cell that uses speed and separation monitoring for safety purposes, you have to check whether a given trajectory is safe. If the trajectories are fixed, or worst-case trajectories can be defined, the operator can check them during the design phase (e.g. using the IFF Safety Planning Tool). If trajectories can change (e.g. when using dynamic motion planning), the ASA component allows you to check whether a trajectory is safe and whether the separation distance can be monitored by the sensor configuration. If a trajectory is not safe, the user can calculate a different one or reduce the robot’s speed until all safety conditions are met. The ASA component is utilized in Siemens Use Case 1, where collision-free trajectories are calculated at run-time.
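Since the component exposes its functionality as a Swagger-documented REST API, a caller would typically POST a trajectory description and read back a verdict. The endpoint path and payload fields in this sketch are invented for illustration; the actual request format must be taken from the component's Swagger documentation:

```python
import json
from urllib import request

# Hypothetical sketch of calling a trajectory-validation REST API such as
# the one the ASA component documents via Swagger. The "/check" path and the
# payload field names are invented; consult the real API documentation.

def build_check_payload(waypoints, speed):
    """Assemble a JSON payload describing a trajectory to validate."""
    return json.dumps({
        "trajectory": [{"joints": wp} for wp in waypoints],
        "speed": speed,
    }).encode("utf-8")

def check_trajectory(base_url, waypoints, speed):
    """POST the trajectory to the (hypothetical) validation endpoint."""
    req = request.Request(base_url + "/check",  # invented path
                          data=build_check_payload(waypoints, speed),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp).get("safe", False)

if __name__ == "__main__":
    print(build_check_payload([[0, 0.5, 1.0]], 0.25))
```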
The review of the risk analysis supports a safety expert in identifying hazards and estimating risk. The responsible human designer is guided through the formalized process of identifying new hazards based on identified or manually captured system changes (e.g. part changes, including geometry and payload; robot changes, including speed, reach and tooling; environmental changes, including new tables, fencing, etc.). The application highlights where the existing risk estimation requires updates.
Bidirectional network access from the RA server to the FIWARE server (if not used as a standalone tool)
Port forwarding, firewall configuration
Secret injection via files or environment variables
Container runtime, e.g. Docker
Server: about 50 MB free RAM / < 1GB disk space.
End-user device: min. 1280×720, ideally 1920×1080 screen, modern web-browser, about 500MB free RAM
A production deployment must use an HTTPS reverse proxy. Secrets must be generated and injected securely. Network access should ideally be limited. Additional requirements apply depending on the threat model; e.g., when using FIWARE integration and local user management with self-registration enabled, the server must be located inside a secure network (at the same permission level as the FIWARE server) because it is granted lateral access to the FIWARE server (alternatively, use authentication via an identity server for managing trust).
Process-critical data could be part of the data sent between client and server, which could put partners’ data at risk.
Gateway, private cloud (meaning in pilot premises), cloud, etc.
Deployment instructions are part of the overall documentation; they are included in the container and available as a separate PDF.
Browser-based interface with multiple tabs, available in German and English (with the possibility to add further languages). Documentation is included in the container and available as a PDF; for a video, see below.
The server host is unrestricted. The end-user device should preferably be laptop or PC (due to size and amount of information on the screen).
See ”Main functions”. Additionally, in the FIWARE configuration monitoring mode the RA component will track the resource assignment to the Process on FIWARE server and either automatically communicate previous approval of the configuration or facilitate review by the safety expert.
The aim of the FLINT platform is to facilitate the incorporation of current and future wireless IoT devices (sensors/actuators) in a factory or shop-floor setting, as well as the required local wireless IoT communication infrastructure to connect such devices (e.g. LoRa gateways, BLE gateways). This component requires horizontal integration. On the left-hand side, it makes use of adapters to interact with wireless IoT devices and long-range wireless communication equipment. On the right-hand side, after performing the required data transformations, it either represents the IoT device as an LwM2M-compliant device that can interface in a standardized way with an LwM2M back-end platform (for instance the open-source Leshan platform), or delivers the data in a suitable format to a broker (e.g. the FIWARE context broker).
– Dependency on the data formats used by the IoT devices / wireless IoT infrastructure, which requires the one-time design of suitable input/processing adapters. A similar dependency applies to output adapters in case there is no processing to LwM2M.
– Docker: adapters are realized as Docker containers. Adapters can be implemented in any language.
– MQTT broker: for the information exchange between adapters
– LwM2M processing adapters: dependency on Anjay, a C client implementation of LwM2M
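As a rough illustration of what such a processing adapter does (the field names and entity model below are assumptions for illustration, not FLINT's actual formats), a raw sensor payload received over MQTT could be transformed into a FIWARE NGSI-v2 style entity like this:

```python
import json

# Sketch of a FLINT-style processing adapter: a raw payload from a wireless
# sensor is transformed into an NGSI-v2 style entity for a FIWARE context
# broker. The input fields and the entity model are illustrative assumptions.

def to_ngsi(device_id: str, raw: bytes) -> dict:
    """Convert a raw JSON sensor reading into an NGSI-v2 style entity."""
    reading = json.loads(raw)
    return {
        "id": f"urn:ngsi-ld:Sensor:{device_id}",
        "type": "Sensor",
        "temperature": {"type": "Number", "value": reading["temp"]},
        # Default battery level when the device does not report one.
        "battery": {"type": "Number", "value": reading.get("bat", 100)},
    }

if __name__ == "__main__":
    entity = to_ngsi("node-17", b'{"temp": 21.5, "bat": 87}')
    print(entity["id"], entity["temperature"]["value"])
```

In a real deployment this function would run inside a Docker container, subscribed to the adapter-internal MQTT broker and posting the resulting entity to the broker's REST endpoint.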
Server/cloud platform supporting deployment/management of Docker containers.
Currently, the internal communication between the adapters and the MQTT broker is not secured. However, most deployments are done on a secure company network, so the security risk should be limited.
Privacy threats will depend on the type of data that is collected by the IoT devices.
Deployment instructions can be found on https://github.com/imec-idlab/flint. Customization will be needed depending on the IoT devices/infrastructure to be used/deployed.
Dashboard for monitoring data received from / sent to IoT devices (see screenshot below). However, the user interface is not the core of the component, as it can operate without any UI.
The aim of the platform is to be extensible to support a wide range of wireless IoT devices and technologies.
Industrial monitoring, asset tracking, environmental monitoring, etc.
Supporting human workers on the shop floor by giving them real-time wireless control over aspects such as process management, interactions with robots, collecting sensor data.
Linux OS, GNU toolchain, Xilinx toolchain
SDR board (e.g. Xilinx ZC706 + FMCOMMS2/3/4, or another compliant board; see https://github.com/open-sdr/openwifi)
WPA2 encryption is available and should be sufficient. Of course, a network firewall is necessary.
All data transmitted over the same WiFi network can be seen by all connected clients, so SSL encryption might be necessary.
All information and source code is available on https://github.com/open-sdr/openwifi
– Developer: interact with openwifi through the Linux WiFi driver (e.g. ath9k), and interface with openwifi-specific components via a command-line program (“sdrctl”)
– User: openwifi acts as a regular WiFi access point
All 802.11 WiFi-enabled devices are supported (smartphones, tablets, laptops, embedded WiFi hardware, WiFi sensors, …)
The Wi-POS system is able to accurately determine the position of AGVs, robots or equipment on the shop floor. Positioning workers is also possible, but might be difficult for privacy reasons.
Its goal is to enhance the safety of humans and to support human workers on the shop floor (relieving repetitive and hard tasks such as moving equipment).
Standalone software (full-stack) is deployed on anchor nodes and mobile tags.
Dedicated embedded hardware is needed.
There is no encryption on the wireless sensor network, so positions could be retrieved. The server that collects the data from the hardware should be protected by a network firewall.
If the position of humans is logged, then privacy concerns might arise.
A private wireless network is set up by the Wi-POS system (on-site).
Deployment instructions can be obtained on request. The instructions will vary depending on the location and the use case. System should be plug-and-play.
Not available. Measured coordinates are only pushed to FIWARE context broker.
Only dedicated (proprietary) hardware is supported for now. Other UWB-enabled hardware (e.g. the new iPhone) might be supported in the future.
Determining the position of AGVs on the shop floor to allow navigation through the factory.
Locating important equipment on the shop floor.
Defining safe zones around robots to avoid human injuries.
Automated inventory management.
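UWB positioning systems like the one described above typically estimate a tag position from ranges to fixed anchor nodes. A textbook linearized trilateration in 2-D (not Wi-POS's actual, proprietary algorithm) illustrates the idea:

```python
def trilaterate(anchors, dists):
    """Estimate a 2-D tag position from three anchor positions and measured
    distances. Textbook linearized solution for illustration only; this is
    not Wi-POS's actual algorithm, and real systems handle noise and use
    more anchors."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    # Linearize by subtracting the first range equation from the others.
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1  # zero if the anchors are collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

if __name__ == "__main__":
    import math
    anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
    tag = (3.0, 4.0)
    dists = [math.dist(tag, a) for a in anchors]
    print(trilaterate(anchors, dists))  # approximately (3.0, 4.0)
```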
Code will not be publicly available.
This component is a platform that groups adapters to connect different kinds of input to different kinds of output. For example, it could take input from a positioning engine (the location of an AGV, in x,y coordinates), convert the data into the correct format and post it to a PubSub server (e.g. a FIWARE context broker). While the M3RCP component focuses more on adapters for wireless technologies, this component is more mature regarding FIWARE integration and already provides many adapters for home-automation appliances (e.g. Philips Hue).
Client: Linux OS, JVM
Backend: environment able to deploy Docker images, e.g. Kubernetes is used internally
Server/cloud hardware, not specific.
Components are best placed behind the company firewall.
No specific privacy threats.
Gateway, on-premise cloud, private/public cloud
Deployment instructions can be obtained on request. The instructions will vary depending on the location and the use case.
Client/backend do not have user interfaces.
– Client: Linux OS required
– Dashboard: Laptop, PC, tablet, phone
DYAMAND could be used to interconnect sensors/devices/machines/robots in the use cases, allowing seamless communication between them.
Supporting human workers in predicting or preventing potential failures and incidents; supporting human workers in planning services and repairs.
(Non-)functional requirements, especially with respect to real-time responsiveness
Linux, Windows, MacOS
Our app consists of several components (Docker images). To use it comfortably, we suggest at least 16 GB RAM and 60 GB of disk space (in order to store OracleDB, InfluxDB, Kafka, Orion). OracleDB storage grows over time, at an estimated rate of roughly 0.045 MB per operation (an unstable value).
Due to Volkswagen security policies, we exchange data between microservices with JWT tokens. All users who want to use the app must be logged in via an LDAP server. In development mode, a test user can be used. All addresses and ports are protected. The app runs in an internal network without access to external networks, but this is not required.
LDAP authentication is required.
Anywhere Docker images can be hosted – there are no limitations.
Instructions can be found in our Bitbucket repository, together with the whole project.
Provided upon request.
The application has a graphical user interface for viewing current waveforms and checking potentially anomalous waveforms. The user can view, sort and filter the results, and can also report an anomaly event manually if they deem it necessary. The application’s capabilities are limited to viewing the waveforms and the metadata derived from them. Data is obtained from the backend using a REST API and websockets.
Currently two potential use-scenarios for this component have been identified:
1. Prediction of failures of a car body lift used in production process. Identification of repair time – which should result in reducing unnecessary interventions by human workers and, at the same time, in preventing future failures.
2. Prediction of repair and maintenance (e.g., cleaning) interventions in parts of the paintshop. Detection of dependencies between observed changes in measurements and quality of paint structure. Again the purpose of this scenario is to reduce unnecessary interventions by human workers and, at the same time, to prevent failures.
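A very reduced illustration of the anomaly-flagging idea behind both scenarios: compare a summary statistic of the current waveform (here its RMS) against historical values, and raise a flag when it deviates too far. The real component's model is more sophisticated; the thresholding rule and numbers below are invented:

```python
import math

def is_anomalous(waveform, history_mean, history_std, z_thresh=3.0):
    """Flag a current waveform whose RMS deviates more than z_thresh
    standard deviations from the historical RMS. A simple stand-in for
    the component's actual anomaly model, for illustration only."""
    rms = math.sqrt(sum(x * x for x in waveform) / len(waveform))
    return abs(rms - history_mean) > z_thresh * history_std, rms

if __name__ == "__main__":
    normal = [1.0, -1.0] * 50                 # RMS = 1.0
    spiky = [1.0, -1.0] * 45 + [8.0] * 10     # RMS well above 1.0
    print(is_anomalous(normal, 1.0, 0.05)[0])  # -> False
    print(is_anomalous(spiky, 1.0, 0.05)[0])   # -> True
```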
Available upon request
Supporting human workers at production lines by monitoring quality.
The input requires a template image and a test sample.
OpenCV, Python, Flask. The component can be run as a Docker image, in which case no additional installations are required.
A standard PC has enough computing power for this component, as no machine-learning strategy is used.
The component’s API should be accessible only on a local or private network.
A PC at the operator stand (in the pilot); however, the component can also run on a remote server.
FIWARE must already be up and running. To deploy the component, you can run the Docker image directly, or go to the project folder and run `docker-compose up -d`.
There is no user interface – other components should call the API functions.
In the Bosch pilot, AR-CVI was used as the GUI.
Dedicated application for laptop or desktop PC.
Supporting human workers on production line.
AR (augmented reality) application precision of measured distances and location of simulated objects – we expect the AR application’s accuracy to be between 1 cm and 5 cm per 5 m.
Simulation module based on “LogABS” for Windows
AR application – native app for Android or iOS
Intel i7 CPU with 16GB RAM or better for LogABS
Mobile device compatible with AR Core or AR Kit for AR application
No specific security requirements.
No specific privacy requirements.
PC and mobile device.
Instructions can be found in the code repository, together with the whole project.
Otherwise, they are provided upon request.
Windows GUI for planning and executing logistic simulation, AR app for planning factory locations and equipment.
Laptop, PC, mobile device compatible with AR Core or AR Kit
Planning new, safe AGV routes for a redesigned factory layout, e.g. when a new product is about to enter production and the factory needs to be redesigned to accommodate new storage spaces, new machinery, etc.
Internal repo, available upon request
Assistance and training for operators during customised product assembly process, and maintenance operations including recognition of objects, sequence of operations and AR guidance to operators.
– Mobile devices should be compatible with ARCore/ARKit frameworks.
– Wi-Fi connection is required at the shop floor.
– Minimum brightness levels are required for the AR vision algorithms to work correctly.
– If workers are required to wear gloves, specific mobile device models will be required.
WebXR, Django, ARCore, Arkit, Microsoft Mixed Reality toolkit
Mobile devices/HoloLens. A server
The component needs account management. One is already implemented, but if it is to be integrated into RAMP or another platform, further adjustments may be needed.
A private cloud provided by TECNALIA with access to the pilots. If pilot sites need to deploy the component on their premises, this can be done using Docker containers.
In RAMP marketplace and upon request.
– Interface for the developer: an editor to create AR content in an easy way to guide operators in the shop floor.
– Interface for the operator (customised via the editor): the AR guidance to be visualised with mobile devices/HoloLens, through different steps and different objects (2D/3D objects, images, video, documents, animations, etc.).
Laptop + mobile devices/HoloLens
AR guidance to operator during the assembly of the base plate in collaboration with a robot in the Siemens Use case.
– The shop floor manager/developers add a new manual to the system using the editor, adding all the multimedia assets (3D files, PDFs, videos, photos, etc.) and defining the entire manual step by step.
– Once the manual has been added, the shop floor manager defines through the editor what type of trigger will activate the augmented reality display. In this case, a task for the HoloLens is raised through the Context Broker to show the different steps of the manual.
– The manager then publishes the manual from the editor, so that it can be consumed by the HoloLens, defining the types of users and roles that will have permission to do so.
– From that moment, any worker in the plant with permissions who is working on the assembly of a base plate will be able to trigger, with his or her device, the augmented-reality visualization of that manual, based on the status of the collaboration with the robot supporting him or her.
Not available. As stated in the CA, the tool is not open source so no code will be provided.
Supporting human workers on the shop floor: Tele-assistance in maintenance for long distance workers
An Internet connection (Wi-Fi connection recommended).
Android app: ARCore framework is needed to use the AR functionalities.
Browser client: right now, the best/recommended browser is Firefox.
The required bandwidth depends on the resolution of the real-time stream; for higher resolutions, a Wi-Fi connection is recommended.
Server side: NodeJS, Websockets, Express
Client side: ARCore
Server side: UNIX/Linux environment, with enough bandwidth for the number of users to use it
Client side: a smartphone compatible with ARCore; the browser can't be Chrome (Firefox recommended)
The component needs account management. One is already implemented, but if it is to be integrated into RAMP or another platform, further adjustments may be needed. The server side must use an SSL certificate so that communication data is sent over the HTTPS protocol.
Right now, the authentication part only requires the client application credentials to log in, so it is necessary not to share them with unknown users.
Android app client
Laptop + mobile devices/HoloLens
UC2 in Arcelik for equipment maintenance.
A worker who needs support with any type of physical component or machine calls an expert colleague in the field. To do so, the worker opens the application installed on his smartphone and calls the previously connected expert. The application connects with the expert, sharing the back camera of the worker and the front camera of the expert. The worker scans the component or machine area, and the expert draws on the mobile screen, creating indications in a drawing mode for the worker and thereby giving augmented-reality support. When the support is finished, both exit the application.
It will allow workers to be trained in the operation of, for example, a machine or a manufacturing line through the use of virtual reality. With this web tool it will be possible to create and consume immersive VR experiences (with glasses) oriented to training.
Since 360° videos are large files, the web creator tool will require a PC/laptop with a dedicated (and capable) graphics card. We will specify this further.
A Wi-Fi connection is required during training-experience creation and consumption.
The PC/laptop and VR HMD should have a WebXR-compatible browser.
WebXR, Django, WebGL
PC/laptop and VR HMDs
PC/Laptop + VR HMDs
Training the operators when using new machines. At this moment we do not have any pilot interested in the component.
MPMS includes the functionality to design processes, describe agents, and execute the processes in an automated way by assigning activities to agents. It provides orchestration of activities at a global level, i.e., covering all work cells/production lines of a factory.
As a logical functional component, MPMS shall be able to:
Automatically execute a sequence of activities
Monitor agents’ availability
Monitor agents’ performance, including at least estimated task completion time and actual task completion time
Monitor process current state
Provide right information to agents to perform a task
Handle exceptions on agent, task and process level by halting/resuming their activities and initiating out-of-normal action processes
(Re-)allocate appropriate agents to perform a task based on abilities, skills, authorizations, cumulative workload, overall manufacturing system status and availability
Re-allocate agents in response to external events such as safety alerts or sensor failures.
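The allocation behaviour listed above can be condensed into a scoring function over candidate agents: filter by availability, skills and authorization, then prefer the lowest cumulative workload. The data model and criteria below are invented for illustration and are not MPMS's actual allocation logic:

```python
def allocate(task, agents):
    """Pick the best available agent for a task: the agent must have the
    required skills and authorization; among the candidates, the one with
    the lowest cumulative workload wins. Illustrative sketch only."""
    candidates = [
        a for a in agents
        if a["available"]
        and task["skills"] <= a["skills"]            # skill superset
        and task["authorization"] in a["authorizations"]
    ]
    if not candidates:
        return None  # would trigger an out-of-normal / re-allocation process
    return min(candidates, key=lambda a: a["workload"])["name"]

if __name__ == "__main__":
    agents = [
        {"name": "operator-1", "available": True, "skills": {"assembly"},
         "authorizations": {"line-A"}, "workload": 3},
        {"name": "cobot-2", "available": True, "skills": {"assembly", "welding"},
         "authorizations": {"line-A"}, "workload": 1},
    ]
    task = {"skills": {"assembly"}, "authorization": "line-A"}
    print(allocate(task, agents))  # -> cobot-2
```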
As a software technical component, MPMS shall be able to:
Provide a modeler application to model processes
Provide a process engine to automatically enact process models
Provide tasklist applications to deliver tasks to human operators
Support integration to custom UIs as tasklist applications
Provide integration to local components to deliver tasks to robotic agents
Support various platform environments
Support various DBMS
Be deployable both on premises and in the cloud
Provide security/authorisation mechanisms
Integrate to middleware/context broker and other components
Support web services
Support REST/JAVA APIs
Support SOA/Interoperability (NF)
Be robust (NF); be runtime-scalable (NF)
Be easy to use by process modelers, developers and end users (e.g., human operators) (NF)
MPMS is built on Camunda Platform 7.15.0, Community Edition and runs in every Java-runnable environment. It can support the following environments:
Container/Application Server for runtime components
Apache Tomcat 7.0 / 8.0 / 9.0
JBoss EAP 6.4 / 7.0 / 7.1 / 7.2
Wildfly Application Server 10.1 / 11.0 / 12.0 / 13.0 / 14.0 / 15.0 / 16.0 / 17.0 / 18.0
Databases MySQL 5.6 / 5.7
MariaDB 10.0 / 10.2 / 10.3 Oracle 11g / 12c / 18c / 19c
PostgreSQL 9.4 / 9.6 / 10.4 / 10.7 / 11.1 / 11.2 postgres:14-alpine (Docker image)
Microsoft SQL Server 2012/2014/2016/2017
Adminer:4.8.1 (UI for DB management) (Docker image)
Google Chrome latest
Mozilla Firefox latest
Internet Explorer 11
Java 8 / 9 / 10 / 11 / 12 / 13 (if supported by your application server/container)
Oracle JDK 8 / 9 / 10 / 11 / 12 / 13 IBM JDK 8 (with J9 JVM) OpenJDK 8 / 9 / 10 / 11 / 12 / 13
Openjdk:11.0.13-jre-slim (Docker image)
Windows 7 / 10
Mac OS X 10.11
Ubuntu LTS (latest)
The Camunda Community Platform is provided under various open source licenses (mainly Apache License 2.0 and MIT). Third-party libraries or application servers included are distributed under their respective licenses. Detailed info on licences is provided in T7.2.
For deploying MPMS on a local desktop PC, no special requirements are needed: a reasonably powerful processor, plenty of RAM and a standard graphics card are sufficient. The following specs should do:
Processor: Intel Core i7-7700 @ 3.60GHz / Intel Core i7-6700K @ 4.00GHz / Intel Core i7-7700K @ 4.20GHz / Intel Core i7-8700K @ 3.70GHz
Storage: SATA 2.5″ SSD (e.g. 256 GB). RAM: 32 GB DDR4-2133 DIMM (2×16 GB) (even 16 GB will not be a problem)
Graphics Card: Any modern standard graphics card
Monitor, keyboard and mouse are essential. Touchscreen for operators might be handy.
A laptop could also work:
Processor: Intel Core i7-7700HQ @ 2.80GHz / Intel Core i7-6770HQ @ 2.60GHz.
MPMS connects to DBs with password access.
Users of web applications have access with password.
Agents (and specifically human operators) should be described with due diligence with respect to privacy data.
MPMS shall be deployed on local PCs on premises. It can also be deployed on a cloud server (e.g., on TUE premises) but extra security is required.
Deployment instructions and user manuals can be provided upon request.
It can also be uploaded to RAMP if there is a specific repository.
Three different types of users:
Process modelers
They use the Modeler application to model manufacturing processes (with BPMN 2.0).
Developers
They turn the process models designed by the process modelers into executable process models (i.e., the ones that the Process Engine can interpret and enact).
Also, they can build custom applications (e.g. tasklists, cockpits, smartwatch apps).
(Human) Managers and operators (end-users)
Managers use the Cockpit/Dashboard and Admin applications (default or custom) to see the processes state and manage the users of MPMS
Operators use the Tasklist applications (default or custom) to receive tasks and provide input (e.g. task completion confirmation)
For each type of user, there are manuals and webinars to provide instructions.
Modeler runs on PC/Laptop (see SW/HW requirements above)
Process Engine runs on PC/Laptop (see SW/HW requirements above)
Web applications run on PC/Laptop/tablet/smartphones (additionally, a prototype tasklist application has been built for smartwatch)
MPMS can be used in any pilot/open call for:
process modelling (for bottleneck identification and enabling automated execution),
dynamic agent allocation,
automated process execution,
integration to other IS (e.g., ERP) for getting the right information and providing it to agents during execution,
process status monitoring,
task monitoring for job safety and quality of human operators,
re-allocation of agents when job safety and quality criteria are violated.
The component provides visual support in manual assembly and inspection tasks. The human worker is guided by the visualized instructions while performing the associated tasks. The component can project the instructions onto a surface or a screen. This provides a clean and structured working environment.
The FIWARE messages are checked at 2 Hz.
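Checking FIWARE messages at 2 Hz amounts to polling the context broker every 0.5 s and extracting the display instruction from the returned entities. A sketch with the standard NGSI-v2 `/v2/entities` query (the entity type `Task` and the `instruction` attribute are assumptions, not AR-CVI's actual data model):

```python
import json
import time
from urllib import request

# Sketch of polling a FIWARE Orion context broker at 2 Hz, as the
# requirement above implies. Entity type and attribute names are assumptions.

POLL_PERIOD = 0.5  # seconds -> 2 Hz

def extract_instruction(entity: dict):
    """Pull the display instruction out of an NGSI-v2 entity, if present."""
    attr = entity.get("instruction")
    return attr["value"] if attr else None

def poll_once(broker_url: str):
    """Fetch all Task entities and return their instructions."""
    with request.urlopen(broker_url + "/v2/entities?type=Task") as resp:
        return [extract_instruction(e) for e in json.load(resp)]

def poll_forever(broker_url: str, handle):
    """Fixed-rate polling loop; handle() displays each instruction."""
    while True:
        for instruction in poll_once(broker_url):
            if instruction:
                handle(instruction)
        time.sleep(POLL_PERIOD)

if __name__ == "__main__":
    sample = {"id": "urn:Task:1", "type": "Task",
              "instruction": {"type": "Text", "value": "Insert PCB"}}
    print(extract_instruction(sample))  # -> Insert PCB
```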
Ubuntu 18.04 or later (for running on Ubuntu)
Docker version 20.10.6 (Previous versions later than v19 are also supported)
Windows X Server (for running on Windows)
A screen with at least 1920×1080 (HD) resolution support
A projector [optional] with at least 1920×1080 (HD) resolution. Front-surface mirrors might be necessary for projecting down onto a table. High brightness is needed, according to the illumination of the environment (~4400 lumens).
A PC or laptop (Preferably a 4-core CPU with 8 GB RAM, 15 GB HDD space)
No specific security threats
The component should be executed in the assembly cell pc (local deployment). Necessary files can be mounted to the Docker container from the host machine or from a remote machine.
Currently in the public Github repository: https://github.com/emecercelik/ar-cvi/ Later the instructions will be available in RAMP.
The instructions are displayed in full-screen mode. The user can provide inputs to the component using the on-screen buttons, if these are defined with the display messages (templates). A screen view is provided below: instruction images, PCB quality-check outputs (provided by another component), a written instruction, and buttons (at the bottom). For a developer demonstration, refer to the following video: https://syncandshare.lrz.de/getlink/fiC2jAGB9vBsmaRqtVVmzEg4/ar-cvi_demonstration.mp4
PC, Laptop, Projector, Screen
Support at the assembly cell for the human worker by displaying manuals for an assembly or inspection task. The operator can provide inputs to start the inspection (performed by another component).
It supports process management by improving the interoperability of the system. It addresses the ability of the system to interoperate in a standard way.
In the case of OPC UA communication, the server must be configured and deployed. The Node ID (s) must be provided to be able to read the information in real time.
Node.js and/or Java, but it can also be developed with other programming languages.
Docker and Docker compose
Orion LD Context Broker
16 GB RAM, 10 GB HDD
Linux – Ubuntu
It wraps REST APIs in another descriptor; the component does not manage the security of the APIs it describes. The server that serves the descriptor will be secured through HTTPS and a certificate.
The component should be executed in a local PC on the shopfloor
We provide a docker container with a configuration file.
In this version, a command-line interface has been implemented. A graphical interface is expected to be developed for the next version.
A pilot or a SHOP4CF component owner who wants to expose some functionality to the external world for testing with external users (for instance, for open calls) builds a WoT (Web of Things) interface with our component, which wraps their component so that it can be used by third-party developers.
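Conceptually, wrapping a REST API for the Web of Things means generating a W3C Thing Description whose action forms point at the underlying endpoints. A minimal sketch of that mapping (the input format and the generated fields are simplified assumptions, not this component's actual configuration format):

```python
import json

# Sketch of wrapping a plain REST endpoint in a minimal W3C WoT Thing
# Description. The mapping rules here are simplified assumptions for
# illustration, not the component's actual configuration format.

def make_thing_description(name: str, base_url: str, actions: dict) -> dict:
    """actions maps an action name to its relative REST path."""
    return {
        "@context": "https://www.w3.org/2019/wot/td/v1",
        "title": name,
        "actions": {
            action: {"forms": [{"href": base_url + path,
                                "htv:methodName": "POST"}]}
            for action, path in actions.items()
        },
    }

if __name__ == "__main__":
    td = make_thing_description("QualityCheck", "http://example.local:8080",
                                {"startInspection": "/api/inspect"})
    print(json.dumps(td, indent=2))
```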
This open call is not active yet. But you can fill out this form to join our newsletter and be notified when the application process opens. It will only take you 5 minutes to be ahead of everyone else!
In the meantime, you can directly contact us at firstname.lastname@example.org. Thank you!