Questions & Answers

PERASPERA / FAQ – Frequently Asked Questions

(This FAQ and Web portal reflect the PSA consortium’s views. The EC and REA are not responsible for any use that may be made of the information it contains.)

 

Q1 [PERASPERA PSA]: What is PERASPERA?

A: PERASPERA (“Space Robotic Technologies”) is the PSA (Coordination and Support Action) of the H2020 SRC on space robotics. Its main functions are: elaborating an SRC roadmap and implementation plan; advising the Commission on the call documentation for the Operational Grants; contributing to the assessment of the progress and results of the Operational Grants; and supporting the general implementation of the SRC.

Q2 [COMPET-4-2016]: Where can the H2020 2016 call documents for the SRC Space Robotics be found?

A: The European Commission has published the final version of the 2016/2017 Space Work Programme on its Space homepage and in the Participant Portal under COMPET-4-2016-Space Robotic Technologies.

  • 2016-2017 Work Programme:

http://ec.europa.eu/programmes/horizon2020/en/h2020-section/space

  • Guidance documents for the COMPET-4-2016 SRC Space Robotic Technologies:

http://ec.europa.eu/DocsRoom/documents/13504/attachments/1/translations/en/renditions/native

  • Complete information for COMPET-4-2016 at the Participant Portal:

https://ec.europa.eu/research/participants/portal4/desktop/en/opportunities/h2020/topics/2241-compet-4-2016.html

Q3 [COMPET-4-2016]: Is it possible to present, or be part of, different proposals for the same H2020 topic? What are the admissibility and eligibility criteria?

A: Yes, it is possible to present, or be part of, different proposals addressing the COMPET-4-2016 topics. For more details about the admissibility and eligibility criteria, visit the Participant Portal:

http://ec.europa.eu/research/participants/docs/h2020-funding-guide/grants/from-evaluation-to-grant-signature/evaluation-of-proposals/elig_eval_criteria_en.htm

Q4  [COMPET-4-2016]: What is the budget dedicated by the EC to the SRC Space Robotic Technologies in the 2016 Call, and to the subsequent calls?

A: The indicative budget for the COMPET-4-2016 call is 18 M€.

For the moment there is no indicative budget for the 2018 call and future SRC calls; their indicative budgets and topics will be defined in future H2020 Space Work Programmes and their calls.

Q5 [COMPET-4-2016]: What are the opening date and deadline of the H2020 Space 2016 call?

A: The H2020 Space 2016 call opens on 10 November 2015, and the deadline is 3 March 2016.

Q6 [All OGs]: How will the different Operational Grants be coordinated among themselves as part of the H2020 SRC Space Robotic Technologies?

A: All partners of the Operational Grants and the partners of the PSA PERASPERA will later sign a Collaboration Agreement in order to ensure that results of the individual projects can be used to achieve the overall SRC objectives. A template for such a Collaboration Agreement is available on the web at:

http://www.h2020-peraspera.eu/wp-content/uploads/2015/12/Collaboration-Agreement-Space-Robotics-Technologies-SRC.pdf

Q7 [All OGs]: Will bids be considered non-compliant/non-responsive if they do not meet the requirements/objectives outlined in the Guidelines?

A: These guidelines are intended to assist applicants in preparing their proposals in the context of the Strategic Research Cluster. The supporting document provides indicative guidance for the applicants, in order to optimally align their proposals to the Strategic Research Cluster objectives.

In case a proposal chooses to deviate from what is expected from an Operational Grant (as outlined in the Guidelines), this choice should be well justified and explained.
A proposal that does not fully meet the expectations in the Guidelines will not automatically be declared out of scope; it will be evaluated by Independent Experts. In any case, it is up to the Expert Evaluators to assess the degree to which the expectations are met by the proposals and the impact of potential deviations.

Q8 [OG2] [OG3]: Is the path planning functionality included in OG2 (AF) or OG3 (CDFF)?

A: The path planning function is included in OG2. The OG2 AF will create a semantic model of its environment, describing its understanding of the rover’s situation with respect to surface-level hazards, objects, terrain and sites of interest, in order to (re)plan optimum navigational paths, and will implement the best possible course of navigation or action relative to the situation of the platform.

OG3 (CDFF) is in charge of building a 3D model of the environment, and the term “Navigation” here corresponds to the “N” in GNC: it provides the geometric/kinematic/dynamic state of a machine with respect to a reference frame (e.g. an inertial frame) or its local environment, through an estimation process that combines processed data from absolute and/or relative sensors with a priori information.
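For readers unfamiliar with the estimation process just described, the following minimal sketch (illustrative only; the function name, noise values and step sizes are hypothetical, not taken from the Guidelines) shows how data from a relative sensor can be combined with occasional absolute measurements in a scalar Kalman-style filter:

```python
# Illustrative only: a minimal 1-D estimator in the spirit of the "N" in GNC,
# fusing a relative sensor (e.g. wheel odometry increments) with an absolute
# sensor (e.g. a landmark fix). All numbers and names are hypothetical.

def fuse_step(x, p, delta, q, z=None, r=None):
    """One predict/update cycle of a scalar Kalman filter.

    x, p   -- current state estimate and its variance
    delta  -- relative-sensor increment (prediction input)
    q      -- process noise variance added by the prediction
    z, r   -- optional absolute measurement and its variance
    """
    # Predict: propagate the state with the relative measurement.
    x, p = x + delta, p + q
    # Update: correct with the absolute measurement, if available.
    if z is not None:
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # blend prediction and measurement
        p = (1.0 - k) * p        # reduced uncertainty after the update
    return x, p

# Example: a rover moves ~1 m per step; an absolute fix arrives every 3rd step.
x, p = 0.0, 1.0
for step, delta in enumerate([1.0, 1.1, 0.9, 1.0, 1.05, 0.95]):
    z = 1.0 * (step + 1) if (step + 1) % 3 == 0 else None
    x, p = fuse_step(x, p, delta, q=0.05, z=z, r=0.1)
print(round(x, 2), round(p, 2))
```

Note how the variance shrinks each time an absolute fix arrives, which is exactly why the CDFF is expected to blend absolute and relative sensor data rather than rely on either alone.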

Q9 [OG2] [OG3]: Is the Opportunistic Science capability included in OG2 (AF) or OG3 (CDFF)?

A: The task of identifying an object of scientific interest (by color, shape, composition, etc.) is included in OG3, but it is OG2 that takes the decision to go to the object of scientific interest and performs the planning/(re)planning.

Q10 [OG6]: Can a proposal for OG6 (VPFT) address only the orbital track test facilities?

A: No, OG6 must address both validation scenarios (planetary & orbital) with the necessary number of testbeds to fulfill all the requirements of both validation scenarios.

Q11 [OG6]: Shall OG6 (VPFT) perform every necessary test?

A: OG6 shall provide the test facilities, and also allocate the necessary resources to give the on-site support needed by the facilities and platforms for the validation of SRC activities. The validation tasks of the SRC technologies are outside the scope of OG6 (except for the above-mentioned on-site support).

Q12 [COMPET-4-2016]: Is it possible to make a proposal that addresses several OGs, for example OG3 and OG4?

A: It is possible to make a proposal that addresses two OGs, but it is not recommended: this situation creates difficulties in the evaluation process and decreases the probability of success.

Q13 [OG4]: Is the final deliverable of OG4 a complete, implemented hardware demonstrator sensor suite with all sensor elements mentioned on pp. 20-21 of SRC_Guidelines_Space_Robotics_Technologies (COMPET-4-2016).pdf? (“In detail, the reference implementations of the INSES shall include at least the following instruments on board of the robotic vehicle or spacecraft: ..”)

A: The expectation from this OG is indeed as outlined in the specified pages.
As mentioned above, deviations from this expectation should be justified and will be assessed by the Expert Evaluators.

As a matter of clarification, all types of sensors must actually be considered at this stage of the OG4 study. It is expected that:

  • the study will establish the relevance of each type of sensor for the two reference implementations;
  • the final delivery for each reference implementation shall include only the relevant sensors;
  • the study will lead to a “plug-and-play” (interface) solution for the different types of sensors.

It is assumed that the work will concern only the design of the Universal Interface Unit(s) (one for each reference implementation) since the different sensors will be COTS. They will be selected taking into account their respective mass, power and volume characteristics.

Most of the standardization work will therefore focus on the interface unit (data protocol, electrical, mechanical). However, mechanical standardization will also be considered for the different sensors (to be treated independently from the Interface Unit, since the sensors in most applications will be non-collocated).

Q14 [OG4]: Is the objective of OG4 to develop the unified mechanical, data and power interfaces for all identified sensors, including potential redesign of the sensors to allow for such interfacing with INSES?

A: Yes, as explained on p. 19. Sensor redesign may be applicable if necessary to meet the expectations in the Guidelines document.
The development of unified mechanical, data and power interfaces for all identified sensors is detailed hereafter:

  • a specific mechanical interface for the Universal Interface Unit – this Unit will unify all relevant sensors (for each reference implementation) from a data and power point of view
  • a specific mechanical interface for each type of sensor (or possibly one for several types)

This will allow the introduction of a flexible plug-and-play principle for sensor systems. It is not realistic from a budget point of view to redesign the sensors. As indicated above, the sensors to be included will be COTS.

As mentioned in the long version of the guideline on the PERASPERA website, adaptation of existing COTS features may be necessary.
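As an informal illustration of the plug-and-play principle described above (a sketch only; all class, method and sensor names are hypothetical and not taken from the Guidelines), a unified sensor contract behind an interface unit might look like this:

```python
# Illustrative only: a sketch of the "plug-and-play" principle behind the
# Universal Interface Unit. Class and method names are hypothetical and do
# not come from the Guidelines.

from abc import ABC, abstractmethod

class Sensor(ABC):
    """Unified contract every COTS sensor adapter must meet."""

    @abstractmethod
    def read(self) -> dict:
        """Return one sample as a dictionary of named measurements."""

class InterfaceUnit:
    """Aggregates sensors behind one interface; sensors can be added or
    removed without affecting the rest of the system."""

    def __init__(self):
        self._sensors = {}

    def plug(self, name: str, sensor: Sensor):
        self._sensors[name] = sensor

    def unplug(self, name: str):
        self._sensors.pop(name, None)

    def poll(self) -> dict:
        return {name: s.read() for name, s in self._sensors.items()}

# Hypothetical adapters wrapping two COTS sensors.
class Lidar(Sensor):
    def read(self):
        return {"range_m": 12.3}

class Imu(Sensor):
    def read(self):
        return {"accel_mps2": [0.0, 0.0, 9.81]}

uiu = InterfaceUnit()
uiu.plug("lidar", Lidar())
uiu.plug("imu", Imu())
uiu.unplug("lidar")          # removing a sensor does not break the unit
print(sorted(uiu.poll()))    # → ['imu']
```

The point of the sketch is only that the rest of the system talks to the interface unit, never to an individual sensor, so a sensor sub-set can be re-configured without touching the consumers.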

Q15 [OG4] [All OGs]: Will the reference implementations (Orbital, Planetary) remain in hardware with INSES for the next phase, or are leased sensors sufficient to validate its performance in the common demonstration scenarios? 

A: The common building blocks will need to demonstrate ability to be integrated together. It remains to be defined precisely how the outputs of the first phase of OGs will be used in the next phases. Such matters will also be regulated by the Collaboration Agreement.

The objective is to keep the sensor hardware for the activities to be pursued in the subsequent calls (this is particularly true for the Planetary Track, where only ground demos are targeted). It can, however, be envisioned to reduce the procurement cost by not duplicating sensors that might be present in both reference implementations.

The EU will remain the owner of all OG deliverables.

Q16 [OG4]: Does OG4 entail any kind of software development other than the basic needs for interfacing, sensor control and data provision to OG3?

A: It is expected that the OG4 addresses the necessary developments as outlined in the guidance document (further detailed in the long version on the PERASPERA website e.g., RCOS software components).

Essentially, the software development within OG4 shall concern the basic needs for:

  • interfacing with other OGs (commands sent by OG2, data to be provided to OG3),
  • sensor control,
  • collection of sensor data,
  • control of potential other devices (e.g. illumination devices).

Some low-level data processing might be envisioned. The real need will be identified during the preliminary design, when the detailed interface between OG3 and OG4 will be defined.
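The basic software role just outlined, accepting commands, controlling a sensor and forwarding collected data, can be sketched informally as follows (illustrative only; class names, command strings and the clamping step are hypothetical, not part of the Guidelines):

```python
# Illustrative only: the minimal OG4 software role -- accept commands (as OG2
# would send), control a sensor, and publish collected data (as OG3 would
# consume). All names are hypothetical.

class SensorController:
    def __init__(self):
        self.powered = False
        self.subscribers = []      # e.g. the data-fusion side (OG3)

    def handle_command(self, cmd: str):
        """Commands as the Autonomy Framework (OG2) might issue them."""
        if cmd == "POWER_ON":
            self.powered = True
        elif cmd == "POWER_OFF":
            self.powered = False

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def acquire(self, raw: float):
        """Collect one sample and forward it; a simple clamp stands in for
        the low-level data processing mentioned above."""
        if not self.powered:
            return
        sample = max(0.0, raw)     # hypothetical low-level filtering
        for cb in self.subscribers:
            cb(sample)

received = []
ctrl = SensorController()
ctrl.subscribe(received.append)
ctrl.acquire(3.0)                  # ignored: sensor not powered
ctrl.handle_command("POWER_ON")
ctrl.acquire(-1.0)
ctrl.acquire(2.5)
print(received)                    # → [0.0, 2.5]
```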

Q17 [OG3] [OG4]: We are trying to determine whether our current project, a standalone instrument for orbital rendezvous that incorporates not only data acquisition but also high-level processing to extract the desired measurement (features relating to both OG3 and OG4), is admissible for the COMPET-4-2016 call.

A: First of all, we draw your attention to the text of the compendium, and particularly to the summary of the OG3 and OG4 challenge descriptions:

  • OG3: The challenge is the development of a software framework implementing data fusion techniques for various sensors such as LIDAR, Imagers, radar, sonar, IMUs, sun sensors, etc.
  • OG4: The challenge is to realize a suite of perception sensors that allow localization and map making for robotic inspection of orbital assets (under space representative conditions) and for planetary surface exploration.

This short summary clearly states that the SW development foreseen in OG3 aims at dealing with a large variety of sensor data types, not at focusing on a particular sensor. In the same way, the sensor suite to be considered in OG4 must deal with every kind of sensor in the specified set, not a single one. As written further down in the OG4 description, the emphasis is put on the modularity and flexibility of the inspection suite: “INSES shall be implemented with high modularity and flexibility in order to allow the sensor suite sub-set re-configuration. Each sensor unit shall be designed to be self-consistent with unified interfaces in order to be added/removed easily, without affecting the overall system functionality (plug-in like concept)”.

In addition, the relevance of addressing two OGs in the same proposal is addressed earlier in this FAQ, in Q12.

Q18 [All OGs]: Given the suggested milestones for OG3 in the Call document, Task 4 is scheduled to finish in Month 22 and the final project presentation is in Month 27, which means Task 5 is carried out between Months 23 and 27. However, the schedule table also indicates that a final acceptance meeting should take place in Month 24, only 2 months into the Task 5 activities. We are not sure what is expected to be delivered/completed at this final acceptance meeting. Can you help to clarify, please?

A: Because we are dealing with six different projects that need to be coordinated and aligned, and the products integrated in a timely and efficient manner, we in the PSA envisage that there will be natural teething problems with all the projects; in short, we don’t anticipate all the projects starting at the same time because of the amount of cat-herding involved. As such, please note the word “tentatively” before 24 months. It would be dependent upon the progress of the other OGs, particularly OG6. In any instance, your progress will be monitored by the SRC Board and any such flexibility can be discussed.

Q19 [OG3] [OG4] [All OGs]: Should we show, at the end of the project, that we master the whole set of sensors and that these sensors can offer the required interface and modularity capabilities, or should we build a complete robotic system including data processing and fusion functionalities?

A: The first part of the alternative is the right one. It is definitely not foreseen within OG4 to build a robotic system with data processing and fusion capabilities since this would overlap with OG3 activity.

Nevertheless, it shall be proven that the OG4 end product can be interfaced with the OG3 functionalities to demonstrate the various capabilities of the perception/navigation system. This type of activity will be performed within OG6, in coordination with all relevant OGs (OG3 for the most part). The demonstration requirements will also be elaborated in a collaborative mode.

Bidders’ attention must be drawn to what is indicated on page 5 of the “SRC Guidelines Space Robotics Technologies (COMPET-4-2016)” document:

“Even though the integration of OG outputs is in the future, all OGs need to interact in order to prepare for the future integration. The interaction among OGs will be based on:

  • Cross delivery of interface specifications
  • Cross delivery of preliminary/final outputs
  • Integration and demonstration of the final outputs of OGs in common test platforms

Each OG Consortium is expected to nominate an “interface engineer” in order to coordinate establishment and maintenance of interface specifications with other OGs.”

Q20 [COMPET-4-2016]: Some documents seem to have a similar target, especially the D3.4 Master Plan of SRC activities and the Guidelines for the Strategic Research Cluster on Space Robotics Technologies. Could you explain how we should interpret each of them, and/or which one we should focus on to prepare the proposal?

A: The Master Plan of activities was the internal document that we had to deliver to the Commission to define the call. The COMPET-4-2016 Guidelines (Space Robotic Technologies) are the “official” document for the 2016 Call. The Guidelines have been complemented by a more detailed (but not official) document, the Full Compendium Call 2016 Space Robotics Technologies, whose purpose is to clarify the technical details that were not covered in the official Guidelines.

A video detailing the key aspects to be covered in the proposal has also been uploaded in the post COMPET-4-2016 Video Briefing for Applicants and Evaluators.

Q21 [OG2]: To what extent does the Autonomy Framework (AF) need to provide support for collaboration between robots? The following reference in the planetary track is a bit ambiguous: “Rendezvous with the other planetary asset and achieve a final relative configuration that is compatible with the transfer of the module/sample to the desired destination by a manipulator arm”.

A: We assume you are referring to the specific phrase “other planetary asset”. In this case “asset” may mean: the lander/launcher; a sample that has been preserved by another robot that visited the surface previously; a scout rover; or any other man-made artefact. We do anticipate that robotic collaboration will be enabled by these technologies at some point in the future, but at this point, and for the purposes of this demonstration, it is simply one robot which we wish to see demonstrating autonomous planning capability. However, any allusion to the future exploitation of this technology in other key areas, such as machine-to-machine collaboration, would not do any harm to the bid.

Q22 [OG2]: Is it necessary to implement the ground station and the user interfaces to demonstrate the different levels of autonomy (tele-presence, tele-manipulation, semi-autonomy, full autonomy) inside OG2?

A: It is not necessary for OG2 to implement the ground station. OG2 shall be able to accept different levels of autonomy, but the external control is outside the scope of OG2. For example, if the rover is going to be tested in tele-manipulation, OG2 must be able to work and manage the resources at this level of autonomy, but it is OG6 that has to provide the necessary means for the tele-operation (joystick, graphical interface, etc.).

Q23 [OG2] [OG6] [All OGs]: What kind of testing is expected for OG2: simulation, or testing with the robotic platform provided by OG6? If the latter, what kind of test should we think about? The problem is that, while this 2016 call does not aim to fully integrate all OGs, OG2 strongly depends on OG1, OG3 and the rover or spacecraft. Consequently, if these are not integrated, OG2 would need to develop basic functionalities itself.

A: The test scenario for OG2 (and all other OGs) must be provided by OG6 in the test facilities (orbital servicing and planetary exploration platforms). It is the responsibility of OG6 to provide test facilities that are suitable and appropriate for all OG testing. That would include the provision of a Mars yard, a rover, etc., but also the operating system on which the Autonomy Framework (OG2) will run.

The full integration of the OGs is not going to be covered in this first call; however, if, for example, the consortia of OG2 (AF) and OG1 (RCOS) consider it better to do the final testing by running the Autonomy Framework (OG2) on the RCOS (OG1) instead of another operating system, this integration would be very welcome to the PERASPERA PSA. Full integration is beyond the scope and funding of this call, but partial integration is recommended, as it will strengthen the ties across the SRC.

Q24 [OG1] [OG2]: Regarding the conflict between the Functional Layer and the RCOS on FDIR: you mentioned in a previous email that in this call we should focus on the FD. However, the question remains how to split the FD responsibilities between OG1 and OG2.

A: The Autonomy Framework is responsible for the management of the system resources, so in our view the AF would look after this. OG1 will be responsible for the management and recovery of faults of the operating system.

Q25 [OG4]: Is the final deliverable of OG4 a complete, implemented hardware demonstrator sensor suite with all sensor elements mentioned on pp. 20-21 of SRC_Guidelines_Space_Robotics_Technologies (COMPET-4-2016).pdf? (“In detail, the reference implementations of the INSES shall include at least the following instruments on board of the robotic vehicle or spacecraft: ..”)

A: Yes, and this suite will be subjected to tests as defined by OG6. However, note that for the reference implementation you would need to provide an integrated test module as a fully functional representative of space models (breadboard / DM / STM / EQM). See this paraphrased text from the PSA Technical Compendium:

“In particular: procure/manufacture all parts needed to adapt the mechanical and thermal interfaces of each sensor to the agreed standard one. Also, procure/manufacture all harnesses and connections connecting the sensors to the rover/spacecraft.”

Q26 [OG4]: Is the objective of OG4 to develop the unified mechanical, data and power interfaces for all identified sensors, including potential redesign of the sensors to allow for such interfacing with INSES?

A: See Q25. Note “development” may include adaptation of COTS components where environmentally viable.

Q27 [OG4]: Does OG4 entail any kind of software development other than the basic needs for interfacing, sensor control and data provision to OG3?

A: The sensors will require software components compliant with the Operating System (OG1), so as to communicate with other software components, i.e. from the Autonomy Framework or the Data Fusion Framework. The sensors may also be required to undertake some basic preprocessing/filtering of data before passing it along to the Data Fusion Framework for full processing.

Q28 [OG4]: OG4 mentions contact sensors in the exteroceptive category, which are not mentioned in OG3. Does this refer to the force/torque sensors?

A: Generally speaking, contact sensors allow the detection of an interaction between a part of the robot and some other object belonging to the environment (e.g. the claws of the gripper touching an object). Of course, in some rare situations, the need to detect a possible contact between the robot and some other part of the robot itself cannot be excluded. Nevertheless, it is most reasonable to state that contact sensors belong to the exteroceptive category.

An exteroceptive sensor is usually considered as a means to obtain information about the outside world. It contributes to determining measurements of objects relative to the vehicle’s frame. From this standpoint, the difference between a contact sensor and a tactile sensor concerns the amount of information available (which is indeed richer with the tactile sensor). This criterion should not be used to sort sensors into the proprioceptive and exteroceptive categories.

On the other hand, proprioceptive sensors are responsible for monitoring the vehicle’s internal status (e.g. IMU, GPS, …) and self-maintenance (e.g. battery, current, heat monitoring…). According to this definition, torque sensors used to determine the vehicle’s attitude (slip estimation in rover applications) belong to the proprioceptive category, whereas torque sensors used as contact sensors belong to the exteroceptive category, as explained above.

Q29 [PERASPERA PSA]: What is the PERASPERA PSA contact point for questions? How can feedback be given on the PERASPERA Web portal, its content and its structure, and how can “news” be proposed as Post candidates for publication on the PERASPERA Web portal?

A: For matters regarding: Question, Web portal content and Posts:

For technical questions on the website implementation: