Frequently asked questions

Services

How to use the OGC WMS service?

The Web Map Service (WMS) is a standard protocol for serving pre-rendered georeferenced map images over the internet. VITO complies with the OGC WMS Implementation Standard (OGC 06-042). You can access the WMS directly with a simple web browser or with desktop tools such as QGIS.

VITO Sentinel WMS URL:

http://sentineldata.vito.be/ows?service=WMS&request=GetCapabilities

The available layers are:

  • CGS_S2_NDVI
  • CGS_S2_FAPAR
  • CGS_S2_RADIOMETRY_BROWSE

Example GetMap request:

http://sentineldata.vito.be/ows?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=CGS_S2_RADIOMETRY_BROWSE&FORMAT=image/png&TIME=2017-05-19T09:05:30.000Z&SRS=EPSG:4326&WIDTH=1920&HEIGHT=800&BBOX=1.7630521849289855,50.91914684746945,1.9539396361008605,50.99861922054515
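For scripted access, a GetMap request can also be performed from Python. Below is a minimal sketch using the requests library; the parameter values are simply those of the example URL above:

import requests

# Minimal sketch: perform the example GetMap request above and save the
# resulting image. Adjust LAYERS, TIME and BBOX to your needs.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "CGS_S2_RADIOMETRY_BROWSE",
    "FORMAT": "image/png",
    "TIME": "2017-05-19T09:05:30.000Z",
    "SRS": "EPSG:4326",
    "WIDTH": 1920,
    "HEIGHT": 800,
    "BBOX": "1.7630521849289855,50.91914684746945,1.9539396361008605,50.99861922054515",
}
response = requests.get("http://sentineldata.vito.be/ows", params=params)
response.raise_for_status()
with open("s2_radiometry.png", "wb") as f:
    f.write(response.content)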

How to use the OGC WMTS service?

The Web Map Tile Service (WMTS) is a standard protocol for serving pre-rendered georeferenced map tiles over the internet. VITO complies with the OGC WMTS Implementation Standard (OGC 07-057r7).

You can access the WMTS directly with a simple web browser or with desktop tools such as QGIS.

Terrascope WMTS URL:

https://services.terrascope.be/wmts?service=WMTS&request=GetCapabilities

The following layers are available:

Satellite/service        Layer name                  Description
Sentinel-1               CGS_S1_GRD_SIGMA0           Terrascope Sentinel-1 GRD σ0, per overpass, 10 m resolution
Sentinel-2               CGS_S2_FAPAR                Terrascope Sentinel-2 FAPAR, per overpass, 10 m resolution
                         CGS_S2_FCOVER               Terrascope Sentinel-2 FCOVER, per overpass, 10 m resolution
                         CGS_S2_LAI                  Terrascope Sentinel-2 LAI, per overpass, 10 m resolution
                         CGS_S2_NDVI                 Terrascope Sentinel-2 NDVI, per overpass, 10 m resolution
                         CGS_S2_NIR                  Terrascope Sentinel-2 false colour infrared, per overpass, 10 m resolution
                         CGS_S2_RADIOMETRY           Terrascope Sentinel-2 true colour, per overpass, 10 m resolution
PROBA-V                  PROBA-V_S10_TOC_COLOR       PROBA-V true colour, 10-day composite, 300 m resolution
                         PROBA-V_S10_TOC_NIR         PROBA-V false colour infrared, 10-day composite, 300 m resolution
                         PROBA-V_S5_TOC_100M_COLOR   PROBA-V true colour, 5-day composite, 100 m resolution
                         PROBA-V_S5_TOC_100M_NIR     PROBA-V false colour infrared, 5-day composite, 100 m resolution
Copernicus Land Service  CGLS_FAPAR300_V1            Copernicus Global Land Service FAPAR, 10-day composite
                         CGLS_LAI300_V1              Copernicus Global Land Service LAI, 10-day composite

To request a single tile, use a URL of this form:

https://services.terrascope.be/wmts?layer=CGS_S2_RADIOMETRY&style=default&tilematrixset=g3857&Service=WMTS&Request=GetTile&Version=1.0.0&Format=image%2Fpng&TileMatrix=9&TileCol=262&TileRow=171&TIME=2018-02-25T10%3A50%3A18.000Z
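As with the WMS example above, such a GetTile request can be scripted. A minimal sketch with the Python requests library, reusing the parameter values from the example URL:

import requests

# Minimal sketch: fetch the example tile above and save it. Adjust the
# layer, tile indices and TIME value to your area and date of interest.
params = {
    "Service": "WMTS",
    "Request": "GetTile",
    "Version": "1.0.0",
    "layer": "CGS_S2_RADIOMETRY",
    "style": "default",
    "tilematrixset": "g3857",
    "Format": "image/png",
    "TileMatrix": 9,
    "TileCol": 262,
    "TileRow": 171,
    "TIME": "2018-02-25T10:50:18.000Z",
}
response = requests.get("https://services.terrascope.be/wmts", params=params)
response.raise_for_status()
with open("tile.png", "wb") as f:
    f.write(response.content)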

Sentinel 3

When will the Sentinel-3 data be available?

Sentinel-3 synergy (SYN) products will be made available through the Terrascope platform as soon as they are officially released by ESA. According to the latest Sentinel-3 mission status reports, SYN products based on Sentinel-3A data are currently only available to expert users, with an official release planned after upgrades of the processor baseline. Furthermore, it should be noted that both Sentinel-3A and 3B acquisitions are required for fully operational SYN products comparable to the VGT and PROBA-V S1 and S10 products. Sentinel-3B is currently in the commissioning phase, with routine operations expected to start in 2019.
VITO is a member of the Sentinel-3 Mission Performance Center (MPC). As Expert Support Laboratory (ESL), VITO provides expert analyses with respect to the OLCI L1 radiometry and the SYN VGT products. Within the ESL Level 2 LAND VAL group, the objective is to validate the Sentinel-3 SYN VGT products and to verify the similarity of these products with the combined time series of SPOT-VGT and PROBA-V.

Sentinel 2

What does the classification map look like?

All products are delivered together with a classification map, which gives an indication of the pixel quality of the delivered product. The values and their meanings are given in the table below.

Label   Classification
0       NO_DATA
1       SATURATED_OR_DEFECTIVE
2       DARK_AREA_PIXELS
3       CLOUD_SHADOWS
4       VEGETATION
5       BARE_SOIL
6       WATER
7       CLOUD_LOW_PROBABILITY
8       CLOUD_MEDIUM_PROBABILITY
9       CLOUD_HIGH_PROBABILITY
10      THIN_CIRRUS
11      SNOW

Table: Pixel quality classification map
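For example, the classification map can be used to mask unreliable pixels. A minimal sketch with rasterio and numpy (the file name is a hypothetical placeholder; values 3 and 7 to 10 correspond to cloud shadows and clouds in the table above):

import numpy as np
import rasterio

# Hypothetical scene classification file from an S2 product.
with rasterio.open("S2A_..._SCENECLASSIFICATION_20M_V101.tif") as src:
    scene_class = src.read(1)

# Mask cloud shadows (3) and all cloud classes (7, 8, 9, 10).
unusable = np.isin(scene_class, [3, 7, 8, 9, 10])
print("unusable pixels: %.1f%%" % (100.0 * unusable.mean()))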

What are the processing steps for the Sentinel-2 products?

The figure below shows a high-level diagram of the Sentinel-2 processing workflow performed at VITO.

More information on the different processing steps can be found in the VITO Sentinel-2 Products User Manual.

 

What is the file format of the Sentinel-2 derived biophysical products?

All Sentinel-2 image files are delivered in the GeoTIFF format. The accompanying metadata file is in XML format following the INSPIRE metadata standard (ISO 19115).

VITO offers the following Sentinel-2 vegetation indices or biophysical parameters: fAPAR, fCover, LAI, CCC, CWC and NDVI. Each vegetation index product contains six files: four data files (including the CloudMask, the ShadowMask and the Scene Classification), a metadata file describing the data (INSPIRE format) and a Quicklook image. All files, except the Scene Classification files (20 m resolution), have a spatial resolution of 10 m.

The table below lists the technical information of the Sentinel-2 derived products, providing the information needed to calculate the physical values from the digital numbers stored in the files. The physical value is obtained with the following formula:

Physical Value = Scaling * Digital Number + Offset

 

                      FAPAR    FCOVER   LAI      NDVI
Physical min          0.0      0.0      0.0      -0.08
Physical max          1.0      1.0      10.0     0.92
Digital number min    0        0        0        0
Digital number max    200      200      250      250
Scaling               1/200    1/200    10/250   1/250
Offset                0.0      0.0      0.0      -0.08
No data               255      255      255      255
Data type             Byte     Byte     Byte     Byte
Saturation min*       /        /        /        -1.0
Saturation max**      /        /        /        1.0

Table: Sentinel-2 derived vegetation products technical information

*Values between saturation min and physical min will be set to physical min before quantization is applied.

** Values between saturation max and physical max will be set to physical max before quantization is applied.
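As a worked illustration, the sketch below applies this formula to NDVI, using the scaling, offset and no-data values from the table above; the input digital numbers are hypothetical:

import numpy as np

# Hypothetical NDVI digital numbers as read from a GeoTIFF band (Byte).
dn = np.array([0, 125, 250, 255], dtype=np.uint8)

scaling = 1.0 / 250.0   # NDVI scaling from the table above
offset = -0.08          # NDVI offset
nodata = 255            # NDVI no-data value

# Apply Physical Value = Scaling * Digital Number + Offset,
# masking the no-data pixels.
ndvi = np.where(dn == nodata, np.nan, scaling * dn.astype(np.float32) + offset)
print(ndvi)             # [-0.08, 0.42, 0.92, nan]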

FAPAR

The Fraction of Absorbed Photosynthetically Active Radiation quantifies the fraction of solar radiation absorbed by live leaves for photosynthesis. It therefore refers only to the green and living elements of the canopy. FAPAR depends on the canopy structure, the optical properties of the vegetation elements, atmospheric conditions and the angular configuration. To overcome the latter dependency, a daily integrated FAPAR value is assessed.

The figure below shows the files included in the S2 FAPAR product.

Figure: S2 FAPAR product file list

 

LAI

The Leaf Area Index is defined as half the total area of green elements of the canopy per unit horizontal ground area. The satellite-derived value corresponds to the total green LAI of all the canopy layers, including the understory which may represent a very significant contribution, particularly for forests. Practically, the LAI quantifies the thickness of the vegetation cover.

The figure below shows the files included in the S2 LAI product.

Figure: S2 LAI product file list

FCOVER

The Fraction of Vegetation Cover corresponds to the fraction of ground covered by green vegetation. Practically, it quantifies the spatial extent of the vegetation. Because it is independent of the illumination direction and sensitive to the vegetation amount, fCover is a very good candidate to replace classical vegetation indices for the monitoring of ecosystems.

The figure below shows the files included in the S2 fCover product.

Figure: S2 FCOVER product file list

 

NDVI

The Normalized Difference Vegetation Index is an indicator of the greenness of the biomes. As such, it is closely linked to the FAPAR. More information on the NDVI can be found in section 2.6.

The figure below shows the files included in the S2 NDVI product.

Figure: S2 NDVI product file list

What is the file format of the Sentinel-2 radiometric products?

All Sentinel-2 image files are delivered in the GeoTIFF format. The accompanying metadata file is in XML format following the INSPIRE metadata standard (ISO 19115).

The Sentinel-2 TOC products include several files which are the output of the iCOR processor for the atmospheric correction and of the Sen2Cor processor for the masks (Cloud/Shadow) and the Scene Classification.

The figure below shows the files included in the S2 TOC product.

Figure: S2 TOC product file list

The S2 TOC spectral bands span from the visible and near-infrared to the shortwave infrared, at different resolutions:

  • 4 bands at 10 m;
  • 6 bands at 20 m;
  • 1 band at 60 m.

The AOT is provided at its native 60 m resolution.

Note that B09 and B10 are not delivered, as these are the water vapour and cirrus bands respectively.

The table below lists the spectral bands together with their resolution and central wavelength.

Table: List of the S2 TOC Spectral bands

The physical pixel values in the S2 TOC files are converted from floating point values into integers, mainly to reduce file size. The table below lists the technical information of the Sentinel-2 TOC product, with the information needed to calculate the physical values from the digital numbers stored in the files. The physical value is obtained with the same formula:

Physical Value = Scaling * Digital Number + Offset

                      TOC bands    AOT
Physical min          -1.0         0.00
Physical max          1.0          2.5
Digital number min    -10000       0
Digital number max    10000        250
Scaling               1.0/10000    1.0/10000.0
Offset                0.0          0.0
No data               32767        32767
Data type             Int16        Int16

Table: Sentinel-2 TOC technical information

What are the general naming conventions and format of the Sentinel-2 products?

The naming convention for the Sentinel-2 products is:

<MISSION>_<DATE>T<TIME>Z_<GRIDID>_<CONTENT>_<RESOLUTION>_V<VERSION>_<OP>

with:

<MISSION>      Mission ID (S2A/S2B)
<DATE>         Start date of the segment identifier (format: YYYYMMDD)
<TIME>         Start time (UTC) of the segment (format: hhmmss)
<GRIDID>       ID of the granule/tile in UTM/WGS84 projection
<CONTENT>      Content of the file; for details, see the table below
<RESOLUTION>   Resolution of the product/file (not always present)
<VERSION>      Version identifier, three digits starting from '101' for the first operational version
<OP>           Optional, only used for the QuickLook (QL)

Example: S2A_20160908T105416Z_31UFS_FAPAR_10M_V101
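For scripted processing it can be handy to parse these file names. Below is a minimal sketch using Python's re module; the pattern follows the convention above, and the group names are our own:

import re

# Regex following the naming convention above; the resolution and
# trailing <OP> fields are optional.
PATTERN = re.compile(
    r"^(?P<mission>S2[AB])_"
    r"(?P<date>\d{8})T(?P<time>\d{6})Z_"
    r"(?P<gridid>\w{5})_"
    r"(?P<content>[A-Z0-9-]+?)"
    r"(?:_(?P<resolution>\d+M))?"
    r"_V(?P<version>\d{3})"
    r"(?:_(?P<op>\w+))?$"
)

m = PATTERN.match("S2A_20160908T105416Z_31UFS_FAPAR_10M_V101")
if m:
    print(m.groupdict())
    # {'mission': 'S2A', 'date': '20160908', 'time': '105416',
    #  'gridid': '31UFS', 'content': 'FAPAR', 'resolution': '10M',
    #  'version': '101', 'op': None}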

Content               Description
TOC                   Top Of Canopy total product
TOC-B01               Top of Canopy B01
TOC-B02               Top of Canopy B02
TOC-B03               Top of Canopy B03
TOC-B04               Top of Canopy B04
TOC-B05               Top of Canopy B05
TOC-B06               Top of Canopy B06
TOC-B07               Top of Canopy B07
TOC-B08               Top of Canopy B08
TOC-B08A              Top of Canopy B08A
TOC-B11               Top of Canopy B11
AOT                   Aerosol Optical Thickness
CLOUDMASK             Cloudmask file
SHADOWMASK            Shadowmask file
SCENECLASSIFICATION   Scene Classification file
FAPAR                 Fraction of Absorbed Photosynthetically Active Radiation
FCOVER                Fraction of green Vegetation Cover
LAI                   Leaf Area Index
NDVI                  Normalized Difference Vegetation Index

Table: Possible values for <CONTENT>

Which region of interest is available now for the Sentinel-2 products?

Initially, VITO generates the Sentinel-2 products covering Belgium. The following grids cover the area of Belgium: 31UET, 31UFT, 31UDS, 31UES, 31UFS, 31UGS, 31UER, 31UFR, 31UGR, 31UFQ. Only these tiles are downloaded and processed in the initial phase of the VITO Sentinel-2 project. The figure below outlines in red the grids covering Belgium that are downloaded and processed by VITO. The map is based on an OpenStreetMap layer.

Figure:  Sentinel-2 grids covering Belgium

 

Sentinel 1

What is the file format of the Sentinel-1 products?

SENTINEL data products are distributed using a SENTINEL-specific variation of the Standard Archive Format for Europe (SAFE) format specification. The SAFE format has been designed to act as a common format for archiving and conveying data within ESA Earth Observation archiving facilities. SAFE was recommended for the harmonisation of the GMES missions by the GMES Product Harmonisation Study.

The SENTINEL-SAFE format wraps a folder containing image data in a binary data format and product metadata in XML. This flexibility allows the format to be scalable enough to represent all levels of SENTINEL products.

A SENTINEL product refers to a directory folder that contains a collection of information. It includes:

  • a 'manifest.safe' file which holds the general product information in XML
  • subfolders for measurement datasets containing image data in various binary formats
  • a preview folder containing 'quicklooks' in PNG format, Google Earth overlays in KML format and HTML preview files
  • an annotation folder containing the product metadata in XML as well as calibration data
  • a support folder containing the XML schemas describing the product XML.

(Reference: https://sentinel.esa.int/web/sentinel/user-guides/sentinel-1-sar/data-formats/safe-specification)

Which region of interest is available now for the Sentinel-1 products?

What are the processing steps for the Sentinel-1 GRD sigma0 product?

A full description can be found in the Terrascope Sentinel-1 Algorithm Theoretical Base Document (ATBD).

Notebooks

Leaflet support in older (<3.6) Python versions?

In the JupyterLab interface, Leaflet support is broken for Python 3.5. However, it can still be used if you switch to the Jupyter classic interface (via Help > Launch Classic Notebook). Another option is migrating to Python 3.6, where Leaflet works in both the lab and classic interfaces.

How to work with Notebooks?

Notebooks are only enabled for users on specific request. If you want notebook access, use the 'Request notebook access' form.

On https://notebooks.terrascope.be, you can log in to the notebooks application with your Terrascope username and password, the same account as used on www.vito-eodata.be or the PROBA-V MEP.

Notebook samples

By default, some sample notebooks are provided in the Private/notebook-samples folder. You may want to run git pull inside this directory to get the latest version.
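For example, assuming the samples live under the Private folder linked from your home directory:

cd ~/Private/notebook-samples
git pull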

The samples are subdivided in two sections:

  • datasets: notebooks using either Sentinel or PROBA-V data
  • tools: notebooks using tools provided by the Terrascope platform (e.g. SNAP, R, catalogclient, etc.)

Sharing notebooks

Notebooks can be shared with other Terrascope users by moving them to the Public folder. Your notebook then becomes accessible to other users under /data/users/<your_username>/<path_to_notebook>.

Installing additional packages

You can install additional Python packages. For a Python 2.7 notebook, include a cell like this to install the mpld3 library:

import sys
! pip27 install --user mpld3

The notebook environment also supports opening a terminal, in which this command can be executed as well. Packages are installed in your home directory, which is persistent across notebook restarts.

 

What is a Notebook?

A notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.
It is based on the Open Source Jupyter notebooks application, and tailored to the needs of remote sensing users. Each notebook has direct access to the Terrascope, PROBA-V, SPOT-VGT and Copernicus Global Land datasets.

Developer Guide

How to use Spark for distributed processing?

To speed up processing, it is often desirable to distribute the work over a number of machines. The easiest way to do this is to use the Apache Spark processing framework. The fastest way to get started is to read the Spark documentation: https://spark.apache.org/docs/2.3.3/quick-start.html.

The Spark version installed on our cluster is 2.3.3, so it is recommended to stick with this version. It is, however, not impossible to use a newer version if really needed. Spark is also installed on your virtual machine, so you can run 'spark-submit' from the command line after setting the following two environment variables:

export SPARK_MAJOR_VERSION=2
export SPARK_HOME=/usr/hdp/current/spark2-client

To run jobs on the Hadoop cluster, the 'cluster' deploy mode has to be used, and you need to authenticate with Kerberos. For the authentication, simply run 'kinit' on the command line; you will be asked for your password. Two other useful commands are 'klist', which shows whether you are authenticated, and 'kdestroy', which clears all authentication information. After some time your login expires, so you will need to run 'kinit' again.

A Python Spark example is available which should help you get started.
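Putting this together, submitting a job to the cluster could look as follows (a minimal sketch; myscript.py is a hypothetical script name):

kinit
spark-submit --master yarn --deploy-mode cluster myscript.py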

Resource management

Spark jobs run on a shared processing cluster. The cluster divides the available resources among all running jobs, based on a number of parameters.

Memory

To allocate memory to your executors, there are two relevant settings:

  • the amount of memory available for the Spark 'Java' process: --executor-memory 1G
  • the amount of memory for your Python or R script: --conf spark.yarn.executor.memoryOverhead=2048

If you need more detailed tuning of the memory management inside the Java process, you can use: --conf spark.memory.fraction=0.05

Number of parallel jobs

The number of tasks that are processed in parallel can be determined dynamically by Spark. To enable this, use these parameters:

--conf spark.shuffle.service.enabled=true --conf spark.dynamicAllocation.enabled=true

Optionally, you can set upper or lower bounds:
--conf spark.dynamicAllocation.maxExecutors=30 --conf spark.dynamicAllocation.minExecutors=10

If you want a fixed number of executors, use:
--num-executors 10

We do not recommend this, as it reduces the cluster manager's ability to allocate resources optimally.
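Combined, a submission using the resource settings above could look like this (a sketch with the example values; myscript.py is a hypothetical script name):

spark-submit \
  --master yarn --deploy-mode cluster \
  --executor-memory 1G \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=10 \
  --conf spark.dynamicAllocation.maxExecutors=30 \
  myscript.py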

Dependencies

A lot of commonly used Python dependencies are preinstalled on the cluster, but in some cases you may want to provide your own.

To do this, you first need a package containing your dependency. PySpark supports zip, egg, or whl packages. The easiest way to obtain such a package is with pip:

pip download Flask==1.0.2

This downloads the package and all of its dependencies. Pip prefers to download a wheel if one is available, but may also return a '.tar.gz' file, which you will need to repackage as a zip or wheel.

To repackage a tar.gz as wheel:

tar xzvf package.tar.gz

cd package

python setup.py bdist_wheel

Note that a wheel may contain files that depend on the Python version you are using, so make sure you use the right Python (2.7 or 3.5) to run this command.

Once the wheel is available, you can include it in your spark-submit command:

--py-files mypackage.whl

Notifications

If you want to receive a notification (e.g. an email) when the job reaches a final state (succeeded or failed), you can add a SparkListener to the SparkContext for Java or Scala jobs:

SparkContext sc = ...;
sc.addSparkListener(new SparkListener() {
    @Override
    public void onApplicationEnd(SparkListenerApplicationEnd applicationEnd) {
        // send email
    }
    // implement other onXXX methods as needed
});

You can also implement a SparkListener and specify the classname when submitting the Spark job:

spark-submit --conf spark.extraListeners=path.to.MySparkListener ...

In PySpark, this is a bit more complicated as you will need to use Py4J:

class PythonSparkListener(object):
    def onApplicationEnd(self, applicationEnd):
        # send email
        pass

    # also implement the other onXXX methods

    class Java:
        implements = ["org.apache.spark.scheduler.SparkListener"]

sc = SparkContext()
sc._gateway.start_callback_server()

listener = PythonSparkListener()
sc._jsc.sc().addSparkListener(listener)

try:
    # your Spark logic goes here
    ...
finally:
    sc._gateway.shutdown_callback_server()
    sc.stop()

In a future release of the JobControl dashboard, we will add the possibility to send an email automatically when the job reaches a final state.

 

How to launch interactive applications: QGis?

Instead of launching applications in a virtual desktop environment, it is often more convenient to start an application directly. The X2Go program explained on the previous page supports this. Follow the same steps for setting up a connection as described there, but select 'Published applications' under 'Session type' when you set up your session.

Now click on the circle icon after launching this session as shown here:

Figure: X2Go published applications icon

This shows a window where you can select an application; clicking Start will launch QGIS directly, as if it were running on your own machine, but with access to the Terrascope EO data archive.

Figure: Launching QGIS

 

How to get data to the Terrascope platform?

Introduction

Each Terrascope VM user has its own Public and Private folder, available under /data/users/Public/username and /data/users/Private/username. There are also links to these folders in your Terrascope VM home folder (/home/username).

The Public folder can be used to share data with other Terrascope users: all Terrascope users can read the data, but it is only writable by you. The Private folder is only readable and writable by you. These folders are accessible from the Terrascope user VMs, the Hadoop cluster and the notebooks.

Data can be uploaded by SFTP, as described below.

Importing small amounts of data on the VM

To import data of a few gigabytes or less to the user VM, the easiest way is to use an SFTP client.
For Windows, WinSCP is a good choice.
Download WinSCP (or any other SFTP client) and start it.

Connect WinSCP to our SFTP server: filetransfer.terrascope.be.
The login credentials are the same as for your VM: your Terrascope username and password.

On the SFTP server you will find the folders that are shared across the entire Terrascope cluster. Your Private folder is under /data/users/Private/username and your Public folder is under /data/users/Public/username.

How to write Python scripts?

Here are some important tips when working with Python.

Recommended versions

Python 2.7 is preinstalled and configured on all user VMs and on the processing cluster; it is currently the default version. Python 3.5 and 3.6 are also available, on the VMs and on the cluster.

To switch to the Python 3.5 environment, you should run:

scl enable rh-python35 bash

Once inside this environment, all commands will default to using Python 3.5.

To use the Python 3.6 environment, run the python3.6 binary on your VM directly.

 

Installing packages

Even though we already provide a number of popular Python packages by default, you will probably need to install some new ones at some point. We recommend doing this with the pip package manager. For example:

pip install --user owslib

This installs the owslib library. The '--user' argument is required to avoid needing root permissions and to ensure that the correct Python version is used. Do not use the yum package manager to install packages!

More advanced options are explained here: https://proba-v-mep.esa.int/documentation/manuals/python-package-management

On the processing cluster

To run your code on the cluster, you need to make sure that all dependencies are available there. It is not possible to install packages on the cluster yourself, but feel free to request the installation of a specific package.

If more freedom is needed, it is also possible to submit your dependencies together with your application code. This is simplified if you are already using a Python virtual environment for your development.

At the time of writing, these packages are installed on the cluster and the user VMs:

distribute, pandas, numpy, scipy, matplotlib, sympy, nose, seaborn, rasterio, requests, python-dateutil, pyproj, netCDF4, Pillow, lxml, gdal

Sample project

Two sample Python/Spark projects are available on Bitbucket; they show how to use Spark (https://spark.apache.org/) for distributed processing on the Terrascope Platform.

The basic code sample implements an (intentionally) very simple computation: for each PROBA-V tile in a given bounding box and time range, a histogram is computed. The results are then summed and printed. The computation of the histograms runs in parallel.

The project's README file includes additional information on how to run it and inspect its output.

The advanced code sample implements a more complex computation: it calculates the mean of a time series of PROBA-V tiles and outputs it as a new set of tiles. The tiles are split into sub-tiles for increased parallelism. A mean is just one example of an operation that can be applied to a time series.

Application specific files

Auxiliary data

When your script depends on specific files, there are a few options:

  • Use the --files and --archives options of the spark-submit command to put local files in the working directory of your program.
  • Put the files on a network drive.
  • Put the files on the HDFS shared filesystem, where they can be read directly from Spark, or placed in your working directory using --files or --archives.

The second option is mostly recommended for larger files. The other options are quite convenient for distributing various smaller files; Spark has a caching mechanism in place to avoid unneeded file transfers across multiple similar runs.
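For example, distributing a small auxiliary file with --files could look like this (a sketch; the file and script names are hypothetical):

spark-submit --master yarn --deploy-mode cluster --files lookup_table.csv myscript.py

Inside the script, the file is then available in the working directory and can simply be opened as open('lookup_table.csv').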

Compiled binaries

When using compiled code, such as C++, code compiled on your VM will also work on the cluster. Hence you can safely use the compilation instructions of your tool to compile a binary, and then distribute it in a similar way as any other type of auxiliary data. In your script, you may need to configure environment variables such as PATH and LD_LIBRARY_PATH to ensure that your binaries can be found and are being used.

 

Virtual Machines

How to manage user defined Aliases and Environment Variables?

Since the ~/.bashrc file is automatically managed, changes to this file will be reset.

To make sure users can still create aliases or set environment variables, the file ~/.user_aliases can be used for this purpose. If this file does not exist yet, you can create it.
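For example, a ~/.user_aliases file could contain entries like these (purely illustrative):

# user-defined aliases and environment variables
alias ll='ls -alh'
export MY_WORKDIR=/data/users/Private/username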

How to enable file sharing for your User VM?

X2GO also provides file sharing support between your own PC and the User VM. On the 'Shared folders' tab, add the local folder(s) you want to become available in your User VM.

Typically, you will want the folder to become available when the X2GO session is started. In that case, select the 'automount' option.

The local folder will be mounted using FUSE. The hard part is locating the mount point of the local folder on the User VM. The easiest way to find out is to run the following command on the User VM:

[daemsd@daemsdvm ~]$ mount | grep x2go
dirkd@127.0.0.1:/home/dirkd/Documents on /tmp/.x2go-daemsd/media/disk/_home_dirkd_Documents type fuse.sshfs (rw,nosuid,nodev,relatime,user_id=30320,group_id=631600014,default_permissions)

This shows that the folder is mounted on /tmp/.x2go-daemsd/media/disk/_home_dirkd_Documents.

Note: the Windows X2GO client seems to generate the wrong type of SSH keys during installation (DSA instead of RSA). If you experience problems getting file sharing to work on Windows, this could be the cause. You can fix it by copying the DSA keys under C:\Users\<username>\.x2go\etc and replacing 'dsa' with 'rsa' in the filename. If you still experience problems, you can try to uninstall X2GO, remove the .x2go folder in your home directory and install the latest X2GO version. During the installation, make sure you enable the debug output option. X2GO can then be started in debug mode, providing detailed log messages which are useful for us to resolve your problem.

What is the VM backup policy?

Your user virtual machine is not backed up!

In line with other cloud environments, virtual machines should not be regarded as being persistent. This means that all data in your home directory and other system directories may be lost in case of a system failure.

To solve this, here are some suggestions:

  • Use version control for anything really important.
  • Use the 'Public' and 'Private' folders in your home directory; these are on a shared filesystem that is more persistent than regular folders, but they do not have snapshots. So if you remove or break a file, it is still lost.

How to access your Terrascope VM?

There are two ways to get access to your Terrascope VM: either by accessing the graphical desktop of your VM, or through the command line.

Graphical access is the easiest option if you are not comfortable with using a Linux terminal, but requires a stable and reasonably fast internet connection to the Terrascope cloud.

You can sign in using your Terrascope portal account. Make sure you use lowercase characters for your username: e.g. 'Username90' should be transformed to 'username90'.

 

How to access your VM is explained in the following videos.

 

Commandline access

Commandline access is provided through SSH. Download and install an SSH client (e.g. PuTTY on Windows) if needed.

On Linux you can use the command: ssh -p port username@mep.vgt.vito.be

If your SSH connection gets terminated or 'hangs' after the connection has been idle for a while, changing your local SSH settings may fix this.

By adding 'ServerAliveInterval 60' to your SSH config, the client sends a null packet to the server every 60 seconds to keep the connection alive; the number is the time in seconds between null packets.
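A minimal ~/.ssh/config entry could look like this (the host alias is our own; use the host and port you received for your VM):

Host terrascope-vm
    HostName mep.vgt.vito.be
    Port <port>
    User <username>
    ServerAliveInterval 60

You can then simply connect with 'ssh terrascope-vm'.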

Desktop access

The following steps are required to access the desktop.

X2Go client installation

Download and install the X2Go client program for your operating system, as described here: http://wiki.x2go.org/doku.php/doc:installation:x2goclient

Create Toolbox VM session

Start the X2Go client and create a new session. The host and SSH port are provided to you when you request a new toolbox. The login and password are the same as for the Terrascope portal.

Make sure to select 'XFCE' as the session type at the bottom of the window. Other session types will not work unless you install them manually. XFCE is a lightweight desktop environment, suitable for use on virtual machines.

Also change the compression method to '4k-png' on the 'Connection' tab:

When this is done, click OK and you should be able to log in to your VM. When successful, you end up in a desktop environment that looks like this:

The 'data' folder links to the entire Terrascope EO archive. The 'tiffdata' folder links to all the TIFF files.

 

What are the costs for a Terrascope VM?

Your VM with the standard configuration (4 CPUs, 8 GB RAM, 4 GB swap, 80 GB root disk) is provided for free.

If specific projects or operational services need more resources in terms of CPU, RAM or storage, please do not hesitate to contact VITO to see how we can help you achieve your goals.

What is a user virtual machine?

With the user Virtual Machine (VM), a developer or researcher gets a Virtual Research Environment with access to the complete Terrascope data archive and a powerful set of tools and libraries to work with the data (e.g. the SNAP toolbox, GRASS GIS, QGIS) or to develop, debug and test applications (R, Python or Java).

The user Virtual Machine:

  • comes with several pre-installed command line tools, desktop applications and developer tools useful for exploiting the data available in Terrascope (e.g. GDAL, QGIS, GRASS GIS, SNAP, Python).
  • provides access to the full Terrascope EO data archive.
  • targets an audience of scientists and developers building applications that use Terrascope EO data. After the prototyping phase, the Terrascope processing environment can be used for larger-scale processing.

How to request a new Terrascope VM?

Note that you need to be signed in on the Terrascope portal using your Terrascope account or your PROBA-V distribution portal account to request an OpenStack Virtual Machine (VM).

After receiving your request for a VM, the Terrascope team will validate it and provide feedback within two working days; a VM is provided when your request is granted.

You will receive an e-mail explaining how to access your personal VM.

Your VM with the standard configuration (4 CPUs, 8 GB RAM, 4 GB swap, 80 GB root disk) will be provided for free. If specific projects or operational services need more resources in terms of CPU, RAM or storage, please do not hesitate to contact us at info@terrascope.be to see how we can help you achieve your goals.

The Terrascope VM runs on the OpenStack private cloud hosted by VITO.

How to access your VM is explained in the following videos.
