This forum applies to Terrascope and Proba-V MEP. Please note that the forum is in English only.

Virtual machines

VM not rebooting

When I started my VM, I got this message:

disks on harjit9090 is UNKNOWN!
Info: Remote Icinga instance 'harjit9090.uservm.rscloud.vito.be' is not connected to 'icinga.intern.vgt.vito.be'
When: 2022-05-13 15:25:36 +0200
Service: disks
Host: harjit9090.uservm.rscloud.vito.be
Groups: uservms, uservm_prd
IPv4: 192.168.116.80
Comment by icinga-admin: The root disk on your PROBA-V MEP/Terrascope VM is almost full. Please use the $HOME/Private and $HOME/Public folders to store your data. If the root disk is full, you won't be able to log in anymore.
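A small shell sketch of how one might track down and move the data that is filling the root disk, following the advice in the admin comment. The directory name moved below is a placeholder, and the assumption is that $HOME sits on the root disk while Private and Public are separate mounts:

```
# See which top-level directories under $HOME are taking up space,
# skipping the Private/Public mounts mentioned in the alert
du -h --max-depth=1 --exclude=Private --exclude=Public $HOME | sort -h | tail -15

# Move large data off the root disk (example path only)
mv $HOME/large_dataset $HOME/Private/

# Verify free space on the root disk afterwards
df -h /
```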

Use the gfortran compiler with netCDF

Hi there, I am trying to use gfortran to compile my Fortran code with options like:

gfortran test.f90 -o test_nc.exe -I/usr/include -lnetcdff

I got the error:

Fatal Error: Can't open module file 'netcdf.mod' for reading at (1): No such file or directory

I think this relates to the installed version, so I checked it with the command nc-config --all and got the output below. Is it possible to have a netCDF version that also supports gfortran? Thanks!

This netCDF 4.3.3.1 has been built with the following features:
  --cc          -> gcc
  --cflags      -> -I/usr/include -I/usr/include/hdf
  --libs        ->
  --has-c++     -> no
  --cxx         ->
  --has-c++4    -> no
  --cxx4        ->
  --fc          ->
  --fflags      ->
  --flibs       ->
  --has-f90     -> no
  --has-dap     -> yes
  --has-nc2     -> yes
  --has-nc4     -> yes
  --has-hdf5    -> yes
  --has-hdf4    -> yes
  --has-pnetcdf -> no
  --prefix      -> /usr
  --includedir  -> /usr/include
  --version     -> netCDF 4.3.3.1
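For context, the nc-config output shows the system netCDF was built without Fortran support (--has-f90 -> no and no --fc entry), so netcdf.mod simply isn't present. A hedged sketch of a user-space workaround, assuming conda/miniconda is available on the VM and using conda-forge package names; this is not an official Terrascope procedure:

```
# Build the Fortran bindings and a matching compiler in a conda environment.
# Fortran .mod files are compiler-specific, so use the env's own gfortran via $FC.
conda create -n ncfortran -c conda-forge netcdf-fortran gfortran_linux-64
conda activate ncfortran

# nf-config (shipped with netcdf-fortran) reports the correct Fortran flags
nf-config --fflags --flibs
$FC test.f90 -o test_nc.exe $(nf-config --fflags) $(nf-config --flibs)
```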

Docker image from private Docker Hub

Hello, I created a Docker image following the guide, and uploaded it to a private Docker Hub registry account. When I try to send a script via spark-submit, I get an error with the following message. If I try a public image I don't get this problem, so I guess I need to enter my login information somewhere. Is it possible to do this? Or do I need to upload my images somewhere else?

Application report for application_1640081147608_3034 (state: FAILED)
21/12/22 14:22:32 INFO Client:
     client token: N/A
     diagnostics: Application application_1640081147608_3034 failed 1 times (global limit =2; local limit is =1) due to AM Container for appattempt_1640081147608_3034_000001 exited with exitCode: 7
Failing this attempt.Diagnostics: [2021-12-22 14:22:26.758]Exception from container-launch.
Container id: container_e5006_1640081147608_3034_01_000001
Exit code: 7
Exception message: Launch container failed
Shell error output: image: registry.hub.docker.com/flucio/gedap is not trusted.
Disable mount volume for untrusted image
image: registry.hub.docker.com/flucio/gedap is not trusted.
Disable mount volume for untrusted image
image: registry.hub.docker.com/flucio/gedap is not trusted.
Disable cap-add for untrusted image
Docker capability disabled for untrusted image
Unable to find image 'registry.hub.docker.com/flucio/gedap:latest' locally
/usr/bin/docker: Error response from daemon: pull access denied for registry.hub.docker.com/flucio/gedap, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See '/usr/bin/docker run --help'.
Shell output: main : command provided 4
main : run as user is luciof
main : requested yarn user is luciof
Creating script paths...
Creating local dirs...
Getting exit code file...
Changing effective user to root...
Wrote the exit code 7 to /data3/hadoop/yarn/local/nmPrivate/application_1640081147608_3034/container_e5006_1640081147608_3034_01_000001/container_e5006_1640081147608_3034_01_000001.pid.exitcode
[2021-12-22 14:22:26.780]Container exited with a non-zero exit code 7.
[2021-12-22 14:22:26.782]Container exited with a non-zero exit code 7.
For more detailed output, check the application tracking page: https://epod-master2.vgt.vito.be:8090/cluster/app/application_1640081147608_3034 Then click on links to logs of each attempt. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1640179342928
     final status: FAILED
     tracking URL: https://epod-master2.vgt.vito.be:8090/cluster/app/application_1640081147608_3034
     user: luciof
21/12/22 14:22:32 INFO Client: Deleted staging directory hdfs://hacluster/user/luciof/.sparkStaging/application_1640081147608_3034
Exception in thread "main" org.apache.spark.SparkException: Application application_1640081147608_3034 finished with failed status
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:1269)
        at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1627)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
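Two separate things seem to be going on in this log. The "is not trusted" lines come from YARN's Docker runtime: volumes and capabilities are only enabled for registries listed in the cluster's docker.trusted.registries setting, which only the cluster administrators can change. The "pull access denied" part is the private-registry login; the stock Hadoop mechanism for that is to hand YARN a Docker client config with your credentials. A hedged sketch below, assuming the cluster exposes the standard YARN Docker runtime settings; the HDFS path is a placeholder:

```
# On a machine where you can run docker, log in once to create ~/.docker/config.json
docker login registry.hub.docker.com

# Put that client config somewhere the cluster can read it, e.g. your HDFS home
hdfs dfs -put ~/.docker/config.json /user/<your-user>/docker-config.json

# Point the YARN containers at it when submitting
spark-submit \
  --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG=hdfs:///user/<your-user>/docker-config.json \
  --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG=hdfs:///user/<your-user>/docker-config.json \
  ...
```

Even with the credentials in place, the registry would still need to be trusted by the cluster for volume mounts to work, so this may ultimately require contacting the Terrascope admins.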

Problems getting access to my VM with X2Go

Hello everybody, I am experiencing some issues getting access to my VM through X2Go. I have followed the steps on https://docs.terrascope.be/#/Developers/VirtualEnvironments/VirtualMachines?id=setting-up-x2go-to-use-the-key-pair but when I log in to start a session, I receive the error "The remote proxy closed the connection while negotiating the session. This may be due to the wrong authentication credentials passed to the server." I am, however, sure that I am using the right username and password because I have no issue getting access through PuTTY. I also tried to manually delete my X2Go session through PuTTY with x2goterminate-session, but I still receive the error.
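A small sketch of clearing stale X2Go sessions from a PuTTY/SSH login, in case a half-open session is what the proxy trips over. x2golistsessions and x2goterminate-session are the server-side X2Go commands; whether they are present on your VM is an assumption:

```
# List any sessions X2Go still thinks are running for your user
x2golistsessions

# Terminate a stale one, using the session id shown in the output above
x2goterminate-session <session-id>
```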

Installing software in a VM

Hello, is it possible to install other software on the VM if required? Thanks!
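In case it helps while waiting for an official answer: without sudo rights, a common pattern is to install into user space. A sketch only, assuming Python tooling and/or conda are available on the VM:

```
# Python packages go into your home directory, no root needed
pip install --user <package>

# Or create a self-contained conda environment for other software
conda create -n mytools -c conda-forge <package>
conda activate mytools
```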

Problems with jblas when running SNAP

Hello everyone,

I'm trying to run processes from the Sentinel-1 Toolbox of SNAP on the Terrascope VM, but I keep getting this or similar errors: `RuntimeError: Executing processing graph org.jblas.NativeBlas.dgemm(CCIIID[DII[DIID[DII)V`. A corresponding forum thread [1] suggests there is a problem with the SNAP version or the libgfortran3 package, so I updated SNAP to 8.0.5 and tried installing the available libgfortran packages (`sudo yum install -y libgfortran.x86_64 libgfortran-static.x86_64 libgfortran4.x86_64 libgfortran5.x86_64`), but it didn't help.

This happens when I run a coherence process via the SNAP GUI, and also when I run SNAP via the gpt tool of the Python pyroSAR package.

I'm wondering if anyone has experience with the S1TBX on the VM, or whether this error has occurred before. If not, it would be greatly appreciated if someone could take the time to try to reproduce it. I put my workflow here [2]. When I load this graph into the graph tool of SNAP, it doesn't run and throws an error for the coherence process (see the error message above). Does it do that on other machines as well?

Thank you in advance and best regards,

Jonathan

[1] https://forum.step.esa.int/t/error-in-sar-image-corregistration-error-nodeld-createstack-org-jblas-nativeblas-dgemm-cciiid-dii-diid-dii-v/12023
[2] https://github.com/jonathom/coherence-docs/blob/master/SNAP_workflow.xml
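In case it helps anyone hitting the same dgemm error: jblas loads a specific libgfortran soname (typically libgfortran.so.3), so a useful first step is to check which versions the VM actually exposes. This is a diagnostic sketch only, not a guaranteed fix, and the paths assume a CentOS/RHEL-style layout:

```
# Which libgfortran builds does the dynamic loader know about?
ldconfig -p | grep libgfortran

# Check explicitly for the .so.3 version that jblas is usually linked against
ls -l /usr/lib64/libgfortran.so.* 2>/dev/null

# If a copy of libgfortran.so.3 can be obtained and placed in a user directory,
# it can be made visible to SNAP/gpt for the current shell like this:
# export LD_LIBRARY_PATH=$HOME/lib:$LD_LIBRARY_PATH
```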

Locate pg_config to install pyroSAR

Hello all, I am trying to install the pyroSAR package via `pip install --user pyroSAR` to quickly process Sentinel-1 SLC data via SNAP in a Terrascope VM. The installation produces the following error:

```
Error: pg_config executable not found.

pg_config is required to build psycopg2 from source.  Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:

    python setup.py build_ext --pg-config /path/to/pg_config build ...

or with the pg_config option in 'setup.cfg'.

If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
```

Installing psycopg2-binary didn't lead anywhere, so I'll have to add the path to `pg_config`. Except I can't find it anywhere (also due to permission restrictions) and am not even sure it is installed. Could you help me identify the file? If the file isn't there, forum people [1] suggest installing libpq-dev, but I don't have the rights to install packages, do I?

Hope you can help me with this,
Jonathan

[1] https://stackoverflow.com/questions/11618898/pg-config-executable-not-found
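A few ways to check for pg_config and work around its absence without root rights; this is a sketch under assumptions (conda being available on the VM), not the official Terrascope answer:

```
# Is pg_config on the PATH, or installed anywhere in the usual system locations?
command -v pg_config
find /usr -name pg_config 2>/dev/null

# If it is genuinely absent and libpq-dev/postgresql-devel cannot be installed,
# one user-space route is to get PostgreSQL's client tools from conda-forge,
# which ships pg_config, and install pyroSAR inside that environment:
# conda create -n pyrosar -c conda-forge python=3 postgresql
# conda activate pyrosar
# pip install pyroSAR
```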

Copernicus High Resolution Vegetation Phenology and Productivity

Will the Copernicus Biophysical Parameters High Resolution Vegetation Phenology and Productivity products be available on Terrascope VMs? I am particularly interested in the HR-VPP Seasonal Trajectories and could not find it in the available datasets.

About web browser

Hello, how can I make Firefox run smoothly on the machine? It works very slowly, and scrolling the screen in the browser is also very slow. Thank you.
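A quick way to check whether the slowness comes from the VM itself rather than from the remote-display session; this is a diagnostic sketch, assuming you reach the VM over X2Go or a similar remote desktop:

```
# Check CPU load and whether Firefox is pegging a core
top -b -n 1 | head -20

# Check whether the VM is low on memory and swapping
free -h

# A full root disk can also make the whole session sluggish
df -h /
```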

Using SSH key authentication for Virtual Machines

It is highly recommended to use SSH key authentication on your user VM. Please read how to do this here: https://docs.terrascope.be/#/Developers/VirtualEnvironments/VirtualMachines?id=using-ssh-key-authentication
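A minimal sketch of setting up key-based login from your own machine; see the Terrascope documentation linked above for the exact procedure, and note that the user and hostname below are placeholders:

```
# Generate a key pair locally (keep the private key on your own machine)
ssh-keygen -t ed25519 -f ~/.ssh/terrascope_vm

# Install the public key on the VM, then log in with the key
ssh-copy-id -i ~/.ssh/terrascope_vm.pub <user>@<your-vm-hostname>
ssh -i ~/.ssh/terrascope_vm <user>@<your-vm-hostname>
```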