
Docker image from private Docker Hub

Hello, I created a Docker image following the guide and uploaded it to a private Docker Hub registry account. When I try to submit a script via spark-submit, I get an error with the message below. With a public image I don't get this problem, so I guess I need to enter my login credentials somewhere. Is it possible to do this, or do I need to upload my images somewhere else?

    Application report for application_1640081147608_3034 (state: FAILED)
    21/12/22 14:22:32 INFO Client:
         client token: N/A
         diagnostics: Application application_1640081147608_3034 failed 1 times (global limit =2; local limit is =1)
         due to AM Container for appattempt_1640081147608_3034_000001 exited with exitCode: 7
    Failing this attempt.Diagnostics: [2021-12-22 14:22:26.758]Exception from container-launch.
    Container id: container_e5006_1640081147608_3034_01_000001
    Exit code: 7
    Exception message: Launch container failed
    Shell error output: image: registry.hub.docker.com/flucio/gedap is not trusted.
    Disable mount volume for untrusted image
    image: registry.hub.docker.com/flucio/gedap is not trusted.
    Disable mount volume for untrusted image
    image: registry.hub.docker.com/flucio/gedap is not trusted.
    Disable cap-add for untrusted image
    Docker capability disabled for untrusted image
    Unable to find image 'registry.hub.docker.com/flucio/gedap:latest' locally
    /usr/bin/docker: Error response from daemon: pull access denied for registry.hub.docker.com/flucio/gedap,
    repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
    See '/usr/bin/docker run --help'.
    Shell output: main : command provided 4
    main : run as user is luciof
    main : requested yarn user is luciof
    Creating script paths...
    Creating local dirs...
    Getting exit code file...
    Changing effective user to root...
    Wrote the exit code 7 to /data3/hadoop/yarn/local/nmPrivate/application_1640081147608_3034/container_e5006_1640081147608_3034_01_000001/container_e5006_1640081147608_3034_01_000001.pid.exitcode
    [2021-12-22 14:22:26.780]Container exited with a non-zero exit code 7.
    [2021-12-22 14:22:26.782]Container exited with a non-zero exit code 7.
    For more detailed output, check the application tracking page:
    https://epod-master2.vgt.vito.be:8090/cluster/app/application_1640081147608_3034
    Then click on links to logs of each attempt. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1640179342928
         final status: FAILED
         tracking URL: https://epod-master2.vgt.vito.be:8090/cluster/app/application_1640081147608_3034
         user: luciof
    21/12/22 14:22:32 INFO Client: Deleted staging directory hdfs://hacluster/user/luciof/.sparkStaging/application_1640081147608_3034
    Exception in thread "main" org.apache.spark.SparkException: Application application_1640081147608_3034 finished with failed status
            at org.apache.spark.deploy.yarn.Client.run(Client.scala:1269)
            at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1627)
            at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
            at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
            at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
            at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
            at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
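For context, I submit the job roughly like this. This is a sketch: the script path is a placeholder, and the YARN_CONTAINER_RUNTIME_* properties are the standard Spark-on-YARN Docker settings from the Hadoop/Spark docs, not anything Terrascope-specific. The Hadoop documentation also describes passing registry credentials via a Docker client config on HDFS, but I don't know whether this cluster supports it:

    # Sketch of the submit command (image name as in the log; script path is a placeholder)
    spark-submit \
      --master yarn --deploy-mode cluster \
      --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE=docker \
      --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=registry.hub.docker.com/flucio/gedap:latest \
      --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_TYPE=docker \
      --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=registry.hub.docker.com/flucio/gedap:latest \
      my_script.py

    # Hadoop's documented mechanism for private registries (assuming the cluster allows it):
    # store the credentials written by 'docker login' on HDFS and reference them per job.
    docker login registry.hub.docker.com
    hdfs dfs -put ~/.docker/config.json hdfs:///user/luciof/docker-config.json
    # then add to spark-submit:
    #   --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG=hdfs:///user/luciof/docker-config.json
    #   --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG=hdfs:///user/luciof/docker-config.json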

Comments

Dear Luciof,

Due to security restrictions, we only allow trusted registries on our processing cluster. We will contact you in person to work out a suitable solution for your project.
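For background, the repeated "is not trusted" messages in the log come from YARN's Docker runtime: each NodeManager's container-executor.cfg carries a whitelist of registries, and images pulled from any other registry lose volume mounts and added capabilities. A sketch of the standard Hadoop setting follows; the registry names are illustrative examples, not our actual configuration:

    # container-executor.cfg on the NodeManagers (admin-managed; illustrative values only)
    [docker]
      module.enabled=true
      docker.binary=/usr/bin/docker
      # Comma-separated whitelist; images from registries not listed here are "not trusted",
      # so YARN disables mount volumes and cap-add for them, as seen in the log above.
      docker.trusted.registries=local,library,registry.example.vito.be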